United States Environmental Protection Agency
Solid Waste and Emergency Response (5305W)
EPA530-D-02-002
August 2002
www.epa.gov/osw

RCRA Waste Sampling Draft Technical Guidance
Planning, Implementation, and Assessment

Office of Solid Waste
U.S. Environmental Protection Agency
Washington, DC 20460
DISCLAIMER

The United States Environmental Protection Agency's Office of Solid Waste (EPA or the Agency) has prepared this draft document to provide guidance to project planners, field personnel, data users, and other interested parties regarding sampling for the evaluation of solid waste under the Resource Conservation and Recovery Act (RCRA).

EPA does not make any warranty or representation, expressed or implied, with respect to the accuracy, completeness, or usefulness of the information contained in this report. EPA does not assume any liability with respect to the use of, or for damages resulting from the use of, any information, apparatus, method, or process disclosed in this report. Reference to trade names or specific commercial products, commodities, or services in this report does not represent or constitute an endorsement, recommendation, or favoring by EPA of the specific commercial product, commodity, or service.

In addition, the policies set out in this document are not final Agency action, but are intended solely as guidance. They are not intended, nor can they be relied upon, to create any rights enforceable by any party in litigation with the United States. EPA officials may decide to follow the guidance provided in this document, or to act at variance with the guidance, based on an analysis of specific site or facility circumstances. The Agency also reserves the right to change this guidance at any time without public notice.
ACKNOWLEDGMENTS

Development of this document was funded, wholly or in part, by the United States Environmental Protection Agency (U.S. EPA) under Contract Nos. 68-W6-0068 and 68-W-00-122. It has been reviewed by EPA and approved for publication. It was developed under the direction of Mr. Oliver M. Fordham, Office of Solid Waste (OSW), and Kim Kirkland (OSW) in collaboration with Dr. Brian A. Schumacher, Office of Research and Development (ORD). This document was prepared by Mr. Robert B. Stewart, Science Applications International Corporation (SAIC). Additional writers included Dr. Kirk Cameron (MacStat Consulting, Ltd.), Dr. Larry P. Jackson (Environmental Quality Management), Dr. John Maney (Environmental Measurements Assessment Co.), Ms. Jennifer Bramlett (SAIC), and Mr. Oliver M. Fordham (U.S. EPA).

EPA gratefully acknowledges the contributions of the technical reviewers involved in this effort, including the following:

U.S. EPA Program Offices: Deana Crumbling, TIO; Evan Englund, ORD; George Flatman, ORD; Joan Fisk, OERR; David Friedman, ORD; Chris Gaines, OW; Gail Hansen, OSW; Barnes Johnson, OSW; Joe Lowry, NEIC; John Nocerino, ORD; Brian A. Schumacher, ORD; Jim Thompson, OECA; Jeff Van Ee, ORD; Brad Venner, NEIC; John Warren, OEI

U.S. EPA Regions: Dan Granz, Region I; Bill Cosgrove, Region IV; Mike Neill, Region IV; Judy Sophianopoulos, Region IV; Brian Freeman, Region V; Gene Keepper, Region VI; Gregory Lyssy, Region VI; Bill Gallagher, Region VI; Deanna Lacy, Region VI; Maria Martinez, Region VI; Walt Helmick, Region VI; Charles Ritchey, Region VI; Terry Sykes, Region VI; Stephanie Doolan, Region VII; Dedriel Newsome, Region VII; Tina Diebold, Region VIII; Mike Gansecki, Region VIII; Roberta Hedeen, Region X; Mary Queitzsch, Region X

ASTM Subcommittee D-34: Brian M. Anderson, SCA Services; Eric Chai, Shell; Alan B. Crockett, INEL; Jim Frampton, CA DTSC; Susan Gagner, LLNL; Alan Hewitt, CRREL; Larry Jackson, EQM; John Maney, EMA

Other Organizations: Jeffrey Farrar, U.S. Bureau of Reclamation; Jeff Myers, Westinghouse SMS; Rock Vitale, Environmental Standards; Ann Strahl, Texas NRCC
CONTENTS

1  INTRODUCTION
   1.1  What Will I Find in This Guidance Document?
   1.2  Who Can Use This Guidance Document?
   1.3  Does This Guidance Document Replace Other Guidance?
   1.4  How Is This Document Organized?

2  SUMMARY OF RCRA REGULATORY DRIVERS FOR WASTE SAMPLING AND ANALYSIS
   2.1  Background
   2.2  Sampling For Regulatory Compliance
        2.2.1  Making a Hazardous Waste Determination
        2.2.2  Land Disposal Restrictions (LDR) Program
        2.2.3  Other RCRA Regulations and Programs That May Require Sampling and Testing
        2.2.4  Enforcement Sampling and Analysis

3  FUNDAMENTAL STATISTICAL CONCEPTS
   3.1  Populations, Samples, and Distributions
        3.1.1  Populations and Decision Units
        3.1.2  Samples and Measurements
        3.1.3  Distributions
   3.2  Measures of Central Tendency, Variability, and Relative Standing
        3.2.1  Measures of Central Tendency
        3.2.2  Measures of Variability
        3.2.3  Measures of Relative Standing
   3.3  Precision and Bias
   3.4  Using Sample Analysis Results to Classify a Waste or to Determine Its Status Under RCRA
        3.4.1  Using an Average To Determine Whether a Waste or Media Meets the Applicable Standard
        3.4.2  Using a Proportion or Percentile To Determine Whether a Waste or Media Meets an Applicable Standard
               3.4.2.1  Using a Confidence Limit on a Percentile to Classify a Waste or Media
               3.4.2.2  Using a Simple Exceedance Rule Method To Classify a Waste
        3.4.3  Comparing Two Populations
        3.4.4  Estimating Spatial Patterns

4  PLANNING YOUR PROJECT USING THE DQO PROCESS
   4.1  Step 1: State the Problem
        4.1.1  Identify Members of the Planning Team
        4.1.2  Identify the Primary Decision Maker
        4.1.3  Develop a Concise Description of the Problem
   4.2  Step 2: Identify the Decision
        4.2.1  Identify the Principal Study Question
        4.2.2  Define the Alternative Actions That Could Result from Resolution of the Principal Study Question
        4.2.3  Develop a Decision Statement
        4.2.4  Organize Multiple Decisions
   4.3  Step 3: Identify Inputs to the Decision
        4.3.1  Identify the Information Required
        4.3.2  Determine the Sources of Information
        4.3.3  Identify Information Needed To Establish the Action Level
        4.3.4  Confirm That Sampling and Analytical Methods Exist That Can Provide the Required Environmental Measurements
   4.4  Step 4: Define the Study Boundaries
        4.4.1  Define the Target Population of Interest
        4.4.2  Define the Spatial Boundaries
        4.4.3  Define the Temporal Boundary of the Problem
        4.4.4  Identify Any Practical Constraints on Data Collection
        4.4.5  Define the Scale of Decision Making
   4.5  Step 5: Develop a Decision Rule
        4.5.1  Specify the Parameter of Interest
        4.5.2  Specify the Action Level for the Study
        4.5.3  Develop a Decision Rule
   4.6  Step 6: Specify Limits on Decision Errors
        4.6.1  Determine the Possible Range on the Parameter of Interest
        4.6.2  Identify the Decision Errors and Choose the Null Hypothesis
        4.6.3  Specify a Range of Possible Parameter Values Where the Consequences of a False Acceptance Decision Error Are Relatively Minor (Gray Region)
        4.6.4  Specify an Acceptable Probability of Making a Decision Error
   4.7  Outputs of the First Six Steps of the DQO Process

5  OPTIMIZING THE DESIGN FOR OBTAINING THE DATA
   5.1  Review the Outputs of the First Six Steps of the DQO Process
   5.2  Consider Data Collection Design Options
        5.2.1  Simple Random Sampling
        5.2.2  Stratified Random Sampling
        5.2.3  Systematic Sampling
        5.2.4  Ranked Set Sampling
        5.2.5  Sequential Sampling
        5.2.6  Authoritative Sampling
               5.2.6.1  Judgmental Sampling
               5.2.6.2  Biased Sampling
   5.3  Composite Sampling
        5.3.1  Advantages and Limitations of Composite Sampling
        5.3.2  Basic Approach To Composite Sampling
        5.3.3  Composite Sampling Designs
               5.3.3.1  Simple Random Composite Sampling
               5.3.3.2  Systematic Composite Sampling
        5.3.4  Practical Considerations for Composite Sampling
        5.3.5  Using Composite Sampling To Obtain a More Precise Estimate of the Mean
        5.3.6  Using Composite Sampling To Locate Extreme Values or "Hot Spots"
   5.4  Determining the Appropriate Number of Samples Needed To Estimate the Mean
        5.4.1  Number of Samples to Estimate the Mean: Simple Random Sampling
        5.4.2  Number of Samples to Estimate the Mean: Stratified Random Sampling
               5.4.2.1  Optimal Allocation
               5.4.2.2  Proportional Allocation
        5.4.3  Number of Samples to Estimate the Mean: Systematic Sampling
        5.4.4  Number of Samples to Estimate the Mean: Composite Sampling
   5.5  Determining the Appropriate Number of Samples to Estimate a Percentile or Proportion
        5.5.1  Number of Samples To Test a Proportion: Simple Random or Systematic Sampling
        5.5.2  Number of Samples When Using a Simple Exceedance Rule
   5.6  Selecting the Most Resource-Effective Design
   5.7  Preparing a QAPP or WAP
        5.7.1  Project Management
        5.7.2  Measurement/Data Acquisition
        5.7.3  Assessment/Oversight
        5.7.4  Data Validation and Usability
        5.7.5  Data Assessment

6  CONTROLLING VARIABILITY AND BIAS IN SAMPLING
   6.1  Sources of Random Variability and Bias in Sampling
   6.2  Overview of Sampling Theory
        6.2.1  Heterogeneity
        6.2.2  Types of Sampling Error
               6.2.2.1  Fundamental Error
               6.2.2.2  Grouping and Segregation Error
               6.2.2.3  Increment Delimitation Error
               6.2.2.4  Increment Extraction Error
               6.2.2.5  Preparation Error
        6.2.3  The Concept of "Sample Support"
   6.3  Practical Guidance for Reducing Sampling Error
        6.3.1  Determining the Optimal Mass of a Sample
        6.3.2  Obtaining the Correct Shape and Orientation of a Sample
               6.3.2.1  Sampling of a Moving Stream of Material
               6.3.2.2  Sampling of a Stationary Batch of Material
        6.3.3  Selecting Sampling Devices That Minimize Sampling Errors
               6.3.3.1  General Performance Goals for Sampling Tools and Devices
               6.3.3.2  Use and Limitations of Common Devices
        6.3.4  Special Considerations for Sampling Waste and Soils for Volatile Organic Compounds

7  IMPLEMENTATION: SELECTING EQUIPMENT AND CONDUCTING SAMPLING
   7.1  Selecting Sampling Tools and Devices
        7.1.1  Step 1: Identify the Waste Type or Medium to Be Sampled
        7.1.2  Step 2: Identify the Site or Point of Sample Collection
               7.1.2.1  Drums and Sacks or Bags
               7.1.2.2  Surface Impoundments
               7.1.2.3  Tanks
               7.1.2.4  Pipes, Point Source Discharges, or Sampling Ports
               7.1.2.5  Storage Bins, Roll-Off Boxes, or Collection Hoppers
               7.1.2.6  Waste Piles
               7.1.2.7  Conveyors
               7.1.2.8  Structures and Debris
               7.1.2.9  Surface or Subsurface Soil
        7.1.3  Step 3: Consider Device-Specific Factors
               7.1.3.1  Sample Type
               7.1.3.2  Sample Volume
               7.1.3.3  Other Device-Specific Considerations
        7.1.4  Step 4: Select the Sampling Device
   7.2  Conducting Field Sampling Activities
        7.2.1  Selecting Sample Containers
        7.2.2  Sample Preservation and Holding Times
        7.2.3  Documentation of Field Activities
        7.2.4  Field Quality Control Samples
        7.2.5  Sample Identification and Chain-of-Custody Procedures
        7.2.6  Decontamination of Equipment and Personnel
        7.2.7  Health and Safety Considerations
        7.2.8  Sample Packaging and Shipping
               7.2.8.1  Sample Packaging
               7.2.8.2  Sample Shipping
   7.3  Using Sample Homogenization, Splitting, and Subsampling Techniques
        7.3.1  Homogenization Techniques
        7.3.2  Sample Splitting
        7.3.3  Subsampling
               7.3.3.1  Subsampling Liquids
               7.3.3.2  Subsampling Mixtures of Liquids and Solids
               7.3.3.3  Subsampling Soils and Solid Media

8  ASSESSMENT: ANALYZING AND INTERPRETING DATA
   8.1  Data Verification and Validation
        8.1.1  Sampling Assessment
               8.1.1.1  Sampling Design
               8.1.1.2  Sampling Methods
               8.1.1.3  Sample Handling and Custody Procedures
               8.1.1.4  Documentation
               8.1.1.5  Control Samples
        8.1.2  Analytical Assessment
               8.1.2.1  Analytical Data Verification
               8.1.2.2  Analytical Data Validation (Evaluation)
   8.2  Data Quality Assessment
        8.2.1  Review the DQOs and the Sampling Design
        8.2.2  Prepare Data for Statistical Analysis
        8.2.3  Conduct Preliminary Review of the Data and Check Statistical Assumptions
               8.2.3.1  Statistical Quantities
               8.2.3.2  Checking Data for Normality
               8.2.3.3  How To Assess "Outliers"
        8.2.4  Select and Perform Statistical Tests
               8.2.4.1  Data Transformations in Statistical Tests
               8.2.4.2  Treatment of Nondetects
        8.2.5  Draw Conclusions and Report Results

Appendix A: Glossary of Terms
Appendix B: Summary of RCRA Regulatory Drivers for Conducting Waste Sampling and Analysis
Appendix C: Strategies for Sampling Heterogeneous Wastes
Appendix D: A Quantitative Approach for Controlling Fundamental Error
Appendix E: Sampling Devices
Appendix F: Statistical Methods
Appendix G: Statistical Tables
Appendix H: Statistical Software
Appendix I: Examples of Planning, Implementation, and Assessment for RCRA Waste Sampling
Appendix J: Summary of ASTM Standards

References

Index
LIST OF ACRONYMS

AL      Action Level
ASTM    American Society for Testing and Materials
BDAT    Best Demonstrated Available Technology
BIF     Boiler and Industrial Furnace
CERCLA  Comprehensive Environmental Response, Compensation, and Liability Act
CFR     Code of Federal Regulations
DOT     Department of Transportation
DQA     Data Quality Assessment
DQO     Data Quality Objective
EA      Exposure area
FR      Federal Register
HWIR    Hazardous Waste Identification Rule (waste)
IATA    International Air Transport Association
ICR     Ignitability, Corrosivity, and Reactivity
IDW     Investigation-derived waste
LCL     Lower confidence limit
LDR     Land Disposal Restrictions
ORD     Office of Research and Development
OSHA    Occupational Safety and Health Administration
OSW     Office of Solid Waste
PBMS    Performance-based measurement system
ppm     Parts per million
QAD     Quality Assurance Division
QAPP    Quality Assurance Project Plan
QA/QC   Quality Assurance/Quality Control
RCRA    Resource Conservation and Recovery Act
RT      Regulatory Threshold
SOP     Standard operating procedure
SWMU    Solid waste management unit
TC      Toxicity Characteristic
TCLP    Toxicity Characteristic Leaching Procedure
TSDF    Treatment, storage, or disposal facility
UCL     Upper confidence limit
USEPA   U.S. Environmental Protection Agency (we, us, our, EPA, the Agency)
UTS     Universal Treatment Standard
VOC     Volatile organic compound
WAP     Waste analysis plan
¹ If a solid waste is not excluded from regulation under 40 CFR 261, then a generator must determine whether the waste exhibits any of the characteristics of hazardous waste. A generator may determine if a waste exhibits a characteristic either by testing the waste or applying knowledge of the waste, the raw materials, and the processes used in its generation.
[Figure 1. QA Planning and the Data Life Cycle (after USEPA 1998a). The figure depicts a cycle of three phases: PLANNING (Data Quality Objectives Process; Quality Assurance Project Plan or Waste Analysis Plan), IMPLEMENTATION (Field Sample Collection, Sample Analysis, and Associated Quality Assurance/Quality Control Activities), and ASSESSMENT (Data Verification & Validation, Data Quality Assessment, Conclusions Drawn from Data).]
1  INTRODUCTION

1.1  What Will I Find in This Guidance Document?

You'll find recommended procedures for sampling solid waste under the Resource Conservation and Recovery Act (RCRA). The regulated and regulatory communities can use this guidance to develop sampling plans to determine if (1) a solid waste exhibits any of the characteristics of a hazardous waste,¹ (2) a hazardous waste is prohibited from land disposal, and (3) a numeric treatment standard has been met. You also can use information in this document, along with that found in other guidance documents, to meet other sampling objectives such as site characterization under the RCRA corrective action program.

This guidance document steps you through the three phases of the sampling and analysis process shown in Figure 1: planning, implementation, and assessment. Planning involves "asking the right questions." Using a systematic planning process such as the Data Quality Objectives (DQO) Process helps you do so. DQOs are the specifications you need to develop a plan for your project, such as a quality assurance project plan (QAPP) or a waste analysis plan (WAP). Implementation involves using the field sampling procedures and analytical methods specified in the plan and taking measures to control error that might be introduced along the way. Assessment is the final stage, in which you evaluate the results of the study in terms of the original objectives and make decisions regarding management or treatment of the waste.

1.2  Who Can Use This Guidance Document?

Any person who generates, treats, stores, or disposes of solid and hazardous waste and conducts sampling and analysis under RCRA can use the information in this guidance document. For the development of a technically sound sampling and project plan, seek competent advice during the initial stages of project design. This is particularly true in the early developmental stages of a sampling plan, when planners need to understand basic statistical concepts, how to establish objectives, and how the results of the project will be evaluated.

This document is a practical guide, and many examples are included throughout the text to demonstrate how to apply the guidance. In addition, we have included a comprehensive glossary of terms in Appendix A to help you with any unfamiliar terminology. We encourage you to review other documents referenced in the text, especially those related to the areas of sampling theory and practice and the statistical analysis of environmental data.

1.3  Does This Guidance Document Replace Other Guidance?

EPA prepared this guidance document to update technical information contained in other sources of EPA guidance, such as Chapter Nine, "Sampling Plan," found in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, EPA publication SW-846 (1986a). This draft guidance document does not replace SW-846 Chapter Nine, nor does it create, amend, or otherwise alter any regulation. Since publication of SW-846 Chapter Nine, EPA has published a substantial body of additional sampling and statistical guidance documents that support waste and site characterization under both RCRA and the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or "Superfund." Most of these guidance documents, which focus on specific Agency regulations or program initiatives, should continue to be used, as appropriate. Relevant EPA guidance documents, other references, and resources are identified in Appendix B and throughout this document.

In addition to the RCRA program-specific guidance documents issued by EPA's Office of Solid Waste (OSW), EPA's Office of Environmental Information's Quality Staff has developed policy for quality assurance, guidance documents and software tools, and provides training and outreach. For example, the Quality Staff have issued guidance on the following key topic areas:

•  The data quality objectives process (USEPA 2000a, 2000b, and 2001a)

•  Preparation of quality assurance project plans (USEPA 1998a and 2001b) and sampling plans (2000c)

•  Verification and validation of environmental data (USEPA 2001c)

•  Data quality assessment (USEPA 2000d).

Information about EPA's Quality System and QA procedures and policies can be found on the World Wide Web at http://www.epa.gov/quality/.

If you require additional information, you should review these documents and others cited in this document. In the future, EPA may issue additional supplemental guidance supporting other regulatory initiatives.

Finally, other organizations, including EPA Regions, States, the American Society for Testing and Materials (ASTM), the Department of Defense (e.g., the Air Force Center for Environmental Excellence), and the Department of Energy, have developed a wide range of relevant guidance and methods. Consult these resources for further assistance, as necessary.

1.4  How Is This Document Organized?

As previously indicated in Figure 1, this guidance document covers the three components of a sampling and analysis program: planning, implementation, and assessment. Even though the process is pictured in a linear format, in practice a sampling program should include feedback between the various components. You should review and analyze data as they are collected so you can determine whether the data satisfy the objectives of the study, decide whether the approach or objectives need to be revised or refined, and make reasoned and intelligent decisions.

The remaining sections of this guidance document address specific topics pertaining to various components of a sampling program. These sections include the following:

Section 2 - Summary of RCRA Regulatory Drivers for Waste Sampling and Analysis -- This section identifies and summarizes the major RCRA programs that specify some sort of sampling and testing to determine whether a waste is a hazardous waste, whether a hazardous waste treatment standard has been attained, and to make other determinations.

Section 3 - Fundamental Statistical Concepts -- This section provides an overview of fundamental statistical concepts and how sample analysis results can be used to classify a waste or determine its status under RCRA. The section serves as a refresher for those familiar with basic statistics. In those cases where you require more advanced techniques, seek the assistance of a professional environmental statistician. Detailed guidance on the selection and use of statistical methods is provided in Section 8 and Appendix F.
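To give a concrete flavor of the kind of calculation these statistical sections support, the sketch below (entirely hypothetical data, not part of the guidance) computes a one-sided 90% upper confidence limit (UCL) on a mean concentration and compares it to an action level, in the spirit of Section 3.4.1:

```python
import statistics

# Hypothetical TCLP lead results (mg/L) from 10 samples of a waste
data = [5.2, 6.1, 4.8, 7.0, 5.5, 6.3, 5.9, 4.6, 6.8, 5.4]

n = len(data)
mean = statistics.mean(data)   # sample mean
sd = statistics.stdev(data)    # sample standard deviation (n - 1 denominator)

# One-sided Student's t quantile for 90% confidence and n - 1 = 9 degrees
# of freedom, taken from a standard t table
t_90 = 1.383

# 90% upper confidence limit on the mean
ucl90 = mean + t_90 * sd / n ** 0.5

# Regulatory level for lead under the Toxicity Characteristic (40 CFR 261.24)
action_level = 5.0  # mg/L

print(f"mean = {mean:.2f} mg/L, 90% UCL = {ucl90:.2f} mg/L")
if ucl90 >= action_level:
    print("Cannot conclude the waste meets the standard at 90% confidence.")
else:
    print("The mean is below the action level with 90% confidence.")
```

Here the UCL (about 6.11 mg/L) exceeds the 5.0 mg/L level, so the data would not support classifying the waste as nonhazardous for lead. The guidance's own formulas and tables (Section 5.4, Appendices F and G) should be used for real determinations.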

Section 4 - Planning Your Project Using the DQO Process -- The first phase of sampling involves development of DQOs using the DQO Process or a similar structured, systematic planning process. The DQOs provide statements about the expectations and requirements of the data user (such as the decision maker).

Section 5 - Optimizing the Design for Obtaining the Data -- This section describes how to link the results of the DQO Process with the development of the QAPP. You optimize the sampling design to control sampling errors within acceptable limits and minimize costs while continuing to meet the sampling objectives. You document the output of the DQO Process in a QAPP, WAP, or similar planning document. Here is where you translate the data requirements into measurement performance specifications and QA/QC procedures.
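One routine calculation during this optimization step is estimating how many samples are needed to estimate a mean to a desired precision (the subject of Section 5.4). A minimal sketch, using a normal-approximation formula and hypothetical planning inputs; the guidance refines this with Student's t and iterative adjustment:

```python
import math

def samples_for_mean(sd_estimate: float, margin: float, z: float = 1.645) -> int:
    """Approximate number of simple random samples needed so that the mean
    is estimated to within +/- `margin` at the confidence level implied by
    `z` (1.645 corresponds to one-sided 95%). Normal approximation only;
    small-sample designs should substitute the Student's t quantile."""
    return math.ceil((z * sd_estimate / margin) ** 2)

# Hypothetical inputs: estimated standard deviation of 2.0 mg/kg from a
# pilot study, and a desired margin of error of 1.0 mg/kg
n = samples_for_mean(2.0, 1.0)
print(n)  # -> 11
```

Note how the required number of samples grows with the square of the variability-to-precision ratio: halving the margin of error to 0.5 mg/kg quadruples the sample count.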

Section 6 - Controlling Variability and Bias in Sampling -- In this section, we recognize that random variability and bias (collectively known as "error") in sampling account for a significant portion of the total error in the sampling and analysis process -- far outweighing typical analytical error. To address this concern, the section describes the sources of error in sampling and offers some strategies for minimizing those errors.
Section 7 - Implementation: Selecting Equipment and Conducting Sampling -- In this section, we describe the steps for selecting sampling equipment based on the physical and chemical characteristics of the media to be sampled and the type of RCRA unit or location from which the samples will be obtained. The section provides guidance on field sampling activities, such as documentation, chain-of-custody procedures, decontamination, and sample packaging and shipping. Finally, guidance is provided on sample homogenization (or mixing), splitting, and subsampling.

Section 8 - Assessment: Analyzing and Interpreting Data -- Once you have obtained the data in accordance with the elements of the QAPP or WAP, you should evaluate the data to determine whether you have satisfied the DQOs. Section 8 describes the data quality assessment (DQA) process and the statistical analysis of waste-sampling data.

Appendix A - Glossary of Terms -- This appendix comprises a glossary of terms that are used in this document.

Appendix B - Summary of RCRA Regulatory Drivers for Conducting Waste Sampling and Analysis -- An overview of the RCRA regulatory requirements and other citations related to waste sampling and testing is provided in this appendix.

Appendix C - Strategies for Sampling Heterogeneous Wastes -- The heterogeneity of a waste or media plays an important role in how you collect and handle samples and what type of sampling design you use. This appendix provides a supplemental discussion of large-scale heterogeneity of waste and its impact on waste-sampling strategies. Various types of large-scale heterogeneity are identified, and techniques are described for stratifying a waste stream based on heterogeneity. Stratified sampling can be a cost-effective approach for sampling and analysis of heterogeneous wastes.

Appendix D - A Quantitative Approach for Controlling Fundamental Error -- The mass of a sample can influence our ability to obtain reproducible analytical results. This appendix provides an approach for determining the appropriate mass of a sample of particulate material using information about the size and shape of the particles.

Appendix E - Sampling Devices -- This appendix provides descriptions of recommended sampling devices. For each type of sampling device, information is provided in a uniform format that includes a brief description of the device and its use, advantages and limitations of the device, and a figure to indicate the general design of the device. Each summary also identifies sources of other guidance on each device, particularly any relevant ASTM standards.

Appendix F - Statistical Methods -- This appendix provides statistical guidance for the analysis of data generated in support of a waste-testing program under RCRA.

Appendix G - Statistical Tables -- A series of statistical tables needed to perform the statistical tests used in this guidance document is presented here.

Appendix H - Statistical Software -- A list of statistical software, including "freeware" (no-cost software), that you might find useful in implementing the statistical methods outlined in this guidance document is contained in this appendix, as are Internet addresses at which you can download no-cost software.

Appendix I - Examples of Planning, Implementation, and Assessment for RCRA Waste Sampling -- Two hypothetical examples of how to apply the planning, implementation, and assessment guidance provided in this guidance document are provided here.

Appendix J - Summaries of ASTM Standards -- This appendix provides summaries of ASTM standards related to waste sampling and referenced in this document.
2
SUMMARY
OF
RCRA
REGULATORY
DRIVERS
FOR
WASTE
SAMPLING
AND
ANALYSIS
2.1
Background
Through
RCRA,
Congress
provided
EPA
with
the
framework
to
develop
regulatory
programs
for
the
management
of
solid
and
hazardous
waste.
The
provisions
of
RCRA
Subtitle
C
establish
the
criteria
for
identifying
hazardous
waste
and
managing
it
from
its
point
of
generation
to
ultimate
disposal.
EPA's
regulations
set
out
in
40
CFR
Parts
260
to
279
are
the
primary
source
for
the
requirements
of
the
hazardous
waste
program.
These
regulations
were
developed
over
a
period
of
25
years.
While
EPA's
approach
for
developing
individual
regulations
may
have
evolved
over
this
period,
the
current
RCRA
statute
and
codified
regulations
remain
the
standard
for
determining
compliance.

Many
of
the
RCRA
regulations
either
require
the
waste
handler
to
conduct
sampling
and
analysis,
or
they
include
provisions
under
which
sampling
and
analysis
can
be
performed
at
the
discretion
of
the
waste
handler.
If
the
regulations
require
sampling
and
analysis
of
a
waste
or
environmental
media,
then
any
regulatory
requirements
for
conducting
the
sampling
and
analysis
and
for
evaluating
the
results
must
be
followed.
Regardless
of
whether
there
are
regulatory
requirements
to
conduct
sampling,
some
waste
handlers
may
wish
to
conduct
a
sampling
program
that
allows
them
to
quantify
any
uncertainties
associated
with
their
waste
classification
decisions.
The
information
in
this
document
can
be
used
to
aid
in
the
planning
and
implementation
of
such
a
sampling
program.

Some
RCRA
regulations
do
not
specify
sampling
and
analysis
requirements
and/
or
do
not
specify
how
the
sample
analysis
results
should
be
evaluated.
In
many
cases,
this
is
because
EPA
realized
that
the
type,
quantity,
and
quality
of
data
needed
should
be
specified
on
a
site-specific
basis,
such
as
in
the
waste
analysis
plan
of
a
permitted
facility.
In
those
situations,
you
can
use
the
guidance
in
this
document
to
help
you
plan
and
implement
the
sampling
and
analysis
program,
evaluate
the
sample
analysis
results
against
the
regulatory
standards,
and
quantify
the
level
of
uncertainty
associated
with
the
decisions.

This
section
identifies
the
major
RCRA
programs
that
specify
some
sort
of
sampling
and
testing
to
determine
if
a
waste
is
a
hazardous
waste,
to
determine
if
a
hazardous
waste
treatment
standard
is
attained,
or
to
meet
other
objectives
such
as
site
characterization.
Table
1
provides
a
listing
of
these
major
RCRA
programs
that
may
require
waste
sampling
and
testing
as
part
of
their
implementation.
Appendix
B
provides
a
more
detailed
listing
of
the
regulatory
citations,
the
applicable
RCRA
standards,
requirements
for
demonstrating
attainment
or
compliance
with
the
standards,
and
relevant
USEPA
guidance
documents.

Prior
to
conducting
a
waste
sampling
and
testing
program
to
comply
with
RCRA,
review
the
specific
regulations
in
detail.
Consult
the
latest
40
CFR,
related
Federal
Register
notices,
and
EPA's
World
Wide
Web
site
(www.
epa.
gov)
for
new
or
revised
regulations.
In
addition,
because
some
states
have
requirements
that
differ
from
EPA
regulations
and
guidance,
we
recommend
that
you
consult
with
a
representative
from
your
State
if
your
State
is
authorized
to
implement
the
regulation.
Table 1. Major RCRA Program Areas Involving Waste Sampling and Analysis [1]

40 CFR Citation: Program Description

Hazardous Waste Identification
  § 261.3(a)(2)(v): Used oil rebuttable presumption (also Part 279, Subparts B, E, F, and G standards for the management of used oil)
  § 261.3(c)(2)(ii)(C): Generic exclusion levels for K061, K062, and F006 nonwastewater HTMR residues
  § 261.21: Characteristic of Ignitability
  § 261.22: Characteristic of Corrosivity
  § 261.23: Characteristic of Reactivity
  § 261.24: Toxicity Characteristic
  § 261.38(c)(8): Exclusion of Comparable Fuels from the Definition of Solid and Hazardous Waste
  Part 261, Appendix I: Representative Sampling Methods

Mixed Hazardous Waste
  Joint EPA-NRC sampling guidance. See November 20, 1997, Federal Register (62 FR 62079)

Land Disposal Restriction Program
  § 268.6: Petitions to Allow Land Disposal of a Waste Prohibited Under Subpart C of Part 268 (No-Migration Petition). Sampling and testing criteria are specified at § 268.6(b)(1) and (2).
  § 268.40: Land Disposal Restriction (LDR) concentration-level standards
  § 268.44: Land Disposal Restriction Treatability Variance
  § 268.49(c)(1): Alternative LDR Treatment Standards for Contaminated Soil

Other RCRA Programs and References
  § 260.10: Definitions (for Representative Sample)
  Part 260, Subpart C: Rulemaking Petitions
  Part 262, Subpart A: Generator Standards - General (including § 262.11 Hazardous Waste Determination)
  Part 262, Subpart C: Pre-Transport Requirements
  Part 264, Subpart A: Treatment, Storage, and Disposal Facility Standards - General
  Parts 264/265, Subpart B: Treatment, Storage, and Disposal Facility Standards - General Facility Standards
  Parts 264/265, Subpart F: Releases from Solid Waste Management Units (ground-water monitoring)
  Parts 264/265, Subpart G: Closure and Post-Closure
  Part 264, Subpart I: Use and Management of Containers
  Parts 264/265, Subpart J: Tank Systems
  Parts 264/265, Subpart M: Land Treatment
  Parts 264/265, Subpart O: Incinerators
  Part 264, Subpart S: Corrective Action for Solid Waste Management Units (including § 264.552 Corrective Action Management Units)
  Parts 264/265, Subparts AA/BB/CC: Air Emission Standards
  Part 266, Subpart H: Hazardous Waste Burned in Boilers and Industrial Furnaces (BIFs) (including § 266.112 Regulation of Residues)
  Part 270, Subpart B: Permit Application, Hazardous Waste Permitting
  Part 270, Subpart C: Conditions Applicable to All Permits
  Part 270, Subpart F: Special Forms of Permits
  Part 273: Standards for Universal Waste Management
  Part 279: Standards for the Management of Used Oil

[1] Expanded descriptions of the programs listed in Table 1 are given in Appendix B.
2.2
Sampling
For
Regulatory
Compliance
Many
RCRA
programs
involve
sampling
and
analysis
of
waste
or
environmental
media
by
the
regulated
community.
Sampling
and
analysis
often
is
employed
to
make
a
hazardous
waste
determination
(see
Section
2.2.1),
to
determine
if
a
waste
is
subject
to
treatment
or,
if
so,
has
been
adequately
treated
under
the
Land
Disposal
Restrictions
program
(see
Section
2.2.2),
or
in
responding
to
other
RCRA
programs
that
include
routine
monitoring,
unit
closure,
or
cleanup
(see
Section
2.2.3).

2.2.1
Making
a
Hazardous
Waste
Determination
Under
RCRA,
a
hazardous
waste
is
defined
as
a
solid
waste,
or
a
combination
of
solid
wastes
which,
because
of
its
quantity,
concentration,
or
physical,
chemical,
or
infectious
characteristics,
may
cause,
or
significantly
contribute
to
an
increase
in
mortality
or
an
increase
in
serious
irreversible
or
incapacitating
reversible
illness,
or
pose
a
substantial
present
or
potential
hazard
to
human
health
or
the
environment
when
improperly
treated,
stored,
transported,
disposed,
or
otherwise
managed.
The
regulatory
definition
of
a
hazardous
waste
is
found
in
40
CFR
§
261.3.

Solid
wastes
are
defined
by
regulation
as
hazardous
wastes
in
two
ways.
First,
solid
wastes
are
hazardous
wastes
if
EPA
lists
them
as
hazardous
wastes.
The
lists
of
hazardous
wastes
are
found
in
40
CFR
Part
261,
Subpart
D.
Second,
EPA
identifies
the
characteristics
of
a
hazardous
waste
based
on
criteria
in
40
CFR
§
261.10.
Accordingly,
solid
wastes
are
hazardous
if
they
exhibit
any
of
the
following
four
characteristics
of
a
hazardous
waste:
ignitability,
corrosivity,
reactivity,
or
toxicity
(based
on
the
results
of
the
Toxicity
Characteristic
Leaching
Procedure,
or
TCLP).
Descriptions
of
the
hazardous
waste
characteristics
are
found
in
40
CFR
Part
261,
Subpart
C.
Footnote 1:
Since
the
40
CFR
Part
261
Appendix
I
sampling
methods
are
not
formally
adopted
by
the
EPA
Administrator,
a
person
who
desires
to
employ
an
alternative
sampling
method
is
not
required
to
demonstrate
the
equivalency
of
his
or
her
method
under
the
procedures
set
forth
in
§§
260.20
and
260.21
(see
comment
at
§
261.20(
c)).

Generators
must
conduct
a
hazardous
waste
determination
according
to
the
hierarchy
specified
in
40
CFR
§
262.11.
Persons
who
generate
a
solid
waste
first
must
determine
if
the
solid
waste
is
excluded
from
the
definition
of
hazardous
waste
under
the
provisions
of
40
CFR
§
261.4.
Once
the
generator
determines
that
a
solid
waste
is
not
excluded,
then
he/
she
must
determine
if
the
waste
meets
one
or
more
of
the
hazardous
waste
listing
descriptions
and
determine
whether
the
waste
is
mixed
with
a
hazardous
waste,
is
derived
from
a
listed
hazardous
waste,
or
contains
a
hazardous
waste.

For
purposes
of
compliance
with
40
CFR
Part
268,
or
if
the
solid
waste
is
not
a
listed
hazardous
waste,
the
generator
must
determine
if
the
waste
exhibits
a
characteristic
of
a
hazardous
waste.
This
evaluation
involves
testing
the
waste
or
using
knowledge
of
the
process
or
materials
used
to
produce
the
waste.

When
a
waste
handler
conducts
testing
to
determine
if
the
waste
exhibits
any
of
the
four
characteristics
of
a
hazardous
waste,
he
or
she
must
obtain
a
representative
sample
(within
the
meaning
of
a
representative
sample
given
at
§
260.10)
using
the
applicable
sampling
method
specified
in
Appendix
I
of
Part
261
or
alternative
method
(per
§
261.20(
c))
[1]
and
test
the
waste
for
the
hazardous
waste
characteristics
of
interest
at
§
261.21
through
261.24.

For the purposes of Part 261 (the identification of hazardous waste), the regulations consider
a
sample
obtained
using
any
of
the
applicable
sampling
methods
specified
in
Appendix
I
of
Part
261
to
be
a
representative
sample
within
the
meaning
of
the
Part
260
definition
of
representative
sample.
Since
these
sampling
methods
are
not
officially
required,
anyone
desiring
to
use
a
different
sampling
method
may
do
so
without
demonstrating
the
equivalency
of
that
method
under
the
procedures
set
forth
in
§
260.21.
The
user
of
an
alternate
sampling
method
must
use
a
method
that
yields
samples
that
"meet
the
definition
of
representative
sample
found
in
Part
260"
(45
FR
33084
and
33108,
May 19, 1980).
Such
methods
should
enable
one
to
obtain
samples
that
are
equally
representative
as
those
specified
in
Appendix
I
of
Part
261.
The
planning
process
and
much
of
the
information
described
in
this
guidance
document
may
be
helpful
to
someone
regulated
under
Part
261
wishing
to
use
an
alternate
sampling
method.
The
guidance
should
be
helpful
as
well
for
purposes
other
than
Part
261.

Certain
states
also
may
have
requirements
for
identifying
hazardous
wastes
in
addition
to
those
requirements
specified
by
Federal
regulations.
States
authorized
to
implement
the
RCRA
or
HSWA
programs
under
Section
3006
of
RCRA
may
promulgate
regulations
that
are
more
stringent
or
broader
in
scope
than
Federal
regulations.

2.2.2
Land
Disposal
Restrictions
(LDR)
Program
The
LDR
program
regulations
found
at
40
CFR
Part
268
require
that
a
hazardous
waste
generator
determine
if
the
waste
has
to
be
treated
before
it
can
be
land
disposed.
This
is
done
by
determining
if
the
hazardous
waste
meets
the
applicable
treatment
standards
at
§
268.40,
§
268.45,
or
§268.49.
EPA
expresses
treatment
standards
either
as
required
treatment
technologies
that
must
be
applied
to
the
waste
or
as
contaminant
concentration
levels
that
must
be
met.
(Alternative
LDR
treatment
standards
have
been
promulgated
for
contaminated
soil,
debris,
and
lab
packs.)
The determination of the need for waste treatment can be made in either of two ways:
testing
the
waste
or
using
knowledge
of
the
waste
(see
§
268.7(
a)).

If
a
hazardous
waste
generator
is
managing
and
treating
prohibited
waste
or
contaminated
soil
in
tanks,
containers,
or
containment
buildings
to
meet
the
applicable
treatment
standard,
then
the
generator
must
develop
and
follow
a
written
waste
analysis
plan
(WAP)
in
accordance
with
§
268.7(
a)(
5).

A
hazardous
waste
treater must test the waste according to the frequency specified in its WAP
as
required
by
40
CFR
264.13
(for
permitted
facilities)
or
40
CFR
265.13
(for
interim
status
facilities).
See
§
268.7(
b).

If
testing
is
performed,
no
portion
of
the
waste
may
exceed
the
applicable
treatment
standard; otherwise,
there
is
evidence
that
the
standard
is
not
met
(see
63
FR
28567,
May
26,
1998).
Statistical
variability
is
"built
in"
to
the
standards
(USEPA
1991c).
Wastes
that
do
not
meet
treatment
standards
cannot
be
land
disposed
unless
EPA
has
granted
a
variance,
extension,
or
exclusion
(or
the
waste
is
managed
in
a
"no­
migration
unit").
In
addition
to
the
disposal
prohibition,
there
are
prohibitions
and
limits
in
the
LDR
program
regarding
the
dilution
and
storage
of
wastes.
The
program
also
requires
tracking
and
recordkeeping
to
ensure
proper
management
and
safe
land
disposal
of
hazardous
wastes.

General
guidance
on
the
LDR
program
can
be
found
in
Land
Disposal
Restrictions:
Summary
of
Requirements
(USEPA
2001d).
Detailed
guidance
on
preparing
a
waste
analysis
plan
(WAP)
under
the
LDR
program
can
be
found
in
Waste
Analysis
at
Facilities
That
Generate,
Treat,
Store,
and
Dispose
of
Hazardous
Wastes
­
A
Guidance
Manual
(USEPA
1994a).
Detailed
guidance
on
measuring
compliance
with
the
alternative
LDR
treatment
standards
for
contaminated
soil
can
be
found
in
Guidance
on
Demonstrating
Compliance
With
the
Land
Disposal
Restrictions
(LDR)
Alternative
Soil
Treatment
Standards
(USEPA
2002a).

2.2.3
Other
RCRA
Regulations
and
Programs
That
May
Require
Sampling
and
Testing
In
addition
to
the
RCRA
hazardous
waste
identification
regulations
and
the
LDR
regulations,
EPA
has
promulgated
other
regulations
and
initiated
other
programs
that
may
involve
sampling
and
testing
of
solid
waste
and
environmental
media
(such
as
ground
water
or
soil).
Program-specific
EPA
guidance
should
be
consulted
prior
to
implementing
a
sampling
or
monitoring
program
to
respond
to
the
requirements
of
these
regulations
or
programs.
For
example,
EPA
has
issued
separate
program­
specific
guidance
on
sampling
to
support
preparation
of
a
delisting
petition,
ground­
water
and
unsaturated
zone
monitoring
at
regulated
units,
unit
closure,
corrective
action
for
solid
waste
management
units,
and
other
programs.
See
also
Appendix
B
of
this
document.

2.2.4
Enforcement
Sampling
and
Analysis
The
sampling
and
analysis
conducted
by
a
waste
handler
during
the
normal
course
of
operating
a
waste
management
operation
might
be
quite
different
than
the
sampling
and
analysis
conducted
by
an
enforcement
agency.
The
primary
reason
is
that
the
data
quality
objectives
(DQOs)
of
the
enforcement
agency
often
may
be
legitimately
different
from
those
of
a
waste
handler.
Consider
an
example
to
illustrate
this
potential
difference
in
approach:
Many
of
RCRA's
standards
were
developed
as
concentrations
that
should
not
be
exceeded
(or
equaled)
or
as
characteristics
that
should
not
be
exhibited
for
the
waste
or
environmental
media
to
comply
with
the
standard.
In
the
case
of
such
a
standard,
the
waste
handler
and
enforcement
officials
might
have
very
different
objectives.
An
enforcement
official,
when
conducting
a
compliance
sampling
inspection
to
evaluate
a
waste
handler's
compliance
with
a
"do
not
exceed"
standard,
might take
only
one
sample.
Such
a
sample
may
be
purposively
selected
based
on
professional
judgment.
This
is
because
all
the
enforcement
official
needs
to
observe
–
for
example
to
determine
that
a
waste
is
hazardous
–
is
a
single
exceedance
of
the
standard.

A
waste
handler,
however,
in
responding
to
the
same
regulatory
standard
may
want
to
ensure,
with
a
specified
level
of
confidence,
that
his
or
her
waste
concentrations
are
low
enough
so
that
it
would
be
unlikely,
for
example,
that
an
additional
sample
drawn
from
the
waste
would
exceed
the
regulatory
standard.
In
designing
such
an
evaluation
the
waste
handler
could
decide
to
take
a
sufficient
number
of
samples
in
a
manner
that
would
allow
evaluation
of
the
results
statistically
to
show,
with
the
desired
level
of
confidence,
that
there
is
a
low
probability
that
another
randomly
selected
sample
would
exceed
the
standard.
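The waste handler's objective described above can be sketched quantitatively. The following is only an illustrative sketch, not a procedure prescribed by the regulations or this guidance: it computes a one-sided upper tolerance limit (UTL) of the form mean + k*s, using the Natrella/Howe approximation for the tolerance factor k. The TCLP-style lead results and the 95% coverage and 95% confidence choices are assumptions for illustration.

```python
# Sketch: one-sided upper tolerance limit (UTL) for a normally
# distributed constituent concentration.  If the UTL falls below the
# regulatory standard, the handler has (approximately) the stated
# confidence that the chosen fraction of the waste is below the standard.
from statistics import NormalDist, mean, stdev

def tolerance_factor(n, coverage=0.95, confidence=0.95):
    """Approximate one-sided normal tolerance factor k (Natrella/Howe)."""
    z_p = NormalDist().inv_cdf(coverage)    # quantile to be covered
    z_c = NormalDist().inv_cdf(confidence)  # confidence-level z-value
    a = 1 - z_c**2 / (2 * (n - 1))
    b = z_p**2 - z_c**2 / n
    return (z_p + (z_p**2 - a * b) ** 0.5) / a

def upper_tolerance_limit(data, coverage=0.95, confidence=0.95):
    k = tolerance_factor(len(data), coverage, confidence)
    return mean(data) + k * stdev(data)

# Hypothetical TCLP lead results (mg/L), compared to the 5.0 mg/L standard:
results = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4, 0.7, 1.2]
utl = upper_tolerance_limit(results)   # about 1.86 mg/L, well below 5.0
```

Exact one-sided tolerance factors (derived from the noncentral t distribution) are tabulated in standard statistical references; the approximation above is close for moderate sample sizes.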

An
important
component
of
the
enforcement
official's
DQO
is
to
"prove
the
positive."
In
other
words,
the
enforcement
official
is
trying
to
demonstrate
whether
the
concentration
of
a
specific
constituent
in
some
portion
of
the
waste
exceeds
the
"do
not
exceed"
regulatory
level.
The
"prove
the
positive"
objective
combined
with
the
"do
not
exceed"
standard
only
requires
a
single
observation
above
the
regulatory
level
in
order
to
draw
a
valid
conclusion
that
at
least
some
of
the
waste
exceeds
the
level
of
concern.

The
Agency
has
made
it
clear
that
in
"proving
the
positive,"
the
enforcement
agency's
DQOs
may
not
require
low
detection
limits,
high
analyte
recoveries,
or
high
degrees
of
precision:

"If
a
sample
possesses
the
property
of
interest,
or
contains
the
constituent
at
a
high
enough
level
relative
to
the
regulatory
threshold,
then
the
population
from
which
the
sample
was
drawn
must
also
possess
the
property
of
interest
or
contain
that
constituent.
Depending
on
the
degree
to
which
the
property
of
interest
is
exceeded,
testing
of
samples
which
represent
all
aspects
of
the
waste
or
other
material
may
not
be
necessary
to
prove
that
the
waste
is
subject
to
regulation"
(see
55
FR
4440,
"Hazardous
Waste
Management
System:
Testing
and
Monitoring
Activities,"
February
8,
1990).

A
waste
handler
may
have
a
different
objective
when
characterizing
his
or
her
waste.
Instead,
the
waste
handler
may
wish
to
"prove
the
negative."
While
proving
the
negative
in
absolute
terms
is
not
realistic,
the
waste
handler
may
try
to
demonstrate
with
a
desired
level
of
confidence
that
the
vast
majority
of
his
or
her
waste
is
well
below
the
standard
such
that
another
sample
or
samples
taken
from
the
waste
would
not
likely
exceed
the
regulatory
standard.
The
Agency
also
has
spoken
to
the
need
for
sound
sampling
designs
and
proper
quality
control
when
one
is
trying
to
"prove
the
negative:"

"The
sampling
strategy
for
these
situations
(proving
the
negative)
should
be
thorough
enough
to
insure
that
one
does
not
conclude
a
waste
is
nonhazardous
when,
in
fact,
it
is
hazardous.
For
example,
one
needs
to
take
enough
samples
so
that
one
does
not
miss
areas
of
high
concentration
in
an
otherwise
clean
material.
Samples
must
be
handled
so
that
properties
do
not
change
and
contaminants
are
not
lost.
The
analytical
methods
must
be
quantitative,
and
regulatory
detection
limits
must
be
met
and
documented"
(see
55
FR
4440,
"Hazardous
Waste
Management
System:
Testing
and
Monitoring
Activities,"
February
8,
1990).

"Proving
the
negative"
can
be
a
more
demanding
objective
for
the
waste
handler
in
terms
of
the
sampling
strategy
and
resources
than
that
faced
by
the
enforcement
official.
To
address
this
objective
the
waste
handler
could
use
the
advice
in
this
or
similar
guidance
documents.
In
doing
so,
the
waste
handler
should
establish
objectives
using
a
systematic
planning
process,
design
a
sampling
and
analysis
plan
based
on
the
objectives,
collect
and
analyze
the
appropriate
number
of
samples,
and
use
the
information
from
the
sample
analysis
results
for
decision­
making.
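One step in that sequence, collecting the appropriate number of samples, can be illustrated with a common first-cut formula (a sketch under assumed inputs; the pilot standard deviation, tolerable error, and confidence level below are hypothetical, and the statistical appendices of this document give more complete methods):

```python
# Minimal sketch: approximate number of samples n needed to estimate a
# mean concentration to within +/- d of the true mean with the stated
# confidence, assuming approximate normality and a pilot estimate s of
# the standard deviation:  n = (z * s / d)**2, rounded up.
from math import ceil
from statistics import NormalDist

def sample_size(s, d, confidence=0.95):
    # two-sided z-value for the chosen confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * s / d) ** 2)

# Hypothetical pilot study: s = 12 mg/kg; tolerable error d = 5 mg/kg:
n = sample_size(s=12, d=5)   # 23 samples
```

Halving the tolerable error roughly quadruples the required number of samples, which is why the planning phase weighs decision confidence against sampling cost.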

The
distinction
between
a
sampling
strategy
designed
to
"prove
the
negative"
versus
one
designed
to
"prove
the
positive"
also
has
been
supported
in
a
recent
judicial
ruling.
In
United
States
v.
Allen
Elias
(9th
Cir.
2001)
the
Government
used
a
limited
number
of
samples
to
prove
that
hazardous
waste
was
improperly
managed
and
disposed.
The
court
affirmed
that
additional
sampling
by
the
Government
was
not
necessary
to
"prove
the
positive."
3
FUNDAMENTAL
STATISTICAL
CONCEPTS
Throughout
the
life
cycle
of
a
waste­
testing
program,
the
tools
of
statistics
often
are
employed

in
planning,
implementation,
and
assessment.
For
example,
in
the
planning
phase,
you
may
state
certain
project
objectives
quantitatively
and
use
statistical
terminology.
Designing
and
implementing
a
sampling
plan
requires
an
understanding
of
error
and
uncertainty.
Statistical
techniques
can
be
used
to
describe
and
evaluate
the
data
and
to
support
decisions
regarding
the
regulatory
status
of
a
waste
or
contaminated
media,
attainment
of
treatment
or
cleanup
goals,
or
whether
there
has
been
a
release
to
the
environment.
Because
statistical
concepts
may
be
used
throughout
the
sampling
and
analysis
program,
an
understanding
of
basic
statistical
concepts
and
terminology
is
important.

While
statistical
methods
can
be
valuable
in
designing
and
implementing
a
scientifically
sound
waste­
sampling
program,
their
use
should
not
be
a
substitute
for
knowledge
of
the
waste
or
as
a
substitute
for
common
sense.
Not
every
problem
can,
or
necessarily
must,
be
evaluated
using
probabilistic
techniques.
Qualitative
expressions
of
decision
confidence
through
the
exercise
of
professional
judgment
(such
as
a
"weight
of
evidence"
approach)
may
well
be
sufficient,
and
in
some
cases
may
be
the
only
option
available
(Crumbling
2001).

If
the
objective
of
the
sampling
program
is
to
make
a
hazardous
waste
determination,
the
regulations
allow
that
a
single
representative
sample
is
sufficient
to
classify
a
waste
as
hazardous.
If
a
representative
sample
is
found
to
have
the
properties
set
forth
for
the
corrosivity,
ignitability,
reactivity,
or
toxicity
characteristics,
then
the
waste
is
hazardous.
The
regulations
do
not
address
directly
what
is
a
sufficient
number
of
samples
to
classify
a
solid
waste
as
nonhazardous.
However,
for
a
petition
to
reclassify
(delist)
a
listed
hazardous
waste,
which
includes
a
determination
that
the
listed
hazardous
waste
is
not
a
characteristic
hazardous
waste
(a
"nonhazardous"
classification),
the
regulations
provide
that
at
least
four
representative
samples
sufficient
to
represent
the
variability
or
uniformity
of
the
waste
must
be
tested
(40
CFR
260.22).
This
approach
is
not
necessarily
based
on
any
statistical
method
but
reflects
concepts
of
proving
the
negative
and
proving
the
positive
(see
also
Section
2.2.4).

Even
if
you
have
no
formal
training
in
statistics,
you
probably
are
familiar
with
basic
statistical
concepts
and
how
samples
are
used
to
make
inferences
about
the
population
from
which
the
samples
were
drawn.
For
example,
the
news
media
frequently
cite
the
results
of
surveys
that
make
generalized
conclusions
about
public
opinion
based
on
interviews
with
a
relatively
small
proportion
of
the
population.
These
results,
however,
are
only
estimates
because
no
matter
how
carefully
a
survey
is
done,
if
repeated
over
and
over
in
an
identical
manner,
the
answer
will
be
a
little
different
each
time.
There
always
will
be
some
random
sampling
variation
because
it
is
not
possible
to
survey
every
member
of
a
population.
There
also
will
be
measurement
and
estimation
errors
because
of
mistakes
made
in
how
data
are
obtained
and
interpreted.
Responsible
pollsters
report
this
as
their
"margin
of
error"
along
with
the
findings
of
the
survey (Edmondson 1996).

Do the RCRA regulations require statistical sampling? Some RCRA regulations require the use of statistical tests (e.g., to determine if there has been a release to ground water from a waste management unit under 40 CFR Subpart F), whereas other RCRA regulations do not require the use of statistical tests (such as those for determining if a solid waste is or is not a hazardous waste or determining compliance with LDR treatment standards). Even where there is no regulatory obligation to conduct sampling or apply statistical tests to evaluate sampling results, statistical methods can be useful in interpreting data and managing uncertainty associated with waste classification decisions.
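As an aside, the survey "margin of error" mentioned above has a simple closed form for an estimated proportion; this sketch assumes the usual 95% z-value and a hypothetical poll size:

```python
# Margin of error for an estimated proportion p from n responses,
# at roughly 95% confidence (z = 1.96):
from math import sqrt

def margin_of_error(p, n, z=1.96):
    return z * sqrt(p * (1 - p) / n)

moe = margin_of_error(p=0.50, n=1000)   # about +/- 3 percentage points
```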

Similar
to
surveys
of
human
populations,
waste
characterization
studies
can
be
designed
in
such
a
way
that
a
population
can
be
identified,
samples
can
be
collected,
and
the
uncertainty
in
the
results
can
be
reported.

The
following
sections
provide
a
brief
overview
of
the
statistical
concepts
used
in
this
guidance.
Four
general
topics
are
described:

°
Populations,
samples,
and
distributions
(Section
3.1)

°
Measures
of
central
tendency,
variability,
and
relative
standing
(Section
3.2)

°
Precision
and
bias
(Section
3.3)

°
Using
sample
analysis
results
to
classify
a
waste
or
determine
its
status
under
RCRA
(Section
3.4).

Guidance
on
selecting
and
using
statistical
methods
for
evaluating
data
is
given
in
Section
8.2
and
Appendix
F
of
this
document.
Statistical
tables
are
given
in
Appendix
G.
Additional
statistical
guidance
can
be
found
in
Guidance
for
Data
Quality
Assessment,
EPA
QA/
G­
9
(USEPA
2000d)
and
other
references
cited.

3.1
Populations,
Samples,
and
Distributions
A
"population"
consists
of
all
the
waste
or
media
whose
characteristics
are
to
be
studied
and
estimated.
A
set
of
observations,
known
as
a
statistical
sample,
is
a
portion
of
the
population
that
is
studied
in
order
to
learn
about
the
whole
population.
Sampling
is
necessary
when
a
study
of
the
entire
population
would
be
too
expensive
or
physically
impossible.

Inferences
about
the
population
are
made
from
samples
selected
from
the
population.
For
example,
the
sample
mean
(or
average)
is
a
consistent
estimator
of
the
population
mean.
In
general,
estimates
made
from
samples
tend
to
more
closely
approximate
the
true
population
parameter
as
the
number
of
samples
increases.
The
precision
of
these
inferences
depends
on
the
theoretical
sampling
distribution
of
the
statistic
that
would
occur
if
the
sampling
process
were
repeated
over
and
over
using
the
same
sampling
design
and
number
of
samples.
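The two ideas above, that estimates improve as the number of samples increases and that precision is governed by the sampling distribution of the statistic, can be demonstrated with a small simulation (a sketch only; the normal population and its parameters are invented for illustration):

```python
# Simulate repeating the same sampling design many times and watch the
# sampling distribution of the mean tighten as n grows (roughly 1/sqrt(n)).
import random
from statistics import mean, pstdev

random.seed(1)                  # reproducible illustration
TRUE_MEAN, TRUE_SD = 50.0, 10.0

def simulated_means(n_samples, n_trials=2000):
    """Means from n_trials repetitions of an n_samples sampling design."""
    return [mean(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n_samples))
            for _ in range(n_trials)]

spread_small = pstdev(simulated_means(4))    # expect about 10/sqrt(4)  = 5.0
spread_large = pstdev(simulated_means(64))   # expect about 10/sqrt(64) = 1.25
```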

3.1.1
Populations
and
Decision
Units
A
"population"
is
the
entire
collection
of
interest
for
study.
Populations
can
have
spatial
boundaries,
which
define
the
physical
area
to
be
studied,
and
temporal
boundaries,
which
describe
the
time
interval
the
study
will
represent.
The
definition
of
the
population
can
be
subjective,
defined
by
regulation
or
permit
condition,
or
based
on
risks
to
human
health
and
the
environment.
In
all
cases,
however,
the
population
needs
to
be
finite
and
have
well­
defined,
unambiguous
physical
and/
or
temporal
boundaries.
The
physical
boundary
defines
the
size,
shape,
orientation,
and
location
of
the
waste
or
media
about
which
a
decision
will
be
made.

For
a
large
population
of
waste
or
media,
you
may
wish
to
subdivide
the
population
into
smaller
units
about
which
decisions
can
be
made,
rather
than
attempt
to
characterize
the
entire
population.
These
units
are
called
"decision
units,"
and
they
may
represent
a
single
type
of
waste
at
the
point
of
waste
generation,
a
waste
from
a
single
batch
operation,
waste
generated
over
a
specified
time,
or
a
volume
of
waste
or
contaminated
media
(such
as
soil)
subject
to
characterization,
removal,
and/
or
treatment.
The
concept
of
a
decision
unit
is
similar
to
an
"exposure
unit"
(Neptune,
et
al.
1990,
Blacker
and
Goodman
1994a
and
1994b,
Myers
1997),
or
"exposure
area"
(USEPA
1992a
and
1996a)
in
EPA's
Superfund
program
in
which
risk­
based
decisions
consider
the
mass
or
area
of
the
waste
or
media.
A
decision
unit
also
is
analogous
to
a
"remediation
unit"
as
described
in
EPA's
Data
Quality
Objective
Process
for
Superfund
(USEPA
1993a).

When
using
samples
to
determine
whether
a
solid
waste
is
a
hazardous
waste,
that
determination
must
be
made
at
the
point
of
generation
(i.
e.,
when
the
waste
becomes
a
solid
waste).

Hypothetical
examples
of
populations
or
decision
units
that
might
be
encountered
in
the
context
of
RCRA
waste
characterization
follow:

°
Filter
cake
being
placed
in
a
25­
cubic­
yard
roll­
off
bin
at
the
point
of
waste
generation
°
Waste
water
contained
in
a
55­
gallon
drum
°
Liquid
waste
flowing
from
the
point
of
generation
during
a
specified
time
interval
°
A
block
of
soil
(e.
g.,
10­
feet­
by­
10­
feet
square,
6­
inches
deep)
within
a
solid
waste
management
unit
(SWMU).

In
some
situations,
it
will
be
appropriate
to
define
two
separate
populations
for
comparison
to
each
other.
For
example,
in
monitoring
a
land­
based
waste
management
unit
to
determine
if
there
has
been
a
release
to
the
subsurface
at
statistically
significant
levels
above
background,
it
is
necessary
to
establish
two
populations:
(1)
a
background
population
and
(2)
an
exposed
(or
downgradient)
population
in
the
soil,
pore­
water,
or
ground­
water
system.

In
situations
in
which
the
boundaries
of
the
waste
or
contamination
are
not
obvious
or
cannot
be
defined
in
advance
(such
as
the
case
of
contaminated
soil
in
situ,
as
opposed
to
excavated
soil
in
a
pile),
the
investigator
is
interested
in
the
location
of
the
contamination
as
well
as
the
concentration
information.
Such
a
sampling
objective
is
best
addressed
by
spatial
analysis,
for
example,
by
using
geostatistical
methods
(See
also
Section
3.4.4).

3.1.2
Samples
and
Measurements
Samples
are
portions
of
the
population.
Using
information
from
a
set
of
samples
(such
as
measurements
of
chemical
concentrations)
and
the
tools
of
inductive
statistics,
inferences
can
be
made
about
the
population.
The
validity
of
the
inferences
depends
on
how
closely
the
samples
represent
the
physical
and
chemical
properties
of
the
population
of
interest.

In
this
document,
we
use
the
word
"sample"
in
several
different
ways.
To
avoid
confusion,
definitions
of
terms
follow:
Figure 2. Very small analytical samples are used to make decisions about much larger volumes (modified after Myers 1997). [The figure depicts a population or "decision unit" of waste (e.g., 1 quart), from which a primary sample (e.g., a core) is collected as the field sample and reduced to a 1-gram subsample for measurement by an analytical instrument; those sample analysis results are used to make conclusions about the waste.]
Sample:
A
portion
of
material
that
is
taken
from
a
larger
quantity
for
the
purpose
of
estimating
properties
or
composition
of
the
larger
quantity
(from
ASTM
D
6233­
98).

Statistical
sample:
A
set
of
samples
or
measurements
selected
by
probabilistic
means
(i.
e.,
by
using
some
form
of
randomness).

We
sometimes
refer
to
a
"set
of
samples"
to
indicate
more
than
one
individual
sample
that
may
or
may
not
have
been
obtained
by
probabilistic
means.

Outside
the
fields
of
waste
management
and
environmental
sciences,
the
concept
of
a
sample
or
"sampling
unit"
is
fairly
straightforward.
For
example,
a
pollster
measures
the
opinions
of
individual
human
beings,
or
the
QC
engineer
measures
the
diameter
of
individual
ball
bearings.
It
is
easy
to
see
that
the
measurement
and
the
sampling
unit
correspond;
however,
in
sampling
waste
or
environmental
media,
what
is
the
appropriate
"portion"
that
should
be
in
a
sampling
unit?
The
answer
to
this
question
requires
consideration
of
the
heterogeneities
of
the
sample
media
and
the
dimension
of
the
sampling
problem
(in
other
words,
are
you
sampling
over
time
or
sampling
over
space?).
The
information
can
be
used
to
define
the
appropriate
size,
shape,
and
orientation
of
the
sample.
The
size,
shape,
and
orientation
of
a
sample
are
known
as
the
sample
support,
and
the
sample
support
will
affect
the
measurement
value
obtained
from
the
sample.

As shown in Figure 2, after a sample of a certain size, shape, and orientation is obtained in the field (as the primary sample), it is handled, transported, and prepared for analysis. At each stage, changes can occur in the sample (such as the gain or loss of constituents, changes in the particle size distribution, etc.). These changes accumulate as errors throughout the sampling process such that measurements made on relatively small analytical samples (often less than 1 gram) may no longer "represent" the population of interest. Because sampling and analysis results may be relied upon to make decisions about a waste or media, it is important to understand the sources of the errors introduced at each stage of sampling and take steps to minimize or control those errors. In doing so, samples will be sufficiently "representative" of the population from which they are obtained.

The RCRA solid waste regulations at 40 CFR §260.10 define a representative sample as:

"a sample of a universe or whole (e.g., waste pile, lagoon, ground water) which can be expected to exhibit the average properties of the universe or whole."
Figure 3. Histogram representing the distribution of total lead (Pb) in 11 samples of No. 2 fuel oil (USEPA 1998b). [Figure: frequency vs. total Pb (mg/L).]

Figure 4. Examples of two distributions: (a) a normal distribution, in which mean = median = mode, and (b) a lognormal distribution, in which the mean, median, and mode differ.
RCRA implementors, at a minimum, must use this definition when a representative sample is called for by the regulations. Various other definitions of a representative sample have been developed by other organizations. For example, ASTM in their consensus standard D 6044-96 defines a representative sample as "a sample collected in such a manner that it reflects one or more characteristics of interest (as defined by the project objectives) of a population from which it was collected" (ASTM D 6044). A detailed discussion of representativeness also is given in Guidance on Data Quality Indicators (USEPA 2001e).

3.1.3 Distributions

Because the concentration of constituents of concern will not be the same for every individual sample, there must be a distribution of concentrations among the population. Understanding the distributional characteristics of a data set is an important first step in data analysis.

If we have a sufficient number of samples selected from a population, a picture of the distribution of the sample data can be represented in the form of a histogram. A histogram, which offers a simple graphical representation of the shape of the distribution of data, can be constructed by dividing the data range into units or "bins" (usually of equal width), counting the number of points within each unit, and displaying the data as the height or area within a bar graph. Figure 3 is an example of a histogram made using analysis results for total lead in 11 samples of No. 2 fuel oil (data set from USEPA 1998b). Guidance on constructing histograms can be found in EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d).
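The binning steps just described can be sketched in a few lines of Python. The 11 values below are illustrative placeholders standing in for the Figure 3 lead data (they are not the actual USEPA 1998b results), and the bin width of 5 mg/L is likewise an arbitrary choice.

```python
# Histogram sketch: divide the data range into equal-width bins and
# count the points in each bin. The 11 values below are illustrative
# placeholders, not the actual USEPA (1998b) lead results.
def histogram_counts(data, bin_width):
    """Return the count of values in consecutive bins of equal width,
    starting at the minimum value of the data."""
    low = min(data)
    n_bins = int((max(data) - low) // bin_width) + 1
    counts = [0] * n_bins
    for x in data:
        counts[int((x - low) // bin_width)] += 1
    return counts

pb_mg_per_l = [2.1, 3.5, 5.0, 6.2, 7.7, 8.4, 9.9, 11.3, 12.8, 14.0, 19.4]
print(histogram_counts(pb_mg_per_l, 5.0))
```

The bar heights of a histogram such as Figure 3 are simply these per-bin counts plotted against the bin boundaries.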

With a sufficiently large number of samples, the bars of the histogram could be "blended together" to form a curve known as a probability density function (PDF). Figure 4 shows two probability density functions you might encounter: Figure 4(a) is a normal distribution with its familiar symmetrical mound shape. Figure 4(b) is a lognormal distribution in which the natural log-transformed values exhibit a normal distribution. A lognormal distribution indicates that a relatively small proportion of the population includes some relatively large values.
Figure 5. Normal probability plot of the total Pb (mg/L) data (N of data: 11; average: 9.22; std. dev.: 4.72).
Many of the tools used in statistics are based on the assumption that the data are normally distributed, can be transformed to a normal scale, or can be treated as if they are approximately normal. The assumption of a normal distribution often can be made without significantly increasing the risk of making a "wrong" decision. Of course, the normal and lognormal distributions are assumed models that only approximate the underlying population distribution.

Another distribution of interest is known as the binomial distribution. The binomial distribution can be used when the sample analysis results are interpreted as either "fail" or "pass" (e.g., a sample analysis result either exceeds a regulatory standard or does not exceed the standard).

In some cases, you may not be able to "fit" the data to any particular distributional model. In these situations, we recommend you consider using a "distribution-free" or "nonparametric" statistical method (see Section 8.2).

A simple but extremely useful graphical test for normality is to graph the data as a probability plot. In a probability plot, the vertical axis has a probability scale and the horizontal axis has a data scale. In general, if the data plot as a straight line, there is a qualitative indication of normality. If the natural logarithms of the data plot as a straight line, there is an indication of lognormality.
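As a minimal sketch of how such a plot is built: sort the data, assign each point a plotting position, and pair it with the corresponding standard normal quantile. The (i + 0.5)/n plotting position used here is one common convention, not the only one; EPA QA/G-9 describes the full procedure. The four-point data set is illustrative.

```python
from statistics import NormalDist

def normal_probability_plot_points(data):
    """Pair each sorted observation with the standard normal quantile of
    its plotting position (i + 0.5)/n. If the pairs fall on a straight
    line when plotted, the data are approximately normal."""
    xs = sorted(data)
    n = len(xs)
    nd = NormalDist()  # standard normal distribution
    return [(nd.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(xs)]

points = normal_probability_plot_points([86, 90, 98, 104])
for q, x in points:
    print(f"{q:+.3f}  {x}")
```

Plotting the second member of each pair against the first gives the probability plot; natural-log-transforming the data first gives the corresponding lognormality check.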

Figure 5 provides an example of a normal probability plot created from the same data used to generate the histogram in Figure 3. Guidance on constructing probability plots can be found in EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d).

Section 8 (Assessment: Analyzing and Interpreting Data) provides guidance on checking the distribution of data sets and provides strategies for handling sample data exhibiting a nonnormal distribution.

3.2 Measures of Central Tendency, Variability, and Relative Standing

In addition to graphical techniques for summarizing and describing data sets, numerical methods can be used. Numerical methods can be used to describe the central tendency of the set of measurements, the variability or spread of the data, and the relative standing or relative location of a measurement within a data set.

3.2.1 Measures of Central Tendency

The average or mean often is used as a measure of central tendency. The mean of a set of quantitative data is equal to the sum of the measurements divided by the number of measurements contained in the data set. Other measures of central tendency include the median (the midpoint of an ordered data set in which half the values are below the median and half are above) and the mode (the value that occurs most often in the distribution). For distributions that are not symmetrical, the median and the mean do not coincide. The mean for a lognormal distribution, for instance, will exceed its median (see Figure 4(b)).

The true population mean, µ ("mu"), is the average of the true measurements (e.g., of the constituent concentration) made over all possible samples. The population mean is never known because we cannot measure all the members of a population (or all possible samples). We can, however, estimate the population mean by taking random samples from the population. The average of measurements taken on random samples is called the sample mean. The sample mean is denoted by the symbol x̄ ("x-bar") and is calculated by summing the value obtained from each random sample (x_i) and dividing by the number of samples (n):

    x̄ = (1/n) Σ(i=1 to n) x_i        Equation 1

Box 1 provides an example calculation of the sample mean.

Box 1. Example Calculation of the Sample Mean

Using Equation 1 and the following four data points in parts per million (ppm): 86, 90, 98, and 104, the following is an example of computing the sample mean.

    x̄ = (1/n) Σ(i=1 to n) x_i = (86 + 90 + 98 + 104)/4 = 94.5 ppm

Therefore, the sample mean is 94.5 ppm.
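The Box 1 arithmetic can be reproduced directly in Python:

```python
# Sample mean (Equation 1) for the Box 1 data:
# sum the measurements and divide by the number of measurements.
data_ppm = [86, 90, 98, 104]
x_bar = sum(data_ppm) / len(data_ppm)
print(x_bar, "ppm")  # 94.5 ppm
```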

3.2.2 Measures of Variability

Random variation in the population is described by "dispersion" parameters -- the population variance (σ²) and the population standard deviation (σ). Because we cannot measure all possible samples that comprise the population, the values for σ² and σ are unknown. The variance, however, can be estimated from a statistical sample of the population by the sample variance:

    s² = (1/(n-1)) Σ(i=1 to n) (x_i - x̄)²        Equation 2
The variance calculated from the samples is known as the sample variance (s²), and it includes random variation in the population as well as random variation that can be introduced by sample collection and handling, sample transport, and sample preparation and analysis. The sample variance is an estimate of the variance that one would obtain if the entire set of all possible samples in the population were measured using the same measurement process as is being employed for the samples. If there were no sample handling or measurement error, this sample variance (s²) would estimate the population variance (σ²).

Figure 6. Percentage of values falling within 1, 2, and 3 standard deviations of the mean of a normal distribution. The figure also shows the relationship between the mean, the 50th percentile, and the 99th percentile in a normal distribution.

The population standard deviation (σ) is estimated by s, the sample standard deviation:

    s = √s²        Equation 3

Box 2 provides an example calculation of the sample variance and sample standard deviation.

Box 2. Example Calculations of Sample Variance and Standard Deviation

Using Equation 2 and the data points in Box 1, the following is an example calculation of the sample variance:

    s² = [(86 - 94.5)² + (90 - 94.5)² + (98 - 94.5)² + (104 - 94.5)²] / (4 - 1) = 195/3 = 65

Using Equation 3, the sample standard deviation is then calculated as follows:

    s = √s² = 8.1
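The Box 2 arithmetic (Equations 2 and 3) can be checked in Python; the standard library function statistics.variance applies the same n - 1 divisor.

```python
import math

# Box 2 arithmetic: sample variance (Equation 2) and sample standard
# deviation (Equation 3) for the Box 1 data.
data_ppm = [86, 90, 98, 104]
n = len(data_ppm)
x_bar = sum(data_ppm) / n                               # 94.5 ppm (Box 1)
s2 = sum((x - x_bar) ** 2 for x in data_ppm) / (n - 1)  # Equation 2
s = math.sqrt(s2)                                       # Equation 3
print(s2, round(s, 1))
```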

The standard deviation is used to measure the variability in a data set. For a normal distribution, we know the following (see Figure 6):

°	Approximately 68 percent of measurements will fall within ±1 standard deviation of the mean

°	Approximately 95 percent of the measurements will fall within ±2 standard deviations of the mean

°	Almost all (99.74 percent) of the measurements will fall within ±3 standard deviations of the mean.
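These coverage figures can be verified with Python's standard library (the third figure is commonly rounded to 99.7 percent; the exact value is about 99.73 percent):

```python
from statistics import NormalDist

# Verify the normal-distribution coverage figures using the
# standard normal cumulative distribution function.
nd = NormalDist()  # mean 0, standard deviation 1
coverage = {k: nd.cdf(k) - nd.cdf(-k) for k in (1, 2, 3)}
for k, share in coverage.items():
    print(f"within +/-{k} standard deviation(s): {share:.2%}")
```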

Estimates of the standard deviation, combined with the assumption of a normal distribution, allow us to make quantitative statements about the spread of the data. The larger the spread in the data, the less certainty we have in estimates or decisions made from the data. As discussed in the following section, a small spread in the data offers more certainty in estimates and decisions made from the data.

Because x̄ is an estimate of a population parameter based on a statistical sample, we expect its value to be different each time a new set of samples is drawn from the population. The means calculated from repeated statistical samples also form a distribution. The estimate of the standard deviation of the sampling distribution of means is called the standard error.

The standard error of the mean (s_x̄) is estimated by:

    s_x̄ = s/√n        Equation 4

The standard error is used in equations to calculate the appropriate number of samples to estimate the mean with specified confidence (see Section 5.4), and it is used in statistical tests to make inferences about the mean (see Appendix F).
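A quick Python check of Equation 4, again using the Box 1 data:

```python
import math
from statistics import stdev

# Equation 4: standard error of the mean for the Box 1 data.
data_ppm = [86, 90, 98, 104]
s = stdev(data_ppm)                # sample standard deviation (~8.06 ppm)
se = s / math.sqrt(len(data_ppm))  # s / sqrt(n)
print(round(se, 2), "ppm")
```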
3.2.3 Measures of Relative Standing

In addition to measures of central tendency and variability to describe data, we also may be interested in describing the relative standing or location of a particular measurement within a data set. One such measure of interest is the percentile ranking. A population percentile represents the percentage of elements of a population having values less than a specified value. Mathematically, for a set of n measurements, the pth percentile (or quantile) is a number such that p% of the measurements fall below the pth percentile, and (100 - p)% fall above it. For example, if a measurement is located at the 99th percentile in a data set, it means that 99 percent of measurements are less than that measurement, and 1 percent are above. In other words, almost the entire distribution lies below the value representing the 99th percentile. Figure 6 depicts the relationship between the mean, the 50th percentile, and the 99th percentile in a normal distribution.

Just like the mean and the median, a percentile is a population parameter that must be estimated from the sample data. As indicated in Figure 6, for a normal distribution a "point estimate" of a percentile (x̂_p) can be obtained using the sample mean (x̄) and the sample standard deviation (s) by:

    x̂_p = x̄ + z_p · s        Equation 5

where z_p is the pth quantile of the standard normal distribution. (Values of z_p that correspond to values of p can be obtained from the last row of Table G-1 in Appendix G.) A probability plot (see Figure 5) offers another method of estimating normal percentiles. See EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d) for guidance on constructing probability plots and estimating percentiles.
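Equation 5 can be illustrated with the Box 1 data. Here z_p is computed from the standard normal distribution rather than read from Table G-1:

```python
from statistics import NormalDist, mean, stdev

# Equation 5: point estimate of the 99th percentile for the Box 1 data,
# assuming normality. z_p is computed with inv_cdf rather than taken
# from Table G-1.
data_ppm = [86, 90, 98, 104]
x_bar, s = mean(data_ppm), stdev(data_ppm)
z_99 = NormalDist().inv_cdf(0.99)  # ~2.326
x99_hat = x_bar + z_99 * s
print(round(x99_hat, 1), "ppm")
```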
Figure 7. Shots at a target illustrate precision and bias (modified after Jessen 1978). The four panels (true concentration = 100 ppm) show results that are (a) precise and unbiased (ave. = 100 = true value), (b) precise but biased (ave. = 170), (c) imprecise but unbiased (ave. = 100 = true value), and (d) imprecise and biased (ave. = 150).
3.3 Precision and Bias

The representativeness of a statistical sample (that is, a set of samples) can be described in terms of precision and bias. Precision is a measurement of the closeness of agreement between repeated measurements. Bias is the systematic or consistent over- or underestimation of the true value (Myers 1997, USEPA 2000d).

The analogy of a target often is used to illustrate the concepts of precision and bias. In Figure 7, the center of each target represents the true (but unknown) average concentration in a batch of waste. The "shots" in targets (a) through (d) represent measurement results from samples taken to estimate the true concentration. The figure also can be used to illustrate precision and bias associated with measurement processes within a laboratory in which the same sample is analyzed multiple times (for example, four times).

Figure 7(a) indicates high precision and low bias in the sampling and analysis results. Generally, high precision and minimal bias are required when one or more chemical constituents in a solid waste are present at concentrations close to the applicable regulatory threshold or action level. Note that each of the measurements in Figure 7(a) is in close agreement with the true value. These measurements can be described as having high accuracy.

If the sampling and measurement process is very precise but suffers from bias (such as use of an incorrect sampling procedure or contamination of an analytical instrument), the situation could be as pictured in Figure 7(b), in which the repeated measurements are close to one another but not close to the true value. In fact, the data express a significant 70-percent bias that might go undetected if the true value is not known.

The opposite situation is depicted in Figure 7(c), where the data show low precision (that is, high dispersion around the mean) but are unbiased because the samples lack any systematic error and the average of the measurements reflects the true average concentration. Precision in sampling can be improved by increasing the number of samples, increasing the volume (mass) of each sample, or by employing a composite sampling strategy. Note, however, that relatively imprecise results can be tolerated if the contaminants of concern occur at levels either far below or far above their applicable thresholds.

Figure 7(d) depicts the situation where the sampling and analytical process suffers from both imprecision and bias. In both Figures 7(b) and (d), the bias will result in an incorrect estimate of the true concentration, even if innumerable samples are collected and analyzed to control the impact of imprecision (i.e., bias will not "cancel out" with increasing numbers of samples).

There are several types and causes of bias, including sampling bias, analytical bias, and statistical bias:

Sampling Bias: There are three potential sources of sampling bias:

(1) Bias can be introduced in the field and the laboratory through the improper selection and use of devices for sampling and subsampling. Bias related to sampling tools can be minimized by ensuring all of the material of interest for the study is accessible by the sampling tool.

(2) Bias can be introduced through improper design of the sampling plan. Improper sampling design can cause parts of the population of interest to be over- or undersampled, thereby causing the estimated values to be systematically shifted away from the true values. Bias related to sampling design can be minimized by ensuring the sampling protocol is impartial so there is an equal chance for each part of the waste to be included in the sample over both the spatial and temporal boundaries defined for the study.

(3) Bias can be introduced in sampling due to the loss or addition of contaminants during sampling and sample handling. This bias can be controlled using sampling devices made of materials that do not sorb or leach constituents of concern, and by use of careful decontamination and sample handling procedures. For example, agitation or homogenization of samples can cause a loss of volatile constituents, thereby indicating a concentration of volatiles lower than the true value. Proper decontamination of sampling equipment between sample locations or the use of disposable devices, and the use of appropriate sample containers and preservatives, also can control bias in field sampling.

Analytical Bias: Analytical (or measurement) bias is a systematic error caused by instrument contamination, calibration drift, or by numerous other causes, such as extraction inefficiency by the solvent, matrix effect, and losses during shipping and handling.

Statistical Bias: After the sample data have been obtained, statistics are used to estimate population parameters using the sample data. Statistical bias can occur in two situations: (1) when the assumptions made about the sampling distribution are not consistent with the underlying population distribution, or (2) when the statistical estimator itself is biased.

Returning to Figure 7, note that each target has an associated frequency distribution curve. Frequency curves are made by plotting a concentration value versus the frequency of occurrence of that concentration. The curves show that as precision decreases (i.e., the variance σ² increases), the curve flattens out and an increasing number of measurements are found further away from the average (figures c and d). More precise measurements result in steeper curves (figures a and b) with the majority of measurements relatively closer to the average value in normally distributed data. The greater the bias (figures b and d), the further the average of the measurements is shifted away from the true value. The smaller the bias (figures a and c), the closer the average of the samples is to the true average.

Representative samples are obtained by controlling (at acceptable levels) random variability (σ²) and systematic error (or bias) in sampling and analysis. Quality control procedures and samples are used to estimate the precision and bias of sampling and analytical results.

3.4 Using Sample Analysis Results to Classify a Waste or to Determine Its Status Under RCRA

If samples are used to classify a waste or determine its regulatory status, then the sampling approach (including the number and type of samples) must meet the requirements specified by the regulations. Regardless of whether or not the regulations specify sampling requirements or the use of a statistical test, the Agency encourages waste handlers to use a systematic planning process such as the DQO Process to set objectives for the type, quantity, and quality of data needed to ensure with some known level of assurance that the regulatory standards are achieved.

After consideration of the objectives identified in the planning process, careful implementation of the sampling plan, and review of the analytical results, you can use the sample analysis results to classify a waste or make other decisions regarding the status of the waste under RCRA. The approach you select to obtain and evaluate the results will be highly dependent on the regulatory requirements (see Section 2 and Appendix B) and the data quality objectives (see Section 4 and Section 5).

The following sections provide a conceptual overview of how you can use sample analysis results to classify a waste or determine its status under RCRA. Guidance is provided on the following topics:

°	Using an average to measure compliance with a fixed standard (Section 3.4.1)

°	Using the maximum sample analysis result or an upper percentile to measure compliance with a fixed standard (Section 3.4.2)

There are other approaches you might use to evaluate sample analysis results, including tests that compare two populations, such as "downgradient" to "background" (see Section 3.4.3), and analysis of spatial patterns of contamination (see Section 3.4.4).

Detailed statistical guidance, including the necessary statistical equations, is provided in Section 8.2 and Appendix F.

3.4.1 Using an Average To Determine Whether a Waste or Media Meets the Applicable Standard

The arithmetic average (or mean) is a common parameter used to determine whether the concentration of a constituent in a waste or media is below a fixed standard. The mean often is used in cases in which a long-term (chronic) exposure scenario is assumed (USEPA 1992c) or where some average condition is of interest.
Figure 8. 80-percent confidence intervals calculated from 10 equal-sized sets of samples drawn at random from the same waste stream.

Figure 9. Example of how sampling precision could impact a waste exclusion demonstration under 40 CFR 261.38. Due to imprecision (A), the waste is inappropriately judged a solid waste. With more precise results (B), the entire confidence interval lies below the specification level, and the waste is appropriately judged eligible for the comparable fuels exclusion.
Because of the uncertainty associated with estimating the true mean concentration, a confidence interval on the mean is used to define the upper and lower limits that bracket the true mean with a known level of confidence. If the upper confidence limit (UCL) on the mean is less than the fixed standard, then we can conclude the true average is below the standard with a known amount of confidence. As an alternative to using a statistical interval to draw conclusions from the data, you could use hypothesis testing as described in EPA's Guidance for the Data Quality Objectives Process, EPA QA/G-4 (USEPA 2000b) and Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d).
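As an illustrative sketch only (Appendix F contains the governing equations), a one-sided UCL on the mean can be computed as x̄ + t·s/√n. The Student's t value below is an assumed table lookup for 3 degrees of freedom, and the Box 1 data are reused for the example:

```python
import math
from statistics import mean, stdev

# Illustrative one-sided 95% UCL on the mean, using the Box 1 data.
# 2.353 is the tabulated one-sided 95% Student's t value for
# n - 1 = 3 degrees of freedom (an assumed table lookup, not computed).
data_ppm = [86, 90, 98, 104]
n = len(data_ppm)
t_95_df3 = 2.353
ucl = mean(data_ppm) + t_95_df3 * stdev(data_ppm) / math.sqrt(n)
print(round(ucl, 1), "ppm")
# If the UCL falls below the fixed standard, the true mean is judged
# below the standard with (at least) 95 percent confidence.
```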

Confidence intervals are calculated using the sample analysis results. Figure 8 shows what is expected to happen when ten different sets of samples are drawn from the same waste and a confidence interval for the mean is calculated for each set of samples. The true (but unknown) mean (µ), shown as a vertical line, does not change, but the positions of the sample means (x̄) and confidence intervals (shown as the horizontal lines) do change. For most of the sampling events, the confidence interval contains the true mean, but sometimes it does not. In this particular example, we expect 8 out of 10 intervals to contain the true mean, so we call this an "80-percent confidence interval on the mean." In practice, you only have one set of data from one sampling event, not ten. Note that an equal degree of uncertainty is associated with the parameter of interest being located outside each of the two interval endpoints. Consequently, the confidence interval employed in this example is, for all practical purposes, a 90-percent interval. We will refer to this as a "one-sided 90-percent confidence limit on the mean." Of course, other levels of confidence could be used, such as a 95-percent confidence limit.

The width of the confidence interval (defined by the upper and lower confidence limits) is an indicator of the precision of the estimate of the parameter of interest. Generally, one can improve precision (i.e., reduce the standard error, s/√n) by taking more samples, increasing the physical size of each sample (i.e., increasing the sample support), and by minimizing random variability introduced in the sampling and measurement processes.

For example, Figure 9 shows how sampling precision can affect the ability to claim an exclusion from the definition of solid waste under the comparable fuels regulations at 40 CFR 261.38. In Figure 9 "A," the sampling results are unbiased, but they are not sufficiently precise. In fact, the imprecision causes the confidence intervals to "straddle" the specification level; thus, there is not statistically significant evidence that the mean is below the standard. Imprecision can be caused by the heterogeneity of the material sampled, by random errors in the field and laboratory, and by too few samples. In Figure 9 "B," the results also are unbiased, but significant improvement in precision is observed (e.g., because more or larger samples were analyzed and errors were kept within acceptable limits), allowing us to conclude that the mean is indeed below the specification level.

Detailed guidance on the calculation of confidence limits for the mean can be found in Appendix F of this document.

3.4.2 Using a Proportion or Percentile To Determine Whether a Waste or Media Meets an Applicable Standard

Under RCRA, some regulatory thresholds are defined as concentration values that cannot be exceeded (e.g., the RCRA LDR program concentration-based treatment standards for hazardous waste specified at §268.40 and §268.48), concentration values that cannot be equaled or exceeded (e.g., the Toxicity Characteristic maximum concentration levels specified at §261.24), or waste properties that cannot be exhibited (e.g., ignitability per §261.21, corrosivity per §261.22, or reactivity per §261.23) for the waste to comply with the regulatory standard.

To demonstrate compliance with such a standard using sampling, it is necessary to consider the waste or site (whose boundaries are defined as a decision unit) as a population of discrete sample units (of a defined size, shape, and orientation). Ideally, none of these sample units may exceed the standard or exhibit the properties of concern for the waste or site to be in compliance with the standard. However, since it is not possible to know the status of all portions of a waste or site, samples must be used to infer - using statistical methods - what proportion or percentage of the waste complies, or does not comply, with the standard. Generally, few if any samples drawn from the population of interest may exceed the regulatory standard or exhibit the property of concern to demonstrate with reasonable confidence that a high proportion or percentage of the population complies with the standard.

Two simple methods for measuring whether a specified proportion or percentile of a waste or media meets an applicable standard are described in the following sections:

°	Using an upper confidence limit on a percentile to classify a waste or media (Section 3.4.2.1), and

°	Using a simple exceedance rule method to classify a waste or media (Section 3.4.2.2).
¹ EPA uses narrative criteria to define most reactive wastes, and waste handlers should use their knowledge to determine if a waste is sufficiently reactive to be regulated.

Figure 10. For a high percentile (e.g., the 99th percentile) to be less than an applicable standard, the mean concentration must be well below the standard. [Figure: frequency vs. concentration, showing the sample mean, a "point estimate" of the 99th percentile, the confidence interval on the 99th percentile, and the UCL on the upper percentile (or "tolerance limit") relative to the regulatory threshold.]
3.4.2.1 Using a Confidence Limit on a Percentile to Classify a Waste or Media

A percentile is a population parameter. We cannot know the true value of that parameter, but we can estimate it from a statistical sample drawn from the population by using a confidence interval for a percentile. If the upper confidence limit (UCL) on the upper percentile is below the fixed standard, then there is statistically significant evidence that the specified proportion of the waste or media attains the standard (see Figure 10). If the UCL on the upper percentile exceeds the standard (but all sample analysis results are below the standard), then the waste or media still could be judged in compliance with the standard; however, you would not have the specified degree of confidence that the specified proportion of the waste or media complies with the standard (see also the exceedance rule method, Section 3.4.2.2).

Detailed guidance on the calculation of confidence limits for percentiles can be found in Section 8.2 and Appendix F of this document. Methods also are given in Conover (1999), Gilbert (1987, page 136), Hahn and Meeker (1991), and USEPA (1989a). A possible alternative to using a confidence limit on a percentile is the use of the "one-sample test for proportions" (see Section 3.2.2.1 of USEPA 2000d).

3.4.2.2 Using a Simple Exceedance Rule Method To Classify a Waste

One of the most straightforward methods for determining whether a given proportion or percentage of a waste (that is, all possible samples of a given sample support) complies with an applicable standard is to use a simple exceedance rule. To apply the method, simply obtain a number of samples and require that zero or few sample analysis results be allowed to exceed the applicable standard or possess the property (or "attribute") of interest. The method (also known as "inspection by attributes") is from a class of methods known as acceptance sampling plans (Schilling 1982, ASQ 1988 and 1993, and DoD 1996). One simple form of the exceedance rule, sometimes used by regulatory enforcement agencies, specifies zero exceedances in a set of samples. This method can be used to classify a waste (i.e., determine if it exhibits the characteristics of ignitability, corrosivity, reactivity¹, or toxicity) or to determine its status under RCRA (that is, to determine if the waste is prohibited from land disposal or if it attains an LDR treatment standard).

The method is attractive because it is simple (e.g., because sample analysis results are recorded as either "pass" or "fail" and statistical tables can be used instead of equations), it does not require an assumption about the form of the underlying distribution, and it can be used when a large proportion of the data are reported as less than a quantitation limit. Furthermore, the method has statistical properties that allow the waste handler to have a known level of confidence that at least a given proportion of the waste complies with the standard. One potential drawback of using an exceedance rule is that with a small number of samples, you might not be able to conclude with high confidence that a high proportion of the waste complies with the applicable standard (unless you have sufficient knowledge of the waste indicating there is little variability in concentrations or properties). That is, with a small number of samples, there is little statistical power: an unacceptably large proportion of the waste or site could exceed the standard or exhibit the property even though no such exceedances or properties were observed in the samples. Increasing the number of samples will improve the statistical performance.
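To illustrate the confidence-versus-sample-size tradeoff, the zero-exceedance form of the rule has a simple sample-size relationship: if none of n randomly selected samples exceed the standard, one can be (1 - p^n) confident that at least a proportion p of the waste complies. The sketch below inverts that relationship; it illustrates the general principle only, and Section 5.5.2 and Appendix F remain the governing references.

```python
import math

def min_samples_zero_exceedance(proportion, confidence):
    """Smallest n such that observing zero exceedances in n random
    samples gives `confidence` that at least `proportion` of the waste
    complies (requires proportion**n <= 1 - confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(proportion))

# 95 percent confidence that at least 95 percent of the waste complies:
print(min_samples_zero_exceedance(0.95, 0.95))
```

Relaxing either the confidence level or the required proportion reduces the number of samples, which is the quantitative counterpart of the scaling suggestion in the next paragraph.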

As a practical matter, it is suggested that you scale the statistical performance and acceptance requirements (and thus, the number of samples) to the size of the lot or batch of waste of interest. For example, when large and/or very heterogeneous volumes of waste are the subject of the study, decision-makers may require high confidence that a high proportion of the waste meets the applicable standard. A relatively large number of samples will be required to satisfy these criteria if the exceedance rule is used. On the other hand, decision-makers may choose to relax the statistical performance criteria when characterizing a small volume of waste (or a very homogeneous waste), and thus fewer samples would be needed.

Detailed guidance on the use of an exceedance rule is provided in Section 5.5.2 and in Appendix F, Section F.3.2, of this document. The exceedance rule method also is described in Methods for Evaluating the Attainment of Cleanup Standards. Volume 1: Soils and Solid Media (USEPA 1989a, Section 7.4).
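The confidence attainable under a zero-exceedance rule follows from a simple calculation: if the true compliant fraction of the waste were p, the chance that n independent samples would all pass is p raised to the n. The sketch below inverts that relationship to estimate the number of samples needed; it is our own illustration (the function name and worked numbers are not taken from the cited references).

```python
import math

def samples_for_zero_exceedance(confidence: float, proportion: float) -> int:
    """Smallest n such that, if all n samples pass, you can assert with the
    stated confidence that at least `proportion` of the waste complies.

    If the true compliant fraction were only `proportion`, the chance of
    observing zero exceedances in n samples is proportion**n; requiring
    proportion**n <= 1 - confidence gives n >= ln(1-confidence)/ln(proportion).
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(proportion))

# 95% confidence that at least 95% of the waste complies requires 59 passing
# samples; relaxing the confidence level to 90% requires 45.
print(samples_for_zero_exceedance(0.95, 0.95))  # -> 59
print(samples_for_zero_exceedance(0.90, 0.95))  # -> 45
```

The same algebra illustrates why small sample sets support only weak claims: ten passing samples give 95% confidence only that roughly 74% of the waste complies.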

3.4.3 Comparing Two Populations

Some environmental studies do not involve testing compliance against a fixed standard but require comparison of two separate data sets. This type of analysis is common for detecting releases to ground water at waste management units such as landfills and surface impoundments, detecting releases to soil and the unsaturated zone at land treatment units, or determining if site contamination is distinguishable from natural background concentrations. In these situations, the operator must compare "on site" or "downgradient" concentrations to "background."

For example, at a new land-based waste management unit (such as a new landfill), we expect the concentrations in a set of samples from downgradient locations to be similar to a set of samples from background locations. If a statistically significant change in downgradient conditions is detected, then there may be evidence of a release to the environment. Statistical methods called two-sample tests can be used to make such comparisons (they are called two-sample tests because two sets of samples are used). A two-sample test also could be used to measure changes in constituent concentrations in a waste or soil "before" treatment and "after" treatment to assess the effectiveness of the treatment process (see USEPA 2002a).

For detailed guidance on the use of two-sample tests, see EPA's G-9 guidance (USEPA 2000d) and EPA's guidance on the statistical analysis of ground-water monitoring data (USEPA 1989b and 1992b).
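To make the idea concrete, here is a bare-bones sketch of one common two-sample test, the Wilcoxon rank-sum test with a normal approximation. This is our own simplified illustration; the EPA guidance documents cited above prescribe the full procedures (including tie corrections in the variance and handling of nondetects), which this sketch omits. The example concentrations are hypothetical.

```python
import math

def rank_sum_test(background, downgradient):
    """Two-sided rank-sum test: are the two sample sets drawn from the
    same distribution?  Returns (z, p) under the normal approximation."""
    nx, ny = len(background), len(downgradient)
    pooled = sorted(background + downgradient)
    # Assign midranks (1-based); tied values share the average of their ranks.
    rank_of, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0
        i = j
    w = sum(rank_of[v] for v in downgradient)      # rank sum of one group
    mu = ny * (nx + ny + 1) / 2.0                  # expected rank sum under H0
    sigma = math.sqrt(nx * ny * (nx + ny + 1) / 12.0)
    z = (w - mu) / sigma
    return z, math.erfc(abs(z) / math.sqrt(2.0))   # two-sided p-value

background = [3.1, 4.0, 2.5, 3.6, 2.9, 3.3]    # hypothetical concentrations
downgradient = [4.4, 5.2, 3.9, 6.1, 4.8, 5.5]  # hypothetical concentrations
z, p = rank_sum_test(background, downgradient)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p (< 0.05) flags a difference
```

A rank-based test is a natural fit here because, like the exceedance rule, it makes no assumption about the shape of the underlying concentration distribution.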

Note that detecting a release to the environment may not necessarily involve use of a statistical test and may not even involve sampling. For example, observation of a broken dike at a surface impoundment may indicate that a release has occurred.

3.4.4 Estimating Spatial Patterns

Under some circumstances, a site investigator may wish to determine the location of a contaminant in the environment as well as its concentration. Knowledge of spatial trends or patterns may be of particular value when conducting risk assessments or locating areas for clean-up or removal under the RCRA Corrective Action program. Estimation of spatial patterns is best addressed by geostatistics or other spatial data analysis methods.

Geostatistical models are based on the notion that elements of the population that are close together in space and/or time exhibit an identifiable relationship or positive correlation with one another. Geostatistical techniques attempt to recognize and describe the pattern of spatial dependence and then account for this pattern when generating statistical estimates. On the other hand, "classical" methods assume that members of a population are not correlated (USEPA 1997a).

While a full treatment of spatial analysis and geostatistics is beyond the scope of this guidance, certain techniques recommended in the guidance require consideration of spatial differences. For example, you may need to consider whether there are any spatial correlations in a waste or site when selecting a sampling design. There are some relatively simple graphical techniques that can be used to explore possible spatial patterns or relationships in data. For example, posting plots or spatial contour maps can be generated manually or via software (e.g., see EPA's Geo-EAS software described in Appendix H). Interested readers can find a more comprehensive explanation of spatial statistics in texts such as Myers (1997), Isaaks and Srivastava (1989), Journel (1988), USEPA (1991a, 1997a), or consult a professional environmental statistician or geostatistician.
[Figure 11. The seven steps of the DQO Process (from USEPA 2000b): State the Problem; Identify the Decision; Identify Inputs to the Decision; Define the Study Boundaries; Develop a Decision Rule; Specify Limits on Decision Errors; Optimize the Design for Obtaining Data]
4 PLANNING YOUR PROJECT USING THE DQO PROCESS

To be successful, a waste-testing program must yield data of the type and quality necessary to achieve the particular purpose of the program. This is accomplished through correct, focused, and well-documented sampling, testing, and data evaluation activities. In each case, a clear understanding of the program objectives and thorough planning of the effort are essential for a successful, cost-effective waste-testing program.

Each program design is unique because of the many possible variables in waste sampling and analysis, such as regulatory requirements, waste- and facility-specific characteristics, and objectives for the type and quantity of data to be provided. Nonetheless, a systematic planning process such as the Data Quality Objectives (DQO) Process, which takes these variables into account, can be used to guide planning efforts. EPA recommends using the DQO Process when data are being used to select between two opposing conditions, such as determining compliance with a standard.

The DQO Process yields qualitative and quantitative statements that:

° Clarify the study objectives
° Define the type, quantity, and quality of required data
° Determine the most appropriate conditions from which to collect the samples
° Specify the amount of uncertainty you are willing to accept in the results
° Specify how the data will be used to test a decision rule.

The outputs of the DQO Process are used to define the quality control requirements for sampling, analysis, and data assessment. These requirements are then incorporated into a QAPP, WAP, or other similar planning document.

The DQO Process comprises seven planning steps depicted in Figure 11. The figure shows one of the most important features of the process: its iterative nature. You don't have to "get it right the first time." You can use existing information to establish DQOs. If the initial design is not feasible, then you can iterate through one or more of the earlier planning steps to identify a sampling design that will meet the budget and generate data that are adequate for the decision. This way, you can evaluate sampling designs and related costs in advance, before significant time and resources are expended to collect and analyze samples.

In a practical sense, the DQO Process offers a structured approach to "begin with the end in mind." It is a framework for asking the right questions and using the answers to develop and implement a cost-effective plan for data collection. The DQO Process does not necessarily proceed in a linear fashion or involve rigid procedures; rather, it is a thought process to enable you to get useful information in a cost-effective manner.

¹ In some cases, it might be appropriate and cost-effective to collect data beyond that required to support a near-term decision. For example, if a drill rig is mobilized to collect deep soil samples to determine the need for remediation, it would be cost-effective to also collect relatively low-cost data (such as geotechnical parameters, total organic carbon, moisture content, etc.) needed by engineers to design the remedy. Otherwise, unnecessary costs might be incurred to remobilize a drill rig to obtain data that could have been obtained in the initial effort.

Failure to establish DQOs before implementing field and laboratory activities can cause difficulties in the form of inefficiencies, increased or unnecessary costs, or the generation of unusable data. For example, if the limit of quantitation for sample analysis is greater than the Action Level, then the data will not be usable for their intended purpose; or, if you do not collect enough samples, then you may not be able to draw conclusions with the desired level of confidence.

When properly used, the DQO Process:

° Provides a good way to document the key activities and decisions necessary to address the problem and to communicate the approach to others.

° Involves key decision makers, other data users, and technical experts in the planning process before data collection begins, which helps lead to a consensus prior to beginning the project and makes it easier to change plans when circumstances warrant because involved parties share common understandings, goals, and objectives.

° Develops a consensus approach to limiting decision errors that strikes a balance between the cost of an incorrect decision and the cost of reducing or eliminating the possible mistake.

° Saves money by greatly reducing the tendency to collect unneeded data by encouraging the decision makers to focus on data that support only the decision(s) necessary to solve the problem(s). When used with a broader perspective in mind, however, the DQO Process may help identify opportunities to consolidate multiple tasks and improve the efficiency of the data collection effort.¹
Systematic Planning and the DQO Process: EPA References and Software

Guidance for the Data Quality Objectives Process, EPA QA/G-4, August 2000, EPA/600/R-96/055. Provides guidance on how to perform the DQO Process.

Data Quality Objectives Decision Error Feasibility Trials Software (DEFT) - User's Guide, EPA QA/G-4D, September 2001, EPA/240/B-01/007 (User's Guide and Software). PC-based software for determining the feasibility of data quality objectives defined using the DQO Process.

Guidance for the Data Quality Objectives Process for Hazardous Waste Sites, EPA QA/G-4HW, January 2000, EPA/600/R-00/007. Provides guidance on applying the DQO Process to hazardous waste site investigations.
DQO Step 1: State the Problem

Purpose
To define the problem so that the focus of the study will be unambiguous.

Activities
° Identify members of the planning team.
° Identify the primary decision maker(s).
° Develop a concise description of the problem.
° Determine resources – budget, personnel, and schedule.
The remainder of this section addresses how the DQO Process can be applied to RCRA waste characterization studies. While the discussion is based on EPA's G-4 guidance (USEPA 2000b), some steps have been modified or simplified to allow for flexibility in their use. Keep in mind that not all projects or decisions (such as a hazardous waste determination) will require the full level of activities described in this section, but the logic applies nonetheless. In fact, EPA encourages use of a "graded approach" to quality assurance. A graded approach bases the level of management and QA/QC activities on the intended use of the results and the degree of confidence needed in their quality (USEPA 2001f).

4.1 Step 1: State the Problem

Before developing a data gathering program, the first step is to state the problem or determine what question or questions are to be answered by the study. For many waste characterization or monitoring programs, the questions are spelled out in the applicable regulations; however, in some cases, determining the actual problem or question to be answered may be more complex. As part of this step, perform the four activities described in the following sections.

4.1.1 Identify Members of the Planning Team

The planning team comprises personnel representing all phases of the project and may include stakeholders, decision makers, technical project managers, samplers, chemists, process engineers, QA/QC managers, statisticians, risk assessors, community leaders, grass-roots organizations, and other data users.

4.1.2 Identify the Primary Decision Maker

Identify the primary decision maker(s) or state the process by which the decision will be made (for example, by consensus).

4.1.3 Develop a Concise Description of the Problem

Develop a problem description to provide background information on the fundamental issue to be addressed by the study. For RCRA waste-related studies, the "problem" could involve determining one of the following: (1) if a solid waste should be classified as a hazardous waste, (2) if a hazardous waste is prohibited from land disposal, (3) if a treated hazardous waste attains the applicable treatment standard, (4) if a cleanup goal has been attained, or (5) if hazardous constituents have migrated from a waste management unit.

Summarize existing information into a "conceptual model" or conceptual site model (CSM), including previous sampling information, preliminary estimates of summary statistics such as the mean and standard deviation, process descriptions and materials used, and any spatial and temporal boundaries of the waste or study area that can be defined. A CSM is a three-dimensional "picture" of site conditions at a discrete point in time (a snapshot) that conveys what is known or suspected about the facility, releases, release mechanisms, contaminant fate and transport, exposure pathways, potential receptors, and risks. The CSM does not have to be based on a mathematical or computer model, although these tools often help to visualize current information and predict future conditions. The CSM should be documented by written descriptions of site conditions and supported by maps, cross sections, analytical data, site diagrams that illustrate actual or potential receptors, and any other descriptive, graphical, or tabular illustrations necessary to present site conditions.

DQO Step 2: Identify the Decision

Purpose
To define what specific decisions need to be made or what questions need to be answered.

Activities
° Identify the principal study question.
° Define the alternative actions that could result from resolution of the principal study question.
° Develop a decision statement.
° Organize multiple decisions.

4.1.4 Specify Available Resources and Relevant Deadlines

Identify available financial and human resources, identify deadlines established by permits or regulations, and establish a schedule. Allow time for developing acceptance and performance criteria, preparing planning documents (such as a QAPP, sampling plan, and/or WAP), collecting and analyzing samples, and interpreting and reporting data.

4.2 Step 2: Identify the Decision

The goal of this step is to define the questions that the study will attempt to answer and identify what actions may be taken based on the outcome of the study. As part of this step, perform the four activities described in the following sections.

4.2.1 Identify the Principal Study Question

Based on the problem identified in Step 1, identify the study question and state it as specifically as possible. This is an important step because the manner in which you frame the study question can influence whether sampling is even appropriate, and if so, how you will evaluate the results. Here are some examples of study questions that might be posed in a RCRA-related waste study:

° Does the filter cake from the filter press exhibit the TC at its point of generation?

° Does the treated waste meet the universal treatment standard (UTS) for land disposal under 40 CFR 268?

° Has the soil remediation at the SWMU attained the cleanup goal for benzene?

° Have hazardous constituents migrated from the land treatment unit to the underlying soil at concentrations significantly greater than background concentrations?

° Are radioactive and hazardous wastes colocated, producing a mixed waste management scenario?
² Testing alone might not be sufficient to determine if a solid waste is hazardous waste. You also should apply knowledge of the waste generation process to determine if the solid waste is a hazardous waste under 40 CFR 261.

DQO Step 3: Identify Inputs to the Decision

Purpose
To identify data or other information required to resolve the decision statement.

Activities
° Identify the information required to resolve the decision statement.
° Determine the sources of information.
° Identify information needed to establish the Action Level.
° Identify sampling and analysis methods that can meet the data requirements.
Before conducting a waste-sampling and testing program to comply with RCRA, you should review the specific regulatory requirements in 40 CFR in detail and consult with staff from your EPA region or the representative from your State (if your State is authorized to implement the regulation).

4.2.2 Define the Alternative Actions That Could Result from Resolution of the Principal Study Question

Generally, two courses of action will result from the outcome of the study: one that involves action, such as deciding to classify a solid waste as a hazardous waste, and one that involves an alternative action, such as deciding to classify a solid waste as a nonhazardous solid waste.²
4.2.3 Develop a Decision Statement

In performing this activity, simply combine the principal study question and the alternative actions into a "decision statement." For example, you may wish to determine whether a waste exhibits a hazardous waste characteristic. The decision statement should be in writing (for example, in the QAPP) and agreed upon by the planning team. This approach will help avoid misunderstandings later in the process.

4.2.4 Organize Multiple Decisions

If several separate decision statements must be defined to address the problem, then you should list them and identify the sequence in which they should be resolved. For example, if you classify a solid waste as a nonhazardous waste, then you will need to make a waste management decision. Options might include land disposal (e.g., in an industrial landfill or a municipal solid waste landfill), recycling, or some other use. You might find it helpful to document the decision resolution sequence and relationships in a diagram or flowchart.

4.3 Step 3: Identify Inputs to the Decision

In most cases, it will be necessary to collect data or new information to resolve the decision statement. To identify the type and source of this information, perform the activities outlined in the following four sections.

4.3.1 Identify the Information Required

For RCRA-related waste studies, information requirements typically will include samples to be collected, variables to be measured (such as total concentrations, TCLP results, or results of tests for other characteristics, such as reactivity, ignitability, and corrosivity), the units of measure (such as mg/L), the form of the data (such as on a dry weight basis), and waste generation or process knowledge.

4.3.2 Determine the Sources of Information

Identify and list the sources of information needed and qualitatively evaluate the usefulness of the data. Existing information, such as analytical data, can be very valuable. It can help you calculate the appropriate number of new samples needed (if any) and reduce the need to collect new data (see also Section 5.4).

4.3.3 Identify Information Needed To Establish the Action Level

The Action Level is the threshold value that provides the criterion for choosing between alternative actions. Under RCRA, there are several types of Action Levels.

The first type of Action Level is a fixed standard or regulatory threshold (RT), usually specified as a concentration of a hazardous constituent (e.g., in mg/L). Examples of regulatory thresholds that are Action Levels in the RCRA regulations include the TC Regulatory Levels at 40 CFR 261.24 and the Land Disposal Restrictions (LDR) numeric treatment standards at 40 CFR 268.40.

Another criterion for choosing between alternative actions is defined by the property of a waste. Three such properties are defined in the RCRA regulations: ignitability (§261.21), corrosivity (§261.22), and reactivity (§261.23). The results of test methods used to determine if a waste is ignitable, corrosive, or reactive are interpreted as either "pass" or "fail" -- i.e., the waste either has the property or it does not. Note that a concentration measurement, such as a TCLP sample analysis result, also can be interpreted as either "pass" or "fail" based on whether the value is less than or greater than a specified threshold.

A third criterion for choosing between alternative actions involves making a comparison between constituent concentrations at different times or locations to determine if there has been a change in process or environmental conditions over time. In these situations, you need to determine if the two sets of data are different relative to each other rather than checking for compliance with a fixed standard.

Finally, an Action Level can represent a proportion of the population having (or not having) some characteristic. For example, while it might be desirable to have all portions of a waste or site comply with a standard, it would be more practical to test whether some high proportion (e.g., 0.95) of units of a given size, shape, and orientation comply with the standard. In such a case, the Action Level could be set at 0.95.
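A proportion-type Action Level of this kind can be checked against sample data with an exact binomial calculation. The sketch below is our own illustration, not a procedure prescribed by this guidance: it asks how confidently the observed pass/fail results support the claim that the true compliant fraction is at least the Action Level.

```python
from math import comb

def compliant_proportion_confidence(n_samples, n_compliant, proportion):
    """One-sided confidence that the true compliant fraction is at least
    `proportion`, given `n_compliant` passing results out of `n_samples`.

    Computed as 1 minus the exact binomial probability of observing this
    few failures if the true compliant fraction were exactly `proportion`.
    """
    n_fail = n_samples - n_compliant
    tail = sum(comb(n_samples, k) * (1 - proportion) ** k
               * proportion ** (n_samples - k) for k in range(n_fail + 1))
    return 1.0 - tail

# 59 of 59 samples passing supports "at least 95% of units comply" with
# about 95% confidence; a single failure among 59 drops that sharply.
print(compliant_proportion_confidence(59, 59, 0.95))
print(compliant_proportion_confidence(59, 58, 0.95))
```

With zero failures this reduces to the zero-exceedance formula discussed in Section 3.4.2; the general form also handles data sets in which a few exceedances were observed.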

For more information on identifying the Action Level, see Section 2 (RCRA regulatory drivers for waste sampling and testing), the RCRA regulations in 40 CFR, ASTM Standard D 6250 (Standard Practice for Derivation of Decision Point and Confidence Limit for Statistical Testing of Mean Concentration in Waste Management Decisions), or consult with your State or EPA Regional staff.
³ The physical size (expressed as mass or volume), shape, and orientation of a sample is known as the sample support. Sample support plays an important role in characterizing waste or environmental media and in minimizing variability caused by the sampling process. The concept of support is discussed in greater detail in Section 6.2.3.

DQO Step 4: Define the Study Boundaries

Purpose
To define the spatial and temporal boundaries that are covered by the decision statement.

Activities
° Define the target population of interest.
° Define the "sample support."
° Define the spatial boundaries that clarify what the data must represent.
° Define the time frame for collecting data and making the decision.
° Identify any practical constraints on data collection.
° Determine the smallest subpopulation, area, volume, or time for which separate decisions must be made.
4.3.4 Confirm That Sampling and Analytical Methods Exist That Can Provide the Required Environmental Measurements

Identify and evaluate candidate sampling and analytical methods capable of yielding the required environmental measurements. You will need to revisit this step during Step 7 of the DQO Process ("Optimize the Design for Obtaining the Data") after the quantity and quality of the necessary data are fully defined. In evaluating sampling methods, consider the medium to be sampled and analyzed, the location of the sampling points, and the size, shape, and orientation of each sample (see also Section 6, "Controlling Variability and Bias in Sampling," and Section 7, "Implementation: Selecting Equipment and Conducting Sampling").

In evaluating analytical methods, choose the appropriate candidate methods for sample analyses based on the sample matrix and the analytes to be determined.

Guidance on the selection of analytical methods can be found in Chapter Two of SW-846 ("Choosing the Correct Procedure"). Up-to-date information on analytical methods can be found at SW-846 "On Line" at http://www.epa.gov/epaoswer/hazwaste/test/main.htm.

4.4 Step 4: Define the Study Boundaries

In this step of the DQO Process, you should identify the target population of interest and specify the spatial and temporal features of that population that are pertinent for decision making. To define the study boundaries, perform the activities described in the following five sections.

4.4.1 Define the Target Population of Interest

It is important for you to clearly define the target population to be sampled. Ideally, the target population coincides with the population to be sampled (Cochran 1977) – that is, the target population should represent the total collection of all possible sampling units that could be drawn. Note that the "units" that make up the population are defined operationally based on their size, shape, orientation, and handling (i.e., the "sample support").³ The sampling unit definition must be considered when defining the target population because any changes in the definition can affect the population characteristics. See Section 6.3.1 for guidance on establishing the appropriate size (mass) of a sample, and see Section 6.3.2 for guidance on establishing the appropriate shape and orientation of a sample.

Define the target population in terms of sampling units, the decision-making volume, and the location of that volume.

Sampling at the point of generation is required by regulation when determining the regulatory status of a waste. See 55 FR 11804, March 29, 1990, and 55 FR 22652, June 1, 1990.

4.4.2 Define the Spatial Boundaries

If sampling at the point of waste generation (i.e., before the waste is placed in a container or transport unit), then the sampling problem could involve collecting samples of a moving stream of material, such as from a conveyor, discharge pipe, or as poured into a container or tank. If so, then physical features such as the width of the flow or discharge and the rate of flow or discharge will be of interest for defining the spatial boundary of the problem.

If the sampling problem involves collecting samples from a waste storage unit or transport container, then the spatial boundaries can be defined by some physical feature, such as volume, length, width, height, etc. The spatial boundaries of most waste storage units or containers can be defined easily. Examples of these units follow:

° Container such as a drum or a roll-off box
° Tank
° Surface Impoundment
° Staging Pile
° Waste Pile
° Containment Building.

In other cases, the spatial boundary could be one or more geographic areas, such as areas representing "background" and "downgradient" conditions at a land treatment unit. Another example is a SWMU area that has been subject to remediation, where the objective is to verify that the cleanup goal has been achieved over a specified area or volume at the SWMU. If the study requires characterization of subsurface soils and ground water, then consult other guidance (for example, see USEPA 1989a, 1989b, 1991d, 1992a, 1993c, and 1996b).

To help the planning team visualize the boundary, it may be helpful to prepare a drawing, map, or other graphical image of the spatial boundaries, including a scale and orientation (e.g., a north arrow). If appropriate and consistent with the intended use of the information, maps also should identify relevant surface features (such as buildings, structures, surface water bodies, topography, etc.) and known subsurface features (pipes, utilities, wells, etc.).

If samples of waste will be taken at the point of generation (e.g., when the waste becomes a solid waste), the location of that point should be defined in this step of the DQO Process.

4.4.3 Define the Temporal Boundary of the Problem

A temporal boundary could be defined by a permit or regulation (such as the waste generated per day) or operationally (such as the waste generated per "batch" or truck load). You should determine the time frame to which the decision applies and when to collect the data. In some cases, different time intervals might be established to represent different populations (e.g., in the case where there is a process change over time that affects the character of the waste).

Waste characteristics or chemistry, such as the presence of volatile constituents, also could influence the time frame within which samples are collected. For example, volatilization could occur over time.

4.4.4 Identify Any Practical Constraints on Data Collection

Identify any constraints or obstacles that could potentially interfere with the full implementation of the data collection design. Examples of practical constraints include physical access to a sampling location, unfavorable weather conditions, worker health and safety concerns, limitations of available sampling devices, and availability of the waste (e.g., as might be the case for wastes generated from batch processes) that could affect the schedule or timing of sample collection.

4.4.5 Define the Scale of Decision Making

Define the smallest, most appropriate subsets of the population (sub-populations), waste, or media to be characterized based on spatial or temporal boundaries. The boundaries will define the unit of waste or media about which a decision will be made. The unit is known as the decision unit.

When defining the decision unit, the consequences of making a decision error should be carefully considered. The consequences of making incorrect decisions (Step 6) are associated with the size, location, and shape of the decision unit. For example, if a decision, based on the data collected, results in a large volume of waste being classified as nonhazardous, when in fact a portion of the waste exhibits a hazardous waste characteristic (e.g., due to the presence of a "hot spot"), then the waste generator could potentially be found in violation of RCRA. To limit the risk of managing hazardous waste with nonhazardous waste, the waste handler should consider dividing the waste stream into smaller decision units – such as the volume of waste that would be placed into an individual container to be shipped for disposal – and make a separate waste classification decision regarding each decision unit.

The planning team may establish decision units based on several considerations:

° Risk – The scale of the decision making could be defined based on an exposure scenario. For example, if the objective is to evaluate exposures via direct contact with surface soil, each decision unit could be defined based on the geographic area over which an individual is assumed to move randomly over time. In EPA's Superfund program, such a unit is known as an "exposure area" or EA (USEPA 1992c and 1996f). An example of an EA from EPA's Soil Screening Guidance: User's Guide (USEPA 1996f) is the top 2 centimeters of soil across a 0.5-acre area. In this example, the EA is the size of a suburban residential lot, and the depth represents soil of the greatest concern for incidental ingestion of soil, dermal contact, and inhalation of fugitive dust.

If evaluation of a decision unit or EA for the purpose of making a cleanup decision finds that cleanup is needed, then the same decision unit or EA should be used when evaluating whether the cleanup standard has been attained. Furthermore, the size, shape, and orientation (the "sample support") of the samples used to determine that cleanup was necessary should be the same for samples used to determine whether the cleanup standard is met (though this last condition is not strictly necessary when the parameter of interest is the mean).

DQO Step 5: Develop a Decision Rule

Purpose
To define the parameter of interest, specify the Action Level, and integrate previous DQO outputs into a single statement that describes a logical basis for choosing among alternative actions; i.e., define how the data will be used to make a decision.

Activities
° Specify the parameter of interest (mean, median, percentile).
° Specify the Action Level for the study.
° Develop a decision rule.

°
Operational
Considerations
–
The
scale
of
the
decision
unit
could
be
defined
based
on
operational
considerations,
such
as
the
need
to
characterize
each
"batch"
of
waste
after
it
has
been
treated
or
the
need
to
characterize
each
drum
as
it
is
being
filled
at
the
point
of
waste
generation.
As
a
practical
matter,
the
scale
for
the
decision
making
often
is
defined
by
the
spatial
boundaries
–
for
example
as
defined
by
a
container
such
as
a
drum,
roll­
off
box,
truck
load,
etc.
or
the
time
required
to
fill
the
container.

°
Other
–
The
possibility
of
"hot
spots"
(areas
of
high
concentration
of
a
contaminant)
may
be
apparent
to
the
planning
team
from
the
history
of
the
facility.
In
cases
where
previous
knowledge
(or
planning
team
judgment)
includes
identification
of
areas
that
have
a
higher
potential
for
contamination,
a
scale
may
be
developed
to
specifically
represent
these
areas.

Additional
information
and
considerations
on
defining
the
scale
of
the
decision
making
can
be
found
in
Guidance
for
the
Data
Quality
Objectives
Process
for
Hazardous
Waste
Site
Operations
EPA
QA/
G­
4HW
(USEPA
2000a)
and
Guidance
for
the
Data
Quality
Objectives
Process
EPA
QA/
G­
4
(USEPA
2000b).

4.5
Step
5:
Develop
a
Decision
Rule
A
statement
must
be
developed
that
combines
the
parameter
of
interest
and
the
Action
Levels
with
the
DQO
outputs
already
developed.
The
combination
of
these
three
elements
forms
the
decision
rule
and
summarizes
what
attributes
the
decision
maker
wants
to
study
and
how
the
information
will
assist
in
solving
the
central
problem.
To
develop
the
decision
rule,
perform
the
activities
described
in
the
following
three
sections:

4.5.1
Specify
the
Parameter
of
Interest
A
statistical
"parameter"
is
a
descriptive
measure
of
a
population
such
as
the
population
mean,
median,
or
a
percentile
(see
also
Section
3.2).
See
Table
2.

Some
of
the
RCRA
regulations
specify
the
parameter
of
interest.
For
example,
the
comparable
fuels
sampling
and
analysis
requirements
at
40
CFR
261.38(
c)(
8)(
iii)(
A)
specify
the
mean
as
the
parameter
of
interest,
and
the
ground­
water
monitoring
requirements
at
40
CFR
264.97
specify
the
parameter
of
interest
for
each
statistical
test.

4 EPA uses a narrative criterion to define most reactive wastes, and waste handlers should use their knowledge to determine if a waste is sufficiently reactive to be regulated.
Other
RCRA
regulations
do
not
specify
the
parameter
of
interest; however,
you
can
select
a
parameter
based
on
what
the
Action
Level
is
intended
to
represent.
In
general,
if
an
Action
Level
is
based
on
long­
term
average
health
effects,
the
parameter
of
interest
could
be
the
population
mean
(USEPA
1992a).
If
the
Action
Level
represents
a
value
that
should
never
(or
rarely)
be
exceeded,
then
the
parameter
of
interest
could
be
an
upper
population
percentile,
which
can
serve
as
a
reasonable
approximation
of
the
maximum
value.

If
the
objective
of
the
study
does
not
involve
estimation
of
a
parameter
or
testing
a
hypothesis,
then
specification
of
a
parameter
is
not
necessary.

Table 2. Population Parameters and Their Applicability to a Decision Rule

Mean – Definition: the average. Appropriate conditions for use: to estimate central tendency; comparison of the middle part of the population to an Action Level.

Median – Definition: the middle observation of the distribution (the 50th percentile); half of the data are above it and half below. Appropriate conditions for use: may be preferred to estimate central tendency if the population contains many values that are less than the limit of quantitation. The median is not a good choice if more than 50% of the population is less than the limit of quantitation, because a true median does not exist in this case. The median is not influenced by the extremes of the contaminant distribution.

Percentile – Definition: the specified percent of the sample that is equal to or below the given value. Appropriate conditions for use: for cases where it is necessary to demonstrate that, at most, only a small portion of a population could exceed the Action Level. Sometimes selected if the decision rule is being developed for a chemical that can cause acute health effects. Also useful when a large part of the population contains values less than the detection limit.

4.5.2
Specify
the
Action
Level
for
the
Study
You
should
specify
an
Action
Level
or
concentration
limit
that
would
cause
the
decision
maker
to
choose
between
alternative
actions.
Examples
of
Action
Levels
follow:

°
Comparable/
syngas
fuel
constituent
specification
levels
specified
at
§
261.38
°
Land
disposal
restrictions
concentration
level
treatment
standards
at
§
268.40
and
§
268.48
°
Risk­
based
cleanup
levels
specified
in
a
permit
as
part
of
a
corrective
action
°
"Pass"
or
"fail"
thresholds
for
tests
for
ignitability,
corrosivity,
reactivity,4 and toxicity.

Also,
be
sure
the
detection
or
quantitation
limits
for
the
analytical
methods
identified
in
DQO
Step
3
(Section
4.3)
are
below
the
Action
Level,
if
possible.
Step 6: Specify Limits on Decision Errors

Purpose: To specify the decision maker's tolerable limits on decision error.

Activities:
° Identify potential sources of variability and bias in the sampling and measurement processes (see Section 6).
° Determine the possible range on the parameter of interest.
° Choose the null hypothesis.
° Consider the consequences of making an incorrect decision.
° Specify a range of values where the consequences are minor (the "gray region").
° Specify an acceptable probability of making a decision error.
If
your
objective
is
to
compare
"onsite"
to
"background"
to
determine
if
there
is
a
statistically
significant
increase
above
background
(as
would
be
the
case
for
monitoring
releases
from
a
land
treatment
unit
under
§
264.278),
you
will
not
need
to
specify
an
Action
Level;
rather,
the
Action
Level
is
implicitly
defined
by
the
background
concentration
levels
and
the
variability
in
the
data.
A
summary
of
methods
for
determining
background
concentrations
in
soil
can
be
found
in
USEPA
1995a.
Methods
for
determining
background
concentrations
in
ground
water
can
be
found
in
USEPA
1989b
and
1992b.

Finally,
note
that
some
studies
will
not
require
specification
of
a
regulatory
or
risk­
based
Action
Level.
For
example,
the objective may be
to
identify
the
existence
of
a
release,
samples
could
be
obtained
to
verify
the
presence
or
absence
of
a
spill,
leak,
or
other
discharge
to
the
environment.
Identifying
a
potential
release
also
could
include
observation
of
abandoned
or
discarded
barrels,
containers,
and
other
closed
receptacles
containing
hazardous
wastes
or
constituents
(see
61
FR
No.
85,
page
19442).

4.5.3
Develop
a
Decision
Rule
After
you
have
completed
the
above
activities,
you
can
construct
a
decision
rule
by
combining
the
selected
population
parameter
and
the
Action
Level
with
the
scale
of
the
decision
making
(from
DQO
Process
Step
4)
and
the
alternative
action
(from
DQO
Step
2).
Decision
rules
are
expressed
as
"if
(criterion)...,
then
(action)...."
A
hypothetical
example
follows:

"If
the
true
95th
percentile
of
all
possible
100­
gram
samples
of
the
waste
being
placed
in
the
20­
cubic
yard
container
is
less
than
5.0
mg/
L
TCLP
lead,
then
the
solid
waste
will
be
classified
as
nonhazardous
waste.
Otherwise,
the
solid
waste
will
be
classified
as
a
RCRA
hazardous
waste."

Note
that
this
is
a
functional
decision
rule
based
on
an
ideal
condition
(i.
e.,
knowledge
of
the
true
concentration
that
equals
the
95th
percentile
of
all
possible
sample
analysis
results).
It
also
identifies
the
boundary
of
the
study
by
specifying
the
sample
unit
(100­
gram
samples
in
accordance
with
the
TCLP)
and
the
size
of
the
decision
unit.
It
does
not,
however,
specify
the
amount
of
uncertainty
the
decision
maker
is
willing
to
accept
in
the
estimate.
You
specify
that
in
the
next
step.
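The hypothetical rule above can be expressed directly in code. The sketch below (plain Python; the TCLP lead results and container are invented for illustration) applies the if/then rule to a simple sample estimate of the 95th percentile. Because a sample estimate is uncertain, an actual demonstration would use a statistical limit on the percentile; that uncertainty is the subject of Step 6.

```python
import statistics

# Hypothetical TCLP lead results (mg/L) from 100-gram samples drawn from
# one 20-cubic-yard container (the decision unit).
tclp_lead = [1.2, 0.8, 2.5, 1.9, 3.1, 0.6, 2.2, 1.4, 2.8, 1.1]

ACTION_LEVEL = 5.0  # mg/L, TC regulatory level for lead

# Sample estimate of the 95th percentile (inclusive method).
p95 = statistics.quantiles(tclp_lead, n=100, method="inclusive")[94]

# "If the true 95th percentile ... is less than 5.0 mg/L TCLP lead, then
# the solid waste will be classified as nonhazardous waste. Otherwise..."
classification = "nonhazardous" if p95 < ACTION_LEVEL else "hazardous"
print(f"estimated 95th percentile: {p95:.2f} mg/L -> {classification}")
```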

4.6
Step
6:
Specify
Limits
on
Decision
Errors
Because
samples
represent
only
a
portion
of
the
population,
the
information
available
to
make
decisions
will
be
incomplete;
hence,
decision
errors
sometimes
will
be
made.
Decision
errors
occur
because
decisions
are
made
using
estimates
of
the
parameter
of
interest,
rather
than
the
true
(and
unknown)
value.
In
fact,
if
you
repeatedly
sampled
and
analyzed
a
waste
over
and
over
in
an
identical
manner
the
results
would
be
a
little
different
each
time
(see
Figure
8
in
Section
3).
This variability in the results is caused by the non-homogeneity of the waste or media, slight differences in how the samples of the waste were collected and handled, variability in the analysis process, and the fact that only a small portion of the waste is usually ever sampled and tested. (See Section 6.1 for a more detailed discussion of sources of variability and bias in sampling).

5 Statisticians sometimes refer to a Type I error as a "false positive," and a Type II error as a "false negative." The terms refer to decision errors made relative to a null hypothesis, and the terms may not necessarily have the same meaning as those used by chemists to describe analytical detection of a constituent when it is not really present ("false positive") or failure to detect a constituent when it really is present ("false negative").

6 An exception to this assumption is found in "detection monitoring" and "compliance monitoring," in which underlying media (such as soil, pore water, or ground water) at a new waste management unit are presumed "clean" until a statistically significant increase above background is demonstrated (in the case of detection monitoring) or a statistically significant increase over a fixed standard is demonstrated (in the case of compliance or assessment monitoring).
For
example,
if
you
conduct
sampling
and
analysis
of
a
solid
waste
and
classify
it
as
"nonhazardous"
based
on
the
results,
when
in
fact
it
is
a
hazardous
waste,
you
will
have
made
a
wrong
decision
or
decision
error.
Alternatively,
if
you
classify
a
solid
waste
as
hazardous,
when
in
fact
it
is
nonhazardous,
you
also
will
have
made
a
decision
error.

There
are
two
types
of
decision
error.
A
"Type
I"
or
"false
rejection"
decision
error
occurs
if
you
reject
the
null
hypothesis
when
it
is
true.
(The
"null
hypothesis"
is
simply
the
situation
presumed
to
be
true
or
the
"working
assumption".)
A
"Type
II"
or
"false
acceptance"
decision
error
occurs
if
you
accept
the
null
hypothesis
when
it
is
false.5
Table
3
summarizes
the
four
possible
situations
that
might
arise
when
a
hypothesis
is
tested.
The
two
possible
true
conditions
correspond
to
the
two
columns
of
the
table:
the
null
hypothesis
or
"baseline
assumption"
is
either
true
or
the
alternative
is
true.
The
two
kinds
of
decisions
are
shown
in
the
body
of
the
table.
Either
you
decide
the
baseline
is
true,
or
you
decide
the
alternative
is
true.
Associated
with
these
two
decisions
are
the
two
types
of
risk
– the risk of making a Type I (false rejection) error (denoted by α) and the risk of making a Type II (false acceptance) error (denoted by β). You can improve your chances of making correct decisions by reducing α and β (which often requires more samples or a different sampling design)
and
by
using
field
sampling
techniques
that
minimize
errors
related
to
sampling
collection
and
handling
(see
also
Sections
6
and
7).

Table 3. Conclusions and Consequences for a Test of Hypotheses

° If you decide the baseline is true and the baseline is in fact true: correct decision.
° If you decide the baseline is true but the alternative is in fact true: Type II (false acceptance) error (probability β).
° If you decide the alternative is true but the baseline is in fact true: Type I (false rejection) error (probability α).
° If you decide the alternative is true and the alternative is in fact true: correct decision.
For
many
sampling
situations
under
RCRA,
the
most
conservative
(i.
e.,
protective
of
the
environment)
approach
is
to
presume
that
the
constituent
concentration
in
the
waste
or
media
exceeds
the
standard
in
the
absence
of
strong
evidence
to
the
contrary.6
For
example,
in
testing
a
solid
waste
to
determine
if
it
exhibits
the
TC,
the
null
hypothesis
can
be
stated
as
follows:
"the
concentration
is
equal
to
or
greater
than
the
TC
regulatory
level."
The
alternative
hypothesis
is
"the
concentration
is
less
than
the
TC
regulatory
level."
After
completion
of
the
sampling
and
analysis
phase,
you
conduct
an
assessment
of
the
data.
If
your
estimate
of
the
parameter
of
interest
is
less
than
the
threshold
when
the
true
value
of
the
parameter
exceeds
the
threshold,
you
will
make
a
decision
error
(a
Type
I
error).
If
the
estimate
of
the
parameter
of
interest
is
greater
than
the
threshold
when
the
true
value
is
less
than
the
threshold,
you
also
will
make
an
error
(a
Type
II
error) – but one that has little potential for adverse impact to human health and the environment.
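A small simulation can make the two error types concrete. In the sketch below (plain Python; the waste's "true" concentration distributions and the naive compare-the-sample-mean decision rule are invented for illustration), many studies are simulated against a waste whose true mean is known, and each kind of wrong conclusion is counted.

```python
import random
import statistics

random.seed(1)
THRESHOLD = 5.0   # regulatory threshold (mg/L)
N_SAMPLES = 8     # samples collected per simulated study
N_TRIALS = 5000   # number of simulated studies

def decide_below(true_mean, sd=1.5):
    """One study: decide 'below threshold' if the sample mean is below it."""
    measurements = [random.gauss(true_mean, sd) for _ in range(N_SAMPLES)]
    return statistics.mean(measurements) < THRESHOLD

# Waste truly exceeds the threshold (true mean 5.5 mg/L); deciding
# "below" is then the Type I (false rejection) error.
type1 = sum(decide_below(5.5) for _ in range(N_TRIALS)) / N_TRIALS

# Waste is truly below the threshold (true mean 4.5 mg/L); deciding
# "above" is then the Type II (false acceptance) error.
type2 = sum(not decide_below(4.5) for _ in range(N_TRIALS)) / N_TRIALS

print(f"simulated Type I rate: {type1:.2f}, Type II rate: {type2:.2f}")
```

Both simulated rates sit well above zero, which is the point of Step 6: the tolerable error rates must be chosen deliberately rather than left to chance.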

Note
that
during
the
planning
phase
and
during
sampling
you
will
not
know
which
kind
of
error
you
might
make.
Later,
after
a
decision
has
been
made,
if
you
rejected
the
null
hypothesis
then
you
either
made
a
Type
I
(false
rejection)
decision
error
or
not;
you
could
not
have
made
a
Type
II
(false
acceptance)
decision
error.
On
the
other
hand,
if
you
did
not
reject
the
null
hypothesis,
then
you
either
made
a
Type
II
(false
acceptance)
error
or
not;
you
could
not
have
made
a
Type
I
(false
rejection)
error.
In
either
case,
you
will
know
which
type
of
error
you
might
have
made
and
you
will
know
the
probability
that
the
error
was
made.

In
the
RCRA
program,
EPA
is
concerned
primarily
with
controlling
errors
having
the
most
adverse
consequences
for
human
health
and
the
environment.
In
the
interest
of
protecting
the
environment
and
maintaining
compliance
with
the
regulations,
there
is
an
incentive
on
the
part
of
the
regulated
entity
to
minimize
the
chance
of
a
Type
I
decision
error.
The
statistical
methods
recommended
in
this
document
emphasize
controlling
the
Type
I
(false
rejection)
error
rate
and
do
not
necessarily
require
specification
of
a
Type
II
(false
acceptance)
error
rate.

The
question
for
the
decision
maker
then
becomes,
what
is
the
acceptable
probability
(or
chance)
of
making
a
decision
error?
To
answer
this
question,
four
activities
are
suggested.
These
activities
are
based
on
guidance
found
in
Guidance
for
the
Data
Quality
Objectives
Process
QA/
G­
4
(USEPA
2000b)
but
have
been
tailored
for
more
direct
application
to
RCRA
waste­
related
studies.
The
Guidance
for
the
Data
Quality
Objectives
Process
EPA
QA/
G­
4
also
provides
detailed
guidance
on
the
use
of
a
graphical
construct
called
a
Decision
Performance
Curve
to
represent
the
quality
of
a
decision
process.

4.6.1
Determine
the
Possible
Range
on
the
Parameter
of
Interest
Establish
the
possible
range
(maximum
and
minimum
values)
of
the
parameter
of
interest
using
data
from
a
pilot
study,
existing
data
for
a
similar
waste
stream,
or
process
knowledge
(e.
g.,
using
a
materials­
balance
approach).
It
is
desirable,
but
not
required,
to
have
an
estimate
of
the
standard
deviation
as
well.
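As an illustration, the range (and, ideally, the standard deviation) can be computed from a handful of pilot-study measurements. The values below are invented:

```python
import statistics

pilot = [3.2, 4.8, 2.9, 5.6, 4.1, 3.7]  # hypothetical pilot results, ppm

low, high = min(pilot), max(pilot)       # possible range observed so far
mean = statistics.mean(pilot)
sd = statistics.stdev(pilot)             # sample standard deviation

print(f"range: {low} to {high} ppm; mean {mean:.2f}, sd {sd:.2f}")
```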

4.6.2
Identify
the
Decision
Errors
and
Choose
the
Null
Hypothesis
Table
4
presents
four
examples
of
decision
errors
that
could
be
made
in
a
RCRA
waste
study.
In
the
first
three
examples,
the
consequences
of
making
a
Type
I
error
could
include
increased
risk
to
human
health
and
the
environment
or
a
potential
enforcement
action
by
a
regulatory
authority.
The
consequences
of
making
a
Type
II
error
could
include
unnecessary
financial
and
administrative
resources
required
to
manage
the
waste
as
hazardous
(when,
in
fact,
it
is
not)
or
continuing
site
cleanup
activities
when,
in
fact,
the
site
is
"clean."
Table 4. Examples of Possible Decision Errors in RCRA Waste Studies

Example 1: Under 40 CFR 261.11, conduct sampling to determine if a solid waste is a hazardous waste by the TC.
° Null hypothesis (baseline condition): The solid waste contains TC constituents at concentrations equal to or greater than their applicable regulatory levels (i.e., the solid waste is a hazardous waste).
° Type I error (α, "false rejection"): Concluding the waste is not hazardous when, in fact, it is.
° Type II error (β, "false acceptance"): Deciding the waste is hazardous when, in fact, it is not.

Example 2: Under 40 CFR 268.7, conduct sampling and testing to certify that a hazardous waste has been treated so that concentrations of hazardous constituents meet the applicable LDR treatment standards.
° Null hypothesis (baseline condition): The concentration of the hazardous constituents exceeds the treatment standard (i.e., the treatment standard has not been attained).
° Type I error (α, "false rejection"): Concluding the treatment standard has been met when, in fact, it has not.
° Type II error (β, "false acceptance"): Concluding the treatment standard has not been met when, in fact, it has.

Example 3: Under 40 CFR 264.101 (and proposed Subpart S – Corrective Action at SWMUs), a permittee conducts testing to determine if a remediation at a SWMU has attained the risk-based cleanup standard specified in the permit.*
° Null hypothesis (baseline condition): The mean concentration in the SWMU is greater than the risk-based cleanup standard (i.e., the site is contaminated).**
° Type I error (α, "false rejection"): Concluding the site is "clean" when, in fact, it is contaminated.
° Type II error (β, "false acceptance"): Concluding the site is still contaminated when, in fact, it is "clean."

Example 4: Under 40 CFR 264.98(f), detection monitoring, monitor ground water at a regulated unit to determine if there is a statistically significant increase of contamination above background.
° Null hypothesis (baseline condition): The level of contamination in each point of compliance well does not exceed background.
° Type I error (α, "false rejection"): Concluding the contaminant concentration in a compliance well exceeds background when, in fact, it does not.
° Type II error (β, "false acceptance"): Concluding the contaminant concentration in a compliance well is similar to background when, in fact, it is higher.

* If the cleanup standard is based on "background" rather than a risk-based cleanup standard, then the hypotheses would be framed in reverse, where the mean background and on-site concentrations are presumed equal unless there is strong evidence that the site concentrations are greater than background.
** A parameter other than the mean may be used to evaluate attainment of a cleanup standard (e.g., see USEPA 1989a).

In
Example
4,
however,
the
null
hypothesis
is
framed
in
reverse
of
Examples
1
through
3.
When
conducting
subsurface
monitoring
to
detect
contamination
at
a
new
unit
(such
as
in
detection
monitoring
in
the
RCRA
ground­
water
monitoring
program),
the
natural
subsurface
environment
is
presumed
uncontaminated
until
statistically
significant
increases
over
the
background
concentrations
are
detected.
Accordingly,
the
null
hypothesis
is
framed
such
that
the
downgradient
conditions
are
consistent
with
the
background.
In
this
case,
EPA's
emphasis
on
the
protection
of
human
health
and
the
environment
calls
for
minimizing
the
Type
II
error – the
mistake
of
judging
downgradient
concentrations
the
same
as
the
background
when,
in
fact,
they
are
higher.
Detailed
guidance
on
detection
and
compliance
monitoring
can
be
found
in
RCRA
Ground­
Water
Monitoring:
Draft
Technical
Guidance
(USEPA
1992c)
and
EPA's
guidance
on
the
statistical
analysis
of
ground­
water
monitoring
data
at
RCRA
facilities
(USEPA
1989b
and
1992b).

4.6.3
Specify
a
Range
of
Possible
Parameter
Values
Where
the
Consequences
of
a
False
Acceptance
Decision
Error
are
Relatively
Minor
(Gray
Region)

The
"gray
region"
is
one
component
of
the
quantitative
decision
performance
criteria
the
planning
team
establishes
during
the
DQO
Process
to
limit
impractical
and
infeasible
sample
sizes.
The
gray
region
is
a
range
of
possible
parameter
values
near
the
action
level
where
it
is
"too
close
to
call."
This
gray
area
is
where
the
sample
data
tend
toward
rejecting
the
baseline
condition,
but
the
evidence
(data
statistics)
is
not
sufficient
to
be
overwhelming.
In
essence,
the
gray
region
is
an
area
where
it
will
not
be
feasible
to
control
the
false
acceptance
decision
error
limits
to
low
levels
because
the
high
costs
of
sampling
and
analysis
outweigh
the
potential
consequences
of
choosing
the
wrong
course
of
action.

In
statistical
language,
the
gray
region
is
called
the
"minimum
detectable
difference"
and
is
often
expressed
as
the
Greek
letter
delta (Δ).
This
value
is
an
essential
part
of
the
calculations
for
 
determining
the
number
of
samples
that
need
to
be
collected
so
that
the
decision
maker
may
have
confidence
in
the
decision
made
based
on
the
data
collected.

The
first
boundary
of
the
gray
region
is
the
Action
Level.
The
other
boundary
of
the
gray
region
is
established
by
evaluating
the
consequences
of
a
false
acceptance
decision
error
over
the
range
of
possible
parameter
values
in
which
this
error
may
occur.
This
boundary
corresponds
to
the
parameter
value
at
which
the
consequences
of
a
false
acceptance
decision
error
are
significant
enough
to
have
to
set
a
limit
on
the
probability
of
this
error
occurring.
The
gray
region
(or
"area
of
uncertainty")
establishes
the
minimum
distance
from
the
Action
Level
where
the
decision
maker
would
like
to
begin
to
control
false
acceptance
decision
errors.

In
general,
the
narrower
the
gray
region,
the
greater
the
number
of
samples
needed
to
meet
the
criteria
because
the
area
of
uncertainty
has
been
reduced.
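The trade-off between the width of the gray region and the number of samples can be sketched with the normal-theory approximation used in EPA's DQO guidance for testing a mean against a fixed standard, n ≈ (z₁₋α + z₁₋β)²σ²/Δ² + 0.5·z₁₋α². The inputs below are invented for illustration.

```python
import math
from statistics import NormalDist

def sample_size(sigma, delta, alpha=0.05, beta=0.20):
    """Approximate n for a one-sample test of a mean vs. an Action Level.

    sigma: estimated standard deviation (e.g., from a pilot study)
    delta: width of the gray region (minimum detectable difference)
    alpha, beta: tolerable false rejection / false acceptance error rates
    """
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta)
    n = ((z_a + z_b) * sigma / delta) ** 2 + 0.5 * z_a ** 2
    return math.ceil(n)

# Action Level 5 ppm, other boundary of the gray region at 4 ppm
# (delta = 1 ppm), pilot-study standard deviation 1.2 ppm:
print(sample_size(sigma=1.2, delta=1.0))  # modest n for the wider region
print(sample_size(sigma=1.2, delta=0.5))  # narrowing the region raises n
```

Halving the gray region roughly quadruples the number of samples, which is why its boundary is set by weighing sampling costs against the consequences of a wrong decision.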

The
quality
of
the
decision
process,
including
the
boundaries
of
the
gray
region,
can
be
depicted
graphically
using
a
Decision
Performance
Goal
Diagram
(DPGD).
Detailed
guidance
on
the
construction
and
use
of
DPGDs
is
given
in
EPA
DQO
guidance
documents
(e.
g.,
USEPA
2000a
and
2000b)
and
in
Data
Quality
Objectives
Decision
Error
Feasibility
Trials
Software
(DEFT)
User's
Guide
(USEPA
2001a).
Figure
12(
a)
and
Figure
12(
b)
show
how
some
of
the
key
outputs
of
Step
6
of
the
DQO
Process
are
depicted
in
a
DPGD
when
the
parameter
of
interest
is
the
mean
(Figure
12(
a))
and
a
percentile
(Figure 12(b)).

The
DPGD
given
in
Figure
12(
a)
shows
how
the
boundaries
of
the
gray
region
are
set
when
the
null
hypothesis
is
established
as
"the
true
mean
concentration
exceeds
the
standard."
Notice
that
the
planning
team
has
set
the
action
level
at
5
ppm
and
the
other
boundary
of
the
gray
region
at
4
ppm.
This
implies
that
when
the
mean
calculated
from
the
sample
data
is
less
than
4
ppm
(and
the
planning
assumptions
regarding
variability
hold
true),
then
the
data
will
be
considered
to
provide
"overwhelming
evidence"
that
the
true
mean
(unknown,
of
course)
is
below
the
action
level.
Figure 12(a). Decision Performance Goal Diagram where the mean is the parameter of interest. Null hypothesis (baseline condition): the true mean exceeds the action level. [The diagram plots the probability of deciding that the parameter exceeds the Action Level (0 to 1, y-axis) against the true value of the parameter (mean concentration, 0 to 7 ppm, x-axis). The baseline region lies above the Action Level of 5 ppm and the alternative below it. The gray region spans 4 to 5 ppm; within it, relatively large decision error rates are considered tolerable. The tolerable false rejection decision error rate is marked on the baseline side and the tolerable false acceptance decision error rate on the alternative side.]

Figure 12(b). Decision Performance Goal Diagram where a percentile is the parameter of interest. Null hypothesis (baseline condition): the true proportion – of all possible samples of a given support that are less than the applicable standard – is less than 0.90. [The diagram plots the probability of deciding that the parameter exceeds the Action Level (0 to 1, y-axis) against the true proportion of all possible samples of a given support that have concentrations less than the applicable standard (0.775 to 1.00, x-axis). The baseline region lies below the Action Level (P0) of 0.90 and the alternative above it. The gray region spans 0.90 to 0.95; within it, relatively large decision error rates are considered tolerable.]
Now
consider
the
DPGD
given
in
Figure
12(
b).
The
figure
shows
how
the
gray
region
is
set
when
the
null
hypothesis
is
established
as
"the
true
proportion
of
samples
below
the
concentration
standard
is
less
than
0.90."
Notice
in
this
example
the
planning
team
has
set
the
action
level
at
0.90
and
the
other
boundary
of
the
gray
region
at
0.95.
This
implies
that
when
the
proportion
of
samples
that
comply
with
the
standard
is
greater
than
0.95,
then
the
data
will
be
considered
to
provide
"overwhelming
evidence"
that
the
true
proportion
(unknown,
of
course)
is
greater
than
the
action
level
of
0.90.

The
term
"samples"
refers
to
all
possible
samples
of
a
specified
size,
shape,
and
orientation
(or
sample
support)
drawn
from
the
DQO
decision
unit.
Sampling
procedures
and
sample
support
can
affect
the
measurement
value
obtained
on
individual
samples
and
have
a
profound
effect
on
the
shape
of
the
sampling
distribution.
Thus,
the
outcome
of
statistical
procedures
that
examine
characteristics
of
the
upper
tail
of
the
distribution
can
be
influenced
by
the
sample
support
–
more
so
than
when
the
mean
is
the
parameter
of
interest.
Accordingly,
when
testing
for
a
proportion,
a
complete
statement
of
the
null
hypothesis
should
include
specification
of
the
sample
support.
See
Sections
6.3.1
and
6.3.2
for
guidance
on
establishing
the
appropriate
sample
support
as
part
of
the
DQO
Process.
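A simple way to evaluate such a proportion is the exact binomial test: count the samples that comply with the standard and ask how likely that count would be if the true complying proportion were only 0.90. The sketch below is plain Python with invented numbers; it mirrors the Figure 12(b) setup, where the null hypothesis is that the true proportion is less than 0.90.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, x = 40, 40   # hypothetical: 40 samples drawn, all 40 meet the standard
P0 = 0.90       # boundary of the null hypothesis (the Action Level)

# One-sided p-value: chance of observing x or more complying samples
# if the true complying proportion were exactly P0.
p_value = 1 - binom_cdf(x - 1, n, P0)
print(f"p-value = {p_value:.4f}")  # compare with the tolerable alpha
```

Here the p-value is about 0.015 (0.9⁴⁰), so at α = 0.05 the null would be rejected and the decision unit judged to comply; a single exceedance among the 40 samples would push the p-value above 0.05 and reverse the conclusion.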

4.6.4
Specify
an
Acceptable
Probability
of
Making
a
Decision
Error
You
can
never
completely
eliminate
decision
errors
or
even
know
when
they
have
occurred,
but
you
can
quantify
the
probability
of
making
such
errors.
In
this
activity,
you
establish
the
acceptable
probability
of
making
a
decision
error.

The Type I error rate (α) is a measure of the amount of "mistrust" you have in the conclusion (Myers 1997) and is also known as the significance level for a test. The flip side of this is the amount of faith or confidence you have in the conclusion. The confidence level is denoted mathematically as 1 − α. As stated previously, the Type I error (the error of falsely rejecting the null hypothesis)
is
of
greatest
concern
from
the
standpoint
of
environmental
protection
and
regulatory
compliance.

The
probability
of
making
a
Type
II
error
(the
error
of
falsely
accepting
the
null
hypothesis)
also
can
be
specified.
For
example,
if
the
sample
data
lead
you
to
conclude
that
a
waste
does
not
qualify
for
the
comparable
fuels
exclusion
(40
CFR
261.38),
when
the
true
mean
concentration
in
the
waste
is
in
fact
below
the
applicable
standard,
then
a
Type
II
(false
acceptance
error)
has
been
made.
(Note
that
some
of
the
statistical
methods
given
in
this
document
do
not
require
specification
of
a
Type
II
error
rate).

As
a
general
rule,
the
lower
you
set
the
probability
of
making
a
decision
error,
the
greater
the
cost
in
terms
of
the
number
of
samples
required,
time
and
personnel
required
for
sampling
and
analysis,
and
financial
resources
required.

An
acceptable
probability
level
for
making
a
decision
error
should
be
established
by
the
planning
team
after
consideration
of
the
RCRA
regulatory
requirements,
guidance
from
EPA
or
the
implementing
agency,
the
size
(volume
or
weight)
of
the
decision
unit,
and
the
consequences
of
making
a
decision
error.
In
some
cases,
the
RCRA
regulations
specify
the
Type
I
or
Type
II
(or
both)
error
rates
that
should
be
used.
For
example,
when
testing
a
waste
to
determine
whether
it
qualifies
for
the
comparable/
syngas
fuel
exclusion
under
40
CFR
261.38,
the
regulations
require
that
the
determination
be
made
with
a
Type
I
error
rate
set
at
5 percent (i.e., α = 0.05).7

7 Under §261.38(c)(8)(iii)(A), a generator must demonstrate that "each constituent of concern is not present in the waste above the specification level at the 95% upper confidence limit around the mean."
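The footnoted §261.38 demonstration compares the 95% upper confidence limit (UCL) on the mean against the specification level. For approximately normal data, a common one-sided form is UCL = x̄ + t(0.95, n−1)·s/√n. The sketch below uses invented concentrations and a small built-in t table (the Python standard library does not supply Student-t quantiles).

```python
import math
import statistics

# One-sided 95% Student-t quantiles t(0.95, df) from standard tables.
T95 = {5: 2.015, 6: 1.943, 7: 1.895, 8: 1.860, 9: 1.833}

def ucl95(data):
    """95% upper confidence limit on the mean: xbar + t * s / sqrt(n)."""
    n = len(data)
    return (statistics.mean(data)
            + T95[n - 1] * statistics.stdev(data) / math.sqrt(n))

# Hypothetical constituent concentrations (ppm) in eight waste samples,
# compared against a hypothetical specification level of 20 ppm.
conc = [12.0, 15.5, 9.8, 14.2, 11.6, 13.9, 10.4, 16.1]
SPEC_LEVEL = 20.0

ucl = ucl95(conc)
meets = ucl < SPEC_LEVEL  # "not present above the level" at the 95% UCL
print(f"95% UCL = {ucl:.2f} ppm; meets specification: {meets}")
```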

In
other
cases,
the
regulations
do
not
specify
any
decision
error
limits.
The
planning
team
must
specify
the
decision
error
limits
based
on
their
knowledge
of
the
waste;
impacts
on
costs,
human
health,
and
ecological
conditions;
and
the
potential
consequences
of
making
a
decision
error.
For
example,
if
the
quantity
of
waste
(that
comprises
a
decision
unit)
is
large
and/
or
heterogeneous,
then
a
waste
handler
may
require
high
confidence
(e.
g.,
95
or
99
percent)
that
a
high
proportion
of
the
waste
or
media
complies
with
the
applicable
standard.
On
the
other
hand,
if
the
waste
quantity
is
a
relatively
small
(e.
g.,
a
drum)
and
sampling
and
measurement
error
can
be
minimized,
then
the
waste
handler
may
be
willing
to
relax
the
confidence
level
required
or
simply
use
a
nonstatistical
(e.
g.,
judgmental)
sampling
design
and
reduce
the
number
of
samples
to
be
taken.

For
additional
guidance
on
controlling
errors, see Section
6
and
EPA's
DQO
guidance
(USEPA
2000a
and
2000b).

4.7
Outputs
of
the
First
Six
Steps
of
the
DQO
Process
Table
5
provides
a
summary
of
the
outputs
of
the
first
six
steps
of
the
DQO
Process.
Typically,
this
information
will
be
incorporated
into
a
QAPP,
WAP,
or
other
similar
planning
document
(as
described
in
Section
5.7).
The
DQOs
can
be
simple
and
straightforward
for
simple
projects
and
can
be
documented
in
just
a
few
pages
with
little
or
no
supporting
data.
For
more
complex
projects,
the
DQOs
can
be
more
lengthy,
and
the
supporting
data
may
take
up
volumes.
The
team
that
will
be
optimizing
the
sample
design(
s)
will
need
the
information
to
support
their
plan
development.
The
project
manager
and
the
individuals
who
assess
the
overall
outcome
of
the
project
also
will
need
the
information
to
determine
if
the
DQOs
were
achieved.

Keep
in
mind
that
the
DQO
Process
is
an
iterative
one;
it
might
be
necessary
to
return
to
earlier
steps
to
modify
inputs
when
new
data
become
available
or
to
change
assumptions
if
achieving
the
original
DQOs
is
not
realistic
or
practicable.

The
last
step
(Step
7)
in
the
DQO
Process
is
described
in
detail
in
the
next
section
of
this
document.
Example
applications
of
the
full
DQO
Process
are
presented
in
Appendix
"I."
Table
5.
Summary
of
Outputs
of
the
First
Six
Steps
of
the
DQO
Process
DQO
Step
Expected
Outputs
1.
State
the
Problem
°
List
of
members
of
the
planning/
scoping
team
and
their
role/
expertise
in
the
project.
Identify
individuals
or
organizations
participating
in
the
project
(e.
g.
facility
name)
and
discuss
their
roles,
responsibilities,
and
organization.
°
A
concise
description
of
the
problem.
°
Summary
of
available
resources
and
relevant
deadlines.

2.
Identify
the
Decision
°
A
decision
statement
that
links
the
principal
study
question
to
possible
actions
that
will
solve
the
problem
or
answer
the
question.

3.
Identify
Inputs
to
the
Decision
°
A
list
of
informational
inputs
needed
to
resolve
the
decision
statement,
how
the
information
will
be
used,
sources
of
that
information,
and
an
indication
of
whether
the
information
is
available
or
will
need
to
be
obtained.
°
A
list
of
environmental
variables
or
characteristics
that
will
be
measured.

4.
Define
the
Boundaries
°
A
detailed
description
of
the
spatial
and
temporal
boundaries
of
the
problem
(i.
e.,
define
the
population,
each
decision
unit,
and
the
sample
support).
°
Options
for
stratifying
the
population
under
study.
°
Any
practical
constraints
that
may
interfere
with
the
study.

5.
Develop
a
Decision
Rule
°
The
parameter
of
interest
that
characterizes
the
population.
°
The
Action
Level
or
other
method
for
testing
the
decision
rule.
°
An
"if
...
then..."
statement
that
defines
the
conditions
that
would
cause
the
decision
maker
to
choose
among
alternative
actions.

6.
Specify
Limits
on
Decision
Errors
°
Potential
variability
and
bias
in
the
candidate
sampling
and
measurement
methods
°
The
baseline
condition
(null
hypothesis)
°
The
boundaries
of
the
gray
region
°
The
decision
maker's
tolerable
decision
error
rates
based
on
a
consideration
of
consequences
of
making
an
incorrect
decision.
5	OPTIMIZING THE DESIGN FOR OBTAINING THE DATA

This section describes DQO Process Step 7, the last step in the DQO Process. The purpose of this step is to identify an optimal design for obtaining the data. An optimal sampling design is one that obtains the requisite information from the samples for the lowest cost and still satisfies the DQOs.

You can optimize the sampling design by performing five activities that are described in detail in this section. These activities are based on those described in Guidance for the Data Quality Objectives Process, EPA QA/G-4 (USEPA 2000b), but they have been modified to more specifically address RCRA waste-related studies.

In this final planning step, combine the data collection design information with the other outputs of the DQO Process and document the approach in a QAPP, WAP, or similar planning document. As part of this step, it may be necessary to work through Step 7 more than once after revisiting the first six steps of the DQO Process.

5.1	Review the Outputs of the First Six Steps of the DQO Process

Each of the steps in the DQO Process has a series of outputs that include qualitative and quantitative information about the study. The outputs of the first six steps of the DQO Process, as described in Section 4, serve as inputs to DQO Step 7.

Review the existing information and DQO outputs (see Table 5). Determine if any data gaps exist and whether filling those gaps is critical to completion of the project. Data gaps can be filled by means of a "preliminary study" or "pilot study." A preliminary or pilot study can include collection of samples to obtain preliminary estimates of the mean and standard deviation. In addition, a preliminary study can help you verify waste or site conditions, identify unexpected conditions or materials present, gain familiarity with the waste and facility operations, identify how the waste can be accessed, check and document the physical state of the material to be sampled, and identify potential health and safety hazards that may be present.

Review the potential sources of variability and bias ("error") that might be introduced in the sampling design and measurement processes. See Section 6 for a discussion of sources of error in sampling and analysis.
Step 7: Optimize the Design for Collecting the Data

Purpose
To identify a resource-effective data collection design for generating data that are expected to satisfy the DQOs.

Activities
°	Review the outputs of the first six steps of the DQO Process (see Section 5.1).
°	Consider various data collection design options, including sampling and analytical design alternatives (see Section 5.2) and composite sampling options (see Section 5.3).
°	For each data collection design alternative, determine the appropriate number of samples (see Section 5.4 or 5.5).
°	Select the most resource-effective design that satisfies all of the data needs for the least cost (see Section 5.6).
°	Prepare a QAPP, WAP, or similar planning document as needed to satisfy the project and regulatory requirements (see Section 5.7).
5.2	Consider Data Collection Design Options

Data collection design incorporates two interdependent activities: the sample collection design and the analytical design.

Sampling Design: In developing a sampling design, you consider various strategies for selecting the locations, times, and components for sampling, and you define appropriate sample support. Examples of sampling designs include simple random, stratified random, systematic, and judgmental sampling. In addition to sampling designs, make sure your organization has documented standard operating procedures (SOPs) that describe the steps to be followed when implementing a sampling activity (e.g., equipment preparation, sample collection, decontamination). For guidance on suggested content and format for SOPs, refer to Guidance for Preparing Standard Operating Procedures (SOPs), EPA QA/G-6 (USEPA 2001c). Sampling QA/QC activities also should be part of the sampling design. Activities used to document, measure, and control data quality include project-specific quality controls (e.g., duplicate samples, equipment blanks, field blanks, and trip blanks) and the associated quality assessments (e.g., audits, reviews) and assurances (e.g., corrective actions, reports to management). These activities typically are documented in the QAPP (see Section 5.7 and USEPA 1998a).

Analytical Design: In DQO Steps 3 and 5, an Action Level and candidate analytical methods were identified. This information should be used to develop analytical options in terms of cost, method performance, available turnaround times, and QA/QC requirements. The analytical options can be used as the basis for designing a performance-based, cost-effective analytical plan (e.g., deciding between lower-cost field analytical methods and/or higher-cost laboratory methods). Candidate laboratories should have adequate SOPs that describe the steps to be followed when implementing an analytical activity (e.g., sample receipt procedures, subsampling, sample preparation, cleanup, instrumental analysis, data generation and handling). If field analytical techniques are used, hard copies of the analytical methods or SOPs should be available in the field. Refer to Chapter Two of SW-846 for guidance on the selection of analytical methods.

The goal of this step is to find cost-effective design alternatives that balance the number of samples and the measurement performance, given the feasible choices for sample designs and measurement methods.

Sampling design is the "where, when, and how" component of the planning process. In the context of waste sampling under RCRA, there are two categories of sampling designs: (1) probability sampling and (2) authoritative (nonprobability) sampling. The choice of a sampling design should be made after consideration of the DQOs and the regulatory requirements.

Probability sampling refers to sampling designs in which all parts of the waste or media under study have a known probability of being included in the sample. In cases in which all parts of the waste or media are not accessible for sampling, the situation should be documented so its potential impacts can be addressed in the assessment phase. Probability samples can be of various types, but in some way they all make use of randomization, which allows probability statements to be made about the quality of estimates derived from the resultant data.
Probability sampling designs provide the ability to reliably estimate variability, the reproducibility of the study (within limits), and the ability to make valid statistical inferences. Five types of probability sampling designs are described in Sections 5.2.1 through 5.2.5:

°	Simple random sampling
°	Stratified random sampling
°	Systematic sampling
°	Ranked set sampling
°	Sequential sampling.

A strategy that can be used to improve the precision (reproducibility) of most sampling designs is composite sampling. Composite sampling is not a sampling design in and of itself; rather, it is a strategy used as part of a probability sampling design or an authoritative sampling design. Composite sampling is discussed in Section 5.3.

One common misconception about probability sampling procedures is that they preclude the use of important prior information. Indeed, just the opposite is true. An efficient sampling design is one that uses all available prior information to help design the study. Information obtained during DQO Step 3 ("Identify Inputs to the Decision") and DQO Step 4 ("Define the Study Boundaries") should prove useful at this stage. One of the activities suggested in DQO Step 4 is to segregate the waste stream or media into less heterogeneous subpopulations as a means of segregating variability. To determine if this activity is appropriate, it is critical to have an understanding of the various kinds of heterogeneity the constituent of concern exhibits within the waste or media (Pitard 1993). Assuming that a waste stream is homogeneous can result in serious sampling errors. In fact, some authors suggest the word "homogeneous" be removed from our sampling vocabulary (Pitard 1993, Myers 1997).

Table 6 provides a summary of the sampling designs discussed in this guidance along with conditions for their use, their advantages, and their disadvantages. Figure 13 provides a graphical representation of the probability sampling designs described in this guidance. A number of other sampling designs are available that might perform better for your particular situation; examples include cluster sampling and double sampling. If an alternative sampling design is required, review other publications such as Cochran (1977), Gilbert (1987), and USEPA (2000c), and consult a professional statistician.
Sampling Over Time or Space?

An important feature of probability sampling designs is that they can be applied along a line of time, in space (see Figure 13), or both (Gilbert 1987):

Time: Sampling designs applied over time can be described by a one-dimensional model that corresponds to flowing streams such as the following:

°	Solid materials on a conveyor belt
°	A liquid stream, pulp, or slurry moving in a pipe or from a discharge point (e.g., from the point of waste generation)
°	Continuous elongated piles (Pitard 1993).

Space: For practical reasons, sampling of material over a three-dimensional space is best addressed as though the material consists of a series of overlapping two-dimensional planes of more-or-less uniform thickness (Pitard 1993, Gy 1998). This is the case for obtaining samples from units such as the following:

°	Drums, tanks, or impoundments containing single- or multi-phasic liquid wastes
°	Roll-off bins, relatively flat piles, or other storage units
°	Landfills, soil at a land treatment unit, or a SWMU.
Table 6. Guidance for Selection of Sampling Designs

Probability Sampling

Simple Random Sampling (Section 5.2.1)
Appropriate Conditions for Use: Useful when the population of interest is relatively homogeneous (i.e., there are no major patterns or "hot spots" expected).
Advantages:
°	Provides statistically unbiased estimates of the mean, proportions, and the variability.
°	Easy to understand and implement.
Limitations:
°	Least preferred if patterns or trends are known to exist and are identifiable.
°	Localized clustering of sample points can occur by random chance.

Stratified Random Sampling (Section 5.2.2)
Appropriate Conditions for Use: Most useful for estimating a parameter (e.g., the mean) of wastes exhibiting high heterogeneity (e.g., there are distinct portions or components of the waste with high and low constituent concentrations or characteristics).
Advantages:
°	Ensures more uniform coverage of the entire target population.
°	Potential for achieving greater precision in estimates of the mean and variance.
°	May reduce costs over simple random and systematic sampling designs because fewer samples may be required.
°	Enables computation of reliable estimates for population subgroups of special interest.
Limitations:
°	Requires some prior knowledge of the waste or media to define strata and to obtain a more precise estimate of the mean.
°	Statistical procedures for calculating the number of samples, the mean, and the variance are more complicated than for simple random sampling.

Systematic Sampling (Section 5.2.3)
Appropriate Conditions for Use: Useful for estimating spatial patterns or trends over time.
Advantages:
°	Preferred over simple random sampling when sample locations are random within each systematic block or interval.
°	Practical and easy method for designating sample locations.
°	Ensures uniform coverage of site, unit, or process.
°	May be lower cost than simple random sampling because it is easier to implement.
Limitations:
°	May be misleading if the sampling interval is aligned with the pattern of contamination, which could happen inadvertently if there is inadequate prior knowledge of the pattern of contamination.
°	Not truly random, but can be modified through use of the "random within blocks" design.
Table 6. Guidance for Selection of Sampling Designs (Continued)

Probability Sampling (continued)

Ranked Set Sampling (Section 5.2.4)
Appropriate Conditions for Use:
°	Useful for reducing the number of samples required.
°	Useful when the cost of analysis is much greater than the cost of collecting samples.
°	An inexpensive auxiliary variable (based on expert knowledge or measurement) is needed and can be used to rank randomly selected population units with respect to the variable of interest.
°	Useful if the ranking method has a strong relationship with accurate measurements.
Advantages:
°	Can reduce analytical costs.
Limitations:
°	Requires expert knowledge of the waste or process, or use of auxiliary quantitative measurements, to rank population units.

Sequential Sampling (Section 5.2.5)
Appropriate Conditions for Use:
°	Applicable when sampling and/or analysis are quite expensive, when information concerning sampling and/or measurement variability is lacking, when the waste and site characteristics of interest are stable over the time frame of the sampling effort, or when the objective of the sampling effort is to test a specific hypothesis.
Advantages:
°	Can reduce the number of samples required to make a decision.
°	Allows a decision to be made with less sampling if there is a large difference between the two populations or between the true value of the parameter of interest and the standard.
Limitations:
°	May not be especially useful if multiple waste characteristics are of interest or if rapid decision making is necessary.
°	If the concentration of the constituent of concern is only marginally different from the action level, sequential procedures will require an increasing number of samples, approaching that required for other designs such as simple random or systematic sampling.
Table 6. Guidance for Selection of Sampling Designs (Continued)

Authoritative Sampling

Judgmental (Section 5.2.6.1)
Appropriate Conditions for Use:
°	Useful for generating rough estimates of the average concentration or typical property.
°	To obtain preliminary information about a waste stream or site to facilitate planning or to gain familiarity with the waste matrix for analytical purposes.
°	To assess the usefulness of samples drawn from a small portion of the waste or site.
°	To screen samples in the field to identify "hot" samples for subsequent analysis in a laboratory.
Advantages:
°	Can be very efficient with sufficient knowledge of the site or waste generation process.
°	Easy to do and explain.
Limitations:
°	The utility of the sampling design is highly dependent on expert knowledge of the waste.
°	Nonprobability-based, so inference to the general population is difficult.
°	Cannot determine reliable estimates of variability.

Biased (Section 5.2.6.2)
Appropriate Conditions for Use:
°	Useful to estimate "worst-case" or "best-case" conditions (e.g., to identify the composition of a leak, spill, or waste of unknown composition).
[Figure 13 depicts sampling over space (two-dimensional plan view) and sampling over time or along a transect (one-dimensional): (a) simple random sampling over space; (b) simple random sampling over time; (c) stratified random sampling over space with high, medium, and low strata; (d) stratified random sampling over time; (e) systematic grid sampling; (f) systematic sampling over time; (g) random sampling within blocks; (h) random sampling within segments.]

Figure 13. Probability sampling designs over space or along an interval (modified after Cochran 1977 and Gilbert 1987)
Box 3. Simple Random Sampling: Procedure

1.	Divide the area of the study into N equal-size grids, intervals (if sampling over time), or other units. The spacing between adjacent sampling locations should be established in the DQOs, but the length should be measurable in the field with reasonable accuracy. The total number of possible sampling locations (N) should be much larger than n (the number of samples to be collected).*
2.	Assign a series of consecutive numbers between 1 and N to the locations.
3.	Draw n integers between 1 and N from a random number table or use the random number function on a hand-held calculator (i.e., generate a random number between 0 and 1 and multiply the number by N).
4.	Collect samples at each of the n locations or intervals.

*	For additional guidance on calculating spacing between sampling locations, see Methods for Evaluating the Attainment of Cleanup Standards, Volume I: Soil and Solid Media (USEPA 1989a).
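The selection procedure in Box 3 can be sketched in a few lines of Python. The function name and example numbers are illustrative only and are not part of this guidance; the sketch simply draws n of N numbered units without replacement:

```python
import random

def simple_random_locations(num_units, num_samples, seed=None):
    """Select n of N equal-size grid cells (or time intervals) without
    replacement, giving every unit an equal chance of selection (Box 3)."""
    if num_samples >= num_units:
        raise ValueError("N should be much larger than n")
    rng = random.Random(seed)
    # random.sample draws without replacement with equal probability
    return sorted(rng.sample(range(1, num_units + 1), num_samples))

# Example: choose n = 5 of N = 100 numbered grid cells
locations = simple_random_locations(100, 5, seed=42)
```

A fixed seed is used here only to make the example repeatable; an actual field effort would simply use fresh random numbers or a random number table.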
5.2.1	Simple Random Sampling

The simplest type of probability sampling is simple random sampling (without replacement), in which every possible sampling unit in the target population has an equal chance of being selected. Simple random samples, like the other samples, can be either samples in space (Figure 13(a)) or in time (Figure 13(b)) and are often appropriate at an early stage of an investigation, when little is known about nonrandom variation within the waste generation process or the site. All of the sampling units should have equal volume or mass and ideally should be of the same shape and orientation, if applicable (i.e., they should have the same "sample support").

With a simple random sample, the term "random" should not be interpreted to mean haphazard; rather, it has the explicit meaning of equiprobable selection. Simple random samples are generally developed through use of a random number table (found in many statistical textbooks), a random number function on a hand-held calculator, or a computer.

One possible disadvantage of pure random sampling is that localized clustering of sample points can occur. If this occurs, one option is to select a new random time or location for the sample. Spatial or temporal biases could result if unknown trends, patterns, or correlations are present. In such situations, stratified random sampling or systematic sampling is a better option.

5.2.2	Stratified Random Sampling

In stratified random sampling, a heterogeneous unit, site, or process is divided into nonoverlapping groups called strata. Each stratum should be defined so that internally it is relatively homogeneous (that is, the variability within each stratum is less than the variability observed over the entire population) (Gilbert 1987). After each stratum is defined, simple random sampling is used within each stratum (see Figures 13(c) and 13(d)). For very heterogeneous wastes, stratified random sampling can be used to obtain a more efficient estimate of the parameter of interest (such as the mean) than can be obtained from simple random sampling.

It is important to note that stratified random sampling, as described in this guidance, can be used when the objective is to make a decision about the whole population or decision unit. If the objective is to determine if a solid waste is a hazardous waste, or to measure attainment of a treatment standard for a hazardous waste, then any obvious "hot spots" or high-concentration wastes should be characterized separately from low-concentration wastes to minimize mixing of hazardous waste with nonhazardous wastes and to prevent impermissible dilution (see also Appendix C). If the objective of the sampling effort is to identify nonrandom spatial patterns (for example, to create a map of contamination in shallow soils), then consider the use of a geostatistical technique to evaluate the site.

Box 4. Stratified Random Sampling: Procedure

1.	Use prior knowledge of the waste stream or site to divide the target population into L nonoverlapping strata such that the variability within each stratum is less than the variability of the entire population (for example, see Figures 13c and 13d). The strata can represent area, volume, mass, or time intervals.
2.	Assign a weight Wh to each stratum. The weight Wh of the hth stratum should be determined based on its relative importance to the data user, or it can be the proportion of the volume, mass, or area of the waste that is in stratum h.
3.	Conduct random sampling within each stratum.

In stratified random sampling, it is usually necessary to incorporate prior knowledge and professional judgment into a probabilistic sampling design. Generally, wastes or units that are "alike" or anticipated to be "alike" are placed together in the same stratum. Units that are contiguous in space (e.g., similar depths) or time are often grouped together into the same stratum, but characteristics other than spatial or temporal proximity can be employed. For example, you could stratify a waste based on particle size (such that relatively large pieces of contaminated debris are assigned to one stratum and unconsolidated fines are assigned to a separate stratum). This is called stratification by component. See Appendix C of this guidance for additional information on stratification, especially as a strategy for sampling heterogeneous wastes, such as debris.

In stratified random sampling, a decision must be made regarding the allocation of samples among strata. When the chemical variation within each stratum is known, samples can be allocated among strata using optimum allocation, in which more samples are allocated to strata that are larger, more variable internally, or cheaper to sample (Cochran 1977, Gilbert 1987). An alternative is to use proportional allocation, in which the sampling effort in each stratum is directly proportional to the size (for example, the mass) of the stratum. See Section 5.4.2 for guidance on determining optimum and proportional allocation of samples to strata.
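The two allocation rules can be sketched as follows. The function names and example strata are hypothetical, and the score Wh·sh/√ch used for optimum allocation follows the Cochran-style rule described above; the equations in Section 5.4.2 govern for an actual design:

```python
import math

def proportional_allocation(n, weights):
    """Allocate n samples to strata in proportion to stratum weights Wh."""
    return [round(n * w) for w in weights]

def optimum_allocation(n, weights, std_devs, costs=None):
    """Cochran-style optimum allocation: more samples go to strata that are
    larger (Wh), more variable internally (sh), or cheaper to sample (ch)."""
    if costs is None:
        costs = [1.0] * len(weights)
    scores = [w * s / math.sqrt(c) for w, s, c in zip(weights, std_devs, costs)]
    total = sum(scores)
    return [round(n * sc / total) for sc in scores]

# Example: three strata making up 50%, 30%, and 20% of the waste mass
print(proportional_allocation(20, [0.5, 0.3, 0.2]))  # [10, 6, 4]
print(optimum_allocation(20, [0.5, 0.3, 0.2], [1.0, 4.0, 2.0]))
```

Note that rounding can make the allocated totals differ slightly from n; in practice the counts would be adjusted to meet the DQOs.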

There are several advantages to stratified random sampling. Stratified random sampling:

°	Ensures more uniform coverage of the entire target population
°	Ensures that subareas that contribute to overall variability are included in the sample
°	Achieves greater precision in certain estimation problems
°	Generally will be more cost-effective than simple random sampling, even when imperfect information is used to form the strata.

There are also some disadvantages to stratified random sampling. Stratified random sampling is slightly more difficult to implement in the field, and the statistical calculations for stratified sampling are more complex than for simple random sampling (e.g., due to the use of weighting factors and more complex equations for the appropriate number of samples).
Box 5. Systematic Sampling: Procedure

Sampling Over Space
1.	Determine the size of the area to be sampled.
2.	Denote the surface area of the sample area by A.
3.	Assuming a square grid is used, calculate the length of spacing between grid nodes (L):

	L = √(A / n)

	where n is the number of samples. The distance L should be rounded to the nearest unit that can be easily measured in the field.
4.	To determine the sampling locations, randomly select an initial sampling point within the area to be sampled. Using this location as one intersection of two gridlines, construct gridlines parallel to the original grid and separated by distance L.
5.	Collect samples at each grid node (line intersection) (see Figure 13e). Alternatively, randomly select a sampling point within each grid block (see Figure 13g).

Sampling Along a Line (e.g., Over Time)
1.	Determine the start time and point and the total length of time (N) over which the samples will be collected.
2.	Decide how many samples (n) will be collected over the sampling period.
3.	Calculate a sampling interval k, where k = N / n.
4.	Randomly select a start time and collect a sample every kth interval until n samples have been obtained (see Figure 13f). Alternatively, randomly select a sampling point within each interval (Figure 13h).
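The grid layout in Box 5 can be sketched as follows, assuming a rectangular study area; the function name and dimensions are illustrative only:

```python
import math
import random

def systematic_grid(area_x, area_y, n, seed=None):
    """Lay out an approximately n-point square sampling grid over a rectangle.

    Spacing L = sqrt(A/n) per Box 5; the grid is anchored at a randomly
    selected initial point, as the procedure requires."""
    spacing = math.sqrt((area_x * area_y) / n)
    rng = random.Random(seed)
    # Random anchor point within the first grid cell
    x0, y0 = rng.uniform(0, spacing), rng.uniform(0, spacing)
    points = []
    x = x0
    while x < area_x:
        y = y0
        while y < area_y:
            points.append((round(x, 2), round(y, 2)))
            y += spacing
        x += spacing
    return spacing, points

spacing, nodes = systematic_grid(100.0, 50.0, 20, seed=1)
# spacing = sqrt(5000/20), about 15.8 units between grid nodes
```

Because the anchor point is random, the actual node count varies slightly around n from one random start to another; the field crew would round L to a distance that is easy to measure.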
5.2.3	Systematic Sampling

Systematic sampling entails taking samples at a preset interval of time or space, using a randomly selected time or location as the first sampling point (Gilbert 1987).

Systematic sampling over space involves establishing a two-dimensional grid over the unit or waste under investigation (Figure 13(e)). The orientation of the grid is sometimes chosen randomly, and various types of systematic samples are possible. For example, points may be arranged in a pattern of squares (rectangular grid sampling) or a pattern of equilateral triangles (triangular grid sampling). The result of either approach is a simple pattern of equally spaced points at which sampling is to be performed. As shown in Figure 13(f), systematic sampling also can be conducted along a transect (every five feet, for example), along time intervals (every hour, for example), or by flow or batches (every 10,000 gallons, for example) (King 1993).

The systematic sampling approach is attractive because it can be easily implemented in the field, but it has some limitations, such as not being truly random. You can improve on this sampling design by using random sampling within each grid block (Figure 13(g)) or within each time interval (Figure 13(h)). This approach maintains the condition of equiprobability during the sampling event (Myers 1997) and can be considered a form of stratified random sampling in which the boundaries of the strata are arbitrarily defined (rather than using prior information) and only one random sample is taken per stratum (Gilbert 1987). This approach is advantageous because it avoids potential problems caused by cycles or trends.

Systematic sampling also is preferred when one of the objectives is to locate "hot spots" within a site or otherwise map the pattern of concentrations over an area (e.g., using geostatistical techniques). Even without using geostatistical methods, "hot spots" or other patterns could be identified by using a systematic design (see "ELIPGRID" software in Appendix H and Gilbert 1987, page 119). On the other hand, the systematic sampling design should be used with caution whenever there is a possibility of some type of cyclical pattern in the waste unit or process that might match the sampling frequency, especially for processes being measured over time (such as discharges from a pipe or material on a conveyor).

[Figure 14 plots concentration against time for a process with a cyclic trend, showing the mean concentration as a horizontal line and two systematic sample sets, "A" and "B," taken at uniform intervals.]

Figure 14. Potential pitfall of systematic sampling over time: cyclic trend combined with a systematic sampling design (after Cochran 1977 and Gilbert 1987)

Figure 14 illustrates the potential disadvantage of using systematic sampling when cyclic trends are present. When there is a cyclic trend in a waste generation process, using a uniform pattern of sampling points can result in samples with very unusual properties. The sets of points labeled "A" and "B" are systematic samples for which the sampling intervals are one period and one-half period, respectively. The points labeled "A" would result in a biased estimate of the mean but a sampling variance of zero. The points labeled "B" would result in an unbiased estimate of the mean with very small variance, even a zero variance if the starting point happened to be aligned exactly with the mean.

5.2.4	Ranked Set Sampling

Ranked set sampling (RSS) (McIntyre 1952) can create a set of samples that at a minimum is equivalent to a simple random sample, but it can be as much as two to three times more efficient than simple random sampling. This is because RSS uses the availability of expert knowledge or an inexpensive surrogate measurement or auxiliary variable that is correlated with the more expensive measurement of interest. The auxiliary variable can be a qualitative measure, such as visual inspection for color, or an inexpensive quantitative (or semi-quantitative) measure that can be obtained from a field instrument, such as a photoionization detector for volatile organics or an X-ray fluorescence analyzer for elemental analysis. RSS exploits this correlation to obtain a sample that is more representative of the population than would be obtained by random sampling, thereby leading to more precise estimates of the population parameters than random sampling. RSS is similar to other probabilistic sampling designs, such as simple random sampling, in that sampling points are identified and samples are collected. In RSS, however, only a subset of the samples is selected for analysis.

RSS consists of creating m groups, each of size m (for a total of m × m initial samples), then ranking the surrogate from largest to smallest within each group. One sample from each group is then selected according to a specified procedure, and these m samples are analyzed for the more expensive measurement of interest (see Box 6 and Figure 15).

The true mean concentration of the characteristic of interest is estimated by the arithmetic sample mean of the measured samples (e.g., by Equation 1). The population variance and standard deviation also are estimated by the traditional equations (e.g., by Equations 2 and 3). For additional information on RSS, see USEPA 1995b, USEPA 2000c, and ASTM D 6582, Standard Guide for Ranked Set Sampling: Efficient Estimation of a Mean Concentration in Environmental Sampling.
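The RSS cycle described above (and detailed in Box 6) can be sketched as follows. The unit records and the field-screening variable used for ranking are hypothetical:

```python
import random

def ranked_set_sample(population, m, r, rank_key, seed=None):
    """Ranked set sampling sketch: each cycle draws m*m units at random,
    splits them into m sets, ranks each set by an inexpensive auxiliary
    measure (rank_key), and keeps the i-th ranked unit from the i-th set.
    Repeated for r cycles, this yields n = m*r units for full analysis."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(r):
        units = rng.sample(population, m * m)
        for i in range(m):
            group = sorted(units[i * m:(i + 1) * m], key=rank_key)
            chosen.append(group[i])  # set i keeps the unit with rank i+1
    return chosen

# Example with a hypothetical field-screening reading attached to each unit
units = [{"id": k, "screen": random.random()} for k in range(100)]
selected = ranked_set_sample(units, m=3, r=2,
                             rank_key=lambda u: u["screen"], seed=7)
# 3 sets per cycle x 2 cycles = 6 units sent for laboratory analysis
```

Only the selected units incur the expensive analysis; the remaining screened units are set aside, which is the source of the cost savings noted above.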
5.2.5	Sequential Sampling

In sequential testing procedures (Wald 1973), sampling is performed by analyzing one (or more) sample(s) at a time until enough data have been collected to meet the statistical confidence level that the material does not exceed the critical level. The expected sample size using this sequential procedure can be approximately 30 to 60 percent lower than a corresponding fixed-sample-size test with the same power. The sequential procedure is especially helpful in situations in which the contamination is very high or very low relative to the action level. In these situations, the sequential procedure will quickly accumulate enough evidence to conclude that the waste or site either meets or fails to meet the standard.

Figure 16 shows how the procedure operates in a simple example for determining the mean concentration of a constituent of concern in soil. This particular example involves clean closure of a waste management unit; however, the approach could be used for other situations in which the mean is the parameter of interest. The procedure consists of analyzing groups of samples and calculating the mean and 80-percent confidence interval (or upper 90-percent confidence limit) for the mean after analysis of each group of samples. The horizontal axis represents the number of sample units evaluated. The vertical axis represents the concentration of the contaminant; plotted are the mean and 80-percent confidence interval after analysis of n samples. The action level (AL), against which the sample is to be judged, is shown as a horizontal line. The sampled units are analyzed first in a small lot (e.g., five samples). After each evaluation, the mean and the confidence interval on the mean are determined (point "a"). If the 90-percent UCL on the mean value stays above the critical value, AL, after successive increments are analyzed, the soil in the unit cannot be judged to attain the action level (point "b"). If the UCL goes below the critical value line, it may be concluded that the soil attains the standard. In the figure, the total number of samples is successively increased until the 90-percent UCL falls below the critical level (points "c" and "d").

[Figure 15 depicts four sets of four ranked samples (m = 4); within each set, the sample whose rank matches the set number is sent for analysis and the remaining samples are ignored. For example, if 12 samples are needed, the process is repeated 2 more times using fresh samples.]

Figure 15. Ranked set sampling. After the samples are ranked in order from lowest to highest, a sample is selected for analysis from Set 1 with Rank 1, from Set 2 with Rank 2, etc.

Box 6. Ranked Set Sampling: Procedure

1.	Identify some auxiliary characteristic by which samples can be ranked in order from lowest to highest (e.g., by use of a low-cost field screening method).
2.	Randomly select m × m samples from the population (e.g., by using simple random sampling).
3.	Arrange these samples into m sets of size m.
4.	Within each set, rank the samples by using only the auxiliary information on the samples.
5.	Select the samples to be analyzed as follows (see Figure 15):
	°	In Set 1, select the sample with rank 1
	°	In Set 2, select the sample with rank 2, etc.
	°	In Set m, select the sample with rank m.
6.	Repeat Steps 1 through 5 for r cycles to obtain a total of n = mr samples for analysis.

[Figure 16 plots concentration against the cumulative number of samples (n = 5, 10, 20, 40), showing points "a" through "d" as the mean calculated from n samples with its confidence interval relative to the risk-based action level (AL); intervals above the AL line indicate the soil does not attain the AL, and intervals below indicate that it does.]

Figure 16. Example of sequential testing for determining if concentrations of a constituent of concern in soil at a closed waste management unit are below a risk-based action level (AL)

A sequential sampling approach also can be used to test a percentile against a standard. A detailed description of this method is given in Chapter 8 of Methods for Evaluating the Attainment of Cleanup Standards Volume 1: Soil and Solid Media (USEPA 1989a).

In sequential sampling, the number of samples is not fixed a priori; rather, a statistical test is performed after each analysis to arrive at one of three possible decisions: reject the hypothesis, accept the hypothesis, or perform another analysis. This strategy is applicable when sampling and/or analyses are quite expensive, when information concerning sampling and/or measurement variability is lacking, when the waste and site characteristics of interest are stable over the time frame of the sampling effort, or when the objective of the sampling effort is to test a specific hypothesis. It may not be especially useful if multiple waste characteristics are of interest or if rapid decision making is necessary.
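The sequential decision logic described above can be sketched as follows. This is a simplified illustration, not a prescribed method: it uses a normal z-quantile in place of the t-quantile a real analysis would likely use, and the batch concentrations and action level are hypothetical.

```python
from statistics import NormalDist, mean, stdev

def ucl90(data):
    """One-sided 90-percent upper confidence limit on the mean.

    Uses the normal z-quantile for simplicity; a t-quantile would be
    more appropriate at the small sample sizes used in practice.
    """
    z = NormalDist().inv_cdf(0.90)
    return mean(data) + z * stdev(data) / len(data) ** 0.5

def sequential_decision(batches, action_level):
    """After each new batch of analyses, decide whether to stop or keep sampling.

    Mirrors the Figure 16 logic: conclude attainment once the 90-percent
    UCL on the mean falls below the action level.
    """
    data = []
    for batch in batches:
        data.extend(batch)
        if len(data) >= 2 and ucl90(data) < action_level:
            return "attains", len(data)
    return "cannot conclude attainment", len(data)

# Hypothetical concentrations (ppm) analyzed in successive small lots
batches = [[8.2, 7.9, 9.1, 8.5, 8.8], [8.0, 8.3, 8.1, 8.4, 7.8]]
decision, n_used = sequential_decision(batches, action_level=9.0)
print(decision, n_used)
```

If the UCL stays above the action level after all planned batches, the result is inconclusive and either more increments are analyzed or attainment cannot be demonstrated.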

In planning for a sequential sampling program, the following considerations are important:

° Pre-planning the effort between the field and laboratory, including developing a system of pre-planned paperwork and sample containers
° Arranging for a system of rapid delivery of samples to the laboratory
° Providing rapid turnaround in the laboratory
° Rapidly returning data to the planners, supervisors, and others responsible for decision making.

If the sequential sampling program is carried out using field methods (e.g., portable detectors), much of the inconvenience involved with shipping and return of results can be avoided.

5.2.6 Authoritative Sampling

Authoritative sampling is a nonstatistical sampling design because it does not assign an equal probability of being sampled to all portions of the population. This type of sampling should be considered only when the objectives of the investigation do not include the estimation of a population parameter. For example, authoritative sampling might be appropriate when the objective of a study is to identify specific locations of leaks, or when the study is focused solely on the sampling locations themselves. The validity of the data gathered with authoritative sampling depends on the knowledge of the sampler and, although valid data sometimes can be obtained, authoritative sampling is not recommended for the chemical characterization of wastes when the parameter of interest (such as the mean) is near the action level.

Authoritative sampling (also known as judgmental sampling, biased sampling, nonprobability sampling, nonstatistical sampling, purposive sampling, or subjective sampling) may be appropriate under circumstances such as the following:

° You need preliminary information about a waste stream or site to facilitate planning or to gain familiarity with the waste matrix for analytical purposes.
° You are conducting sampling for a RCRA Facility Assessment (RFA) to identify a potential or actual release to the environment.
° You have encountered a spill of an unknown chemical and need to determine the chemical makeup of the spilled material.
° You have access to only small portions of the population, and judgment is applied to assess the usefulness of samples drawn from the small portion.
° You are screening samples in the field, using an appropriate field method, to identify "hot" samples for subsequent analysis in a laboratory.
° You are sampling to support case development for an enforcement agency or to "prove the positive" (see also Section 2.2.4).

With authoritative sampling, it is not possible to accurately estimate the population variance. Also, due to its subjective nature, the use of authoritative sampling by the regulated community to demonstrate compliance with regulatory standards generally is not advisable except in those cases in which a small volume of waste is in question or where the concentration is either well above or well below the regulatory threshold.

The ASTM recognizes two types of authoritative sampling: judgmental sampling and biased sampling (ASTM D 6311).

5.2.6.1 Judgmental Sampling

Judgmental sampling is a type of authoritative sampling. The goal of judgmental sampling is to use process or site knowledge to choose one or more sampling locations to represent the "average" concentration or "typical" property.

Judgmental sampling designs can be cost-effective if the people choosing the sampling locations have sufficient knowledge of the waste. If the people choosing the sampling locations intentionally distort the sampling by a prejudiced selection, or if their knowledge is lacking, judgmental sampling can lead to incorrect and sometimes very costly decisions. Accurate and useful data can be generated from judgmental sampling more easily if the population is relatively homogeneous and the existence of any strata and their boundaries is known. The disadvantages of judgmental sampling designs follow:
1 Some authors use the term "discrete sample" to refer to an individual sample that is used to form a composite sample. The RCRA regulations often use the term "grab sample." For the purpose of this guidance, the terms "discrete," "grab," and "individual" sample have the same meaning.

° It can be difficult to demonstrate that prejudice was not employed in sampling location selection
° Variances calculated from judgmental samples may be poor estimates of the actual population variance
° Population statistics cannot be generated from the data due to the lack of randomness.

An example application of judgmental sampling is given in Appendix C of Guidance for the Data Quality Objectives Process for Hazardous Waste Site Operations (USEPA 2000a).

5.2.6.2 Biased Sampling

Biased sampling is the type of authoritative sampling that is intended not to estimate average concentrations or typical properties, but to estimate "worst" or "best" cases (ASTM D 6051-96). The term "biased," as used here, refers to the collection of samples with expected very high or very low concentrations. For example, a sample taken at the source of a release could serve as an estimate of the "worst-case" concentration found in the affected media. This information would be useful in identifying the constituent of concern and estimating the maximum level of contamination likely to be encountered during a cleanup.

At times, it may be helpful to employ a "best-case" or both a "best-case" and "worst-case" biased sampling approach. For example, if there is a range of wastes and process knowledge can be used to identify the wastes likely to have the lowest and highest contamination levels, then these two extremes could be sampled to help define the extent of the problem.

Biased sampling, while able to generate information cost-effectively, has disadvantages similar to those of judgmental sampling.

5.3 Composite Sampling

Composite sampling is a strategy in which multiple individual or "grab" samples (from different locations or times) are physically combined and mixed into a single sample so that a physical, rather than a mathematical, averaging takes place.1 Figure 17 illustrates the concept of composite samples. For a well-formed composite, a single measured value should be similar to the mean of measurements of the individual components of the composite (Fabrizio, et al. 1995). Collection of multiple composite samples can provide improved sampling precision and reduce the total number of analyses required compared to noncomposite sampling. This strategy is sometimes employed to reduce analysis costs when analysis costs are large relative to sampling costs. The appropriateness of using composite sampling will be highly dependent on the DQOs (Myers 1997), the constituent of concern, and the regulatory requirements. To realize the full benefits of composite sampling, field and laboratory personnel must carefully follow correct procedures for sample collection, mixing, and subsampling (see Sections 6 and 7).

Figure 17. Forming composite samples from individual samples (from USEPA 1995c). Individual field samples are combined and mixed to form a single composite.

5.3.1 Advantages and Limitations of Composite Sampling

A detailed discussion of the advantages and limitations of composite sampling is presented in the Standard Guide for Composite Sampling and Field Subsampling for Environmental Waste Management Activities (ASTM D 6051-96) and EPA's Guidance for Choosing a Sampling Design for Environmental Data Collection, EPA QA/G-5S (USEPA 2000c). Additional information on composite sampling can be found in Edland and van Belle (1994), Gilbert (1987), Garner, et al. (1988 and 1989), Jenkins, et al. (1996 and 1997), Myers (1997), and USEPA (1995c).

Advantages

The principal advantages of using composite sampling (see ASTM D 6051-96) follow:

° It can improve the precision (i.e., reduce between-sample variance) of the estimate of the mean concentration of a constituent in a waste or media (see Section 5.3.5)
° It can reduce the cost of estimating a mean concentration, especially in cases in which analytical costs greatly exceed sampling costs or in which analytical capacity is limited
° A "local" composite sample, formed from several increments obtained from a localized area, is an effective way to increase the sample support, which reduces grouping and segregation errors (see also Section 6.2.2.2)
° It can be used to determine whether the concentration of a constituent in one or more individual samples used to form a composite might exceed a fixed standard (i.e., is there a "hot spot"?) (see Section 5.3.6).

Limitations

Composite sampling should not be used if the integrity of the individual sample values changes because of the physical mixing of samples (USEPA 1995c). The integrity of individual sample values could be affected by chemical precipitation, exsolvation, or volatilization during the pooling and mixing of samples. For example, volatile constituents can be lost upon mixing of samples, or interactions can occur among sample constituents. In the case of volatile constituents, compositing of individual sample extracts within a laboratory environment may be a reasonable alternative to mixing individual samples as they are collected.
Listed below are some additional conditions under which compositing usually is not advantageous:

° When regulations require the use of discrete or grab samples. For example, compliance with the LDR numeric treatment standards for nonwastewaters typically is to be determined using "grab" samples rather than composite samples. Grab samples processed, analyzed, and evaluated individually normally reflect maximum process variability, and thus reasonably characterize the range of treatment system performance. Typically, grab samples are used to evaluate LDR nonwastewaters and composite samples are used to evaluate LDR wastewaters, except when evaluating wastewaters for metals (D004 through D011), for which grab samples are required [40 CFR 268.40(b)].
° When data users require specific data points to generate high-end estimates or to calculate upper percentiles
° When sampling costs are much greater than analytical costs
° When analytical imprecision outweighs sampling imprecision and population heterogeneity
° When individual samples are incompatible and may react when mixed
° When properties of discrete samples, such as pH or flash point, may change qualitatively upon mixing. (Compositing of individual samples from different locations to be tested for hazardous waste characteristic properties, such as corrosivity, reactivity, ignitability, and toxicity, is not recommended.)
° When analytical holding times are too short to allow for analysis of individual samples, if testing of individual samples is required later (for example, to identify a "hot" sample) (see Section 5.3.6)
° When the sample matrix impedes correct homogenization and/or subsampling
° When there is a need to evaluate whether the concentrations of different contaminants are correlated in time or space.

5.3.2 Basic Approach To Composite Sampling

The basic approach to composite sampling involves the following steps:

° Identify the boundaries of the waste or unit. The boundaries may be spatial, temporal, or based on different components or strata in the waste (such as battery casings and soil).
° Conduct sampling in accordance with the selected sampling design and collect a set of n × g individual samples, where g is the number of individual samples used to form each composite and n is the number of such composites.
2 By the Central Limit Theorem (CLT), we expect composite samples to generate normally distributed data. The CLT states that if a population is repeatedly sampled, the means of all the sampling events will tend to form a normal distribution, regardless of the shape of the underlying distribution.

Figure 18. A basic approach to composite sampling. The figure shows how composite sampling can be integrated into a simple random sampling design. Random samples with the same letter are randomly grouped into composite samples to obtain an estimate of the unit-wide mean. In the example shown, n × g = 9 individual field samples collected within the decision unit boundary form n = 3 composite samples, and a subsample of each composite is analyzed.

° Group (either randomly or systematically) the set of n × g individual samples into n composite samples, and thoroughly mix and homogenize each composite sample.
° Take one or more subsamples from each composite.
° Analyze each subsample for the constituent(s) of concern.

The n composite samples can then be used to estimate the mean and variance (see Section 5.3.5) or identify "hot spots" in the waste (see Section 5.3.6).
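The grouping-and-averaging steps above can be sketched numerically. A composite measurement is modeled here as the average of its g individual concentrations, which assumes ideal mixing and subsampling; the concentration values are hypothetical.

```python
import random

def form_composites(field_samples, n, g, rng=random.Random(1)):
    """Randomly group n * g individual sample values into n composites.

    Physically mixing g individual samples and measuring the mixture is
    modeled as averaging the g individual concentrations.
    """
    assert len(field_samples) == n * g
    shuffled = field_samples[:]
    rng.shuffle(shuffled)                      # random grouping of field samples
    composites = []
    for i in range(n):
        group = shuffled[i * g:(i + 1) * g]
        composites.append(sum(group) / g)      # physical averaging of the mixture
    return composites

# Hypothetical concentrations (ppm) for 9 individual field samples (n = 3, g = 3)
field = [4.1, 5.0, 3.8, 6.2, 4.7, 5.5, 4.9, 5.1, 4.4]
comps = form_composites(field, n=3, g=3)
print(len(comps))  # 3
```

With equal-sized groups, the mean of the n composite results equals the mean of all n × g individual values, which is why compositing preserves the unit-wide mean while reducing the number of analyses.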

5.3.3 Composite Sampling Designs

Composite sampling can be implemented as part of a statistical sampling design, such as simple random sampling and systematic sampling. The choice of a sampling design to use with compositing will depend upon the study objectives.

5.3.3.1 Simple Random Composite Sampling

Figure 18 shows how composite sampling can be integrated into a simple random sampling design. In this figure, the decision unit could represent any waste or media about which a decision must be made (such as a block of contaminated soil at a SWMU). Randomly positioned field samples are randomly grouped together into composite samples. The set of composite samples can then be used to estimate the mean and the variance.

Because the compositing process is a mechanical way of averaging out variabilities in concentrations from location to location over a unit, the resulting concentration data should tend to be more normally distributed than individual samples (Exner, et al. 1985). This is especially advantageous because the assumption of many statistical tests is that the underlying data exhibit an approximately normal distribution.2
Figure 19. Systematic composite sampling across a unit or site. Samples with the same letter are pooled into composites. In the example shown, sampling locations labeled A through D repeat in a regular pattern across the decision unit.

Figure 20. Systematic sampling within grid blocks or intervals. Samples with the same letter are pooled into a composite sample. In the example shown, the samples within each grid block (A through F) form one composite.
5.3.3.2 Systematic Composite Sampling

A systematic composite sampling design is shown in Figure 19. The design can be used to estimate the mean concentration because each composite sample is formed from field samples obtained across the entire unit. For example, each field sample collected at the "A" locations is pooled and mixed into one composite sample. The process is then repeated for the "B," "C," and "D" locations. The relative location of each individual field sample (such as "A") should be the same within each block.

This design is particularly advantageous because it is easy to implement and explain and it provides even coverage of the unit. Exner, et al. (1985) demonstrated how this design was used to make cleanup decisions for blocks of soil contaminated with tetrachlorodibenzo-p-dioxin.

A second type of systematic composite involves collecting and pooling samples from within grid blocks, time intervals, or batches of waste grouped together (see Figure 20).

If there is spatial correlation between the grid blocks, compositing within grids can be used to estimate block-to-block variability (Myers 1997) or improve the estimate of the mean within a block or interval (if multiple composite samples are collected within each block). In fact, compositing samples collected from localized areas is an effective means to control "short-range" (small-scale) heterogeneity (Pitard 1993). When this type of compositing is used on localized areas in lieu of "grab" sampling, it is an attractive option to improve the representativeness of individual samples (Jenkins, et al. 1996).

Systematic sampling within time intervals could be used in cases in which compositing occurs as part of sample collection (such as sampling of liquid effluent with an autosampling device into a single sample container over a specified time period).
3 ASTM D 6051, Standard Guide for Composite Sampling and Field Subsampling for Environmental Waste Management Activities, also provides a procedure for estimating the precision of a single composite sample.

If the individual field sample locations are independent (that is, they have no temporal or spatial correlation), then compositing within blocks can be an efficient strategy for estimating the population mean. If the assumption of sample independence cannot be supported, then an alternative design should be selected if the objective is to estimate the mean.

5.3.4 Practical Considerations for Composite Sampling

In creating composite samples from individual field samples, it is possible that a relatively large volume of material will need to be physically mixed at some point -- either in the field or in the laboratory. Thorough mixing is especially important when the individual samples exhibit a high degree of heterogeneity.

Once the individual samples are mixed, one or more subsamples must be taken because the entire composite sample usually cannot be analyzed directly. A decision must be made as to where the individual samples will be combined into the composite samples. Because large samples (e.g., several kilograms or more) may pose increased difficulties to the field team for containerization and shipping, and pose storage problems for the laboratory due to limited storage space, there may be a distinct advantage to performing mixing or homogenization in the field. There are, however, some disadvantages to forming the composite samples in the field. As pointed out by Mason (1992), the benefits of homogenization may be temporary because gravity-induced segregation can occur during shipment of the samples. Unless homogenization (mixing), particle size reduction, and subsampling are carried out immediately prior to analysis, the benefits of these actions may be lost. Therefore, if practical, it may be best to leave the mixing and subsampling operations to laboratory personnel.

See Section 7.3 of this document and ASTM standards D 6051 and D 6323 for guidance on homogenization, particle size reduction, and subsampling.

5.3.5 Using Composite Sampling To Obtain a More Precise Estimate of the Mean

When analytical error is minor compared to sampling error, composite sampling can be a resource-efficient mechanism for increasing the precision of estimates of the population mean. If composite sampling is to be used to estimate the mean with a specified level of confidence, then multiple composite samples can be used to estimate the mean and variance. Alternately, confidence limits can be constructed around the sample analysis result for a single composite sample if an estimate of the variance of the fundamental error is available (see Gy 1998, page 73).3 See Section 6.2.2.1 for a discussion of fundamental error.

The population mean (µ) can be estimated from the analysis of n composite samples (each made from g individual samples). The population mean (µ) is estimated by the sample mean (x̄):

    x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ          Equation 6

The sample variance (s²) can then be calculated by

    s² = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)²          Equation 7

Note that Equations 6 and 7 are the same as Equations 1 and 2, respectively, for the mean and variance. When the equations are used for composite sampling, xᵢ is the measurement value from a subsample taken from each of the n composite samples rather than from each individual sample. Use of these equations assumes equal numbers of individual field samples (g) are used to form each composite, and equal numbers of subsamples are taken from each composite sample and analyzed. If these assumptions are not correct, an alternative approach described in Gilbert (1987, page 79) can be used.

By increasing the number of individual field samples (g) per composite sample, there will be a corresponding decrease in the standard error (s_x̄), thus improving the precision of the estimate of the mean. Edland and van Belle (1994) show that by doubling the number of individual samples per composite (or laboratory) sample, the expected size of the confidence interval around the mean decreases by a factor of 1/√2, which is a 29-percent decrease in the expected width of the confidence interval. One of the key assumptions underlying the above discussion is that variances between the samples greatly exceed the random error variance of the analytical method (Garner, et al. 1988).
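The 1/√2 factor follows from the standard error of a composite being proportional to 1/√g for independent, equal-variance individual samples; a short calculation makes the 29-percent figure explicit. The g values used here are arbitrary illustrations.

```python
import math

def ci_width_factor(g_before, g_after):
    """Ratio of expected confidence-interval widths when the number of
    individual samples per composite changes from g_before to g_after,
    assuming the width is proportional to 1 / sqrt(g)."""
    return math.sqrt(g_before / g_after)

# Doubling g (e.g., 5 -> 10 individual samples per composite)
factor = ci_width_factor(5, 10)
print(round(factor, 3), round(1 - factor, 2))  # 0.707 0.29
```

The same ratio holds for any doubling of g, which is why the guidance can state the 29-percent reduction without reference to a particular sample size.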

Williams, et al. (1989) demonstrated the benefits of using composite sampling to obtain a more precise estimate of the mean. One of their objectives was to study the efficiency of using composite sampling as compared to collecting individual samples for the purpose of estimating the mean concentration at a site. Five sites known to have radium contamination in shallow soils were extensively sampled. At each site, shallow soil samples were collected at approximately uniformly spaced points over the entire site. Three types of samples were taken: (1) individual 500-gram samples, (2) composite samples consisting of ten 50-gram aliquots uniformly spaced over the site, and (3) composite samples consisting of twenty 25-gram aliquots uniformly spaced over the site. The samples were measured for 226Ra. The results indicated the individual samples yielded the least precision, even when more than twice as many individual samples were collected. Sixty-six individual samples produced a standard error of 1.35, while the thirty 10-aliquot composites and the thirty 20-aliquot composite samples produced standard errors of 0.76 and 0.51, respectively. The results demonstrate that composite sampling can produce more precise estimates of the mean with fewer analytical samples.

Box 7 provides an example of how a mean and variance can be estimated using composite sampling combined with systematic sampling.
5.3.6 Using Composite Sampling To Locate Extreme Values or "Hot Spots"

One disadvantage of composite sampling is the possibility that one or more of the individual samples making up the composite could be "hot" (exceed a fixed standard), but remain undetected due to dilution that results from the pooling process. If the sampling objective is to determine if any one or more individual samples is "hot," composite sampling can still be used.
Figure 21. Example of systematic composite sampling. Grab samples are taken at the sampling point between the waste preparation process and the fuel storage tank at times t1 through t20 (n · g = 20); each set of g = 4 grab samples forms one of n = 5 composite samples, and one measurement is taken on each composite sample.

Box 7. Example of How To Estimate the Mean and Variance Using Systematic Composite Sampling (Assume Samples Are Independent)

Under 40 CFR 261.38, a generator of hazardous waste-derived fuel is seeking an exclusion from the definition of solid and hazardous waste. To prepare the one-time notice under 40 CFR 261.38(c), the generator requires information on the mean and variance of the concentrations of constituents of concern in the waste as generated. The generator elects to use composite samples to estimate the mean and variance of the nonvolatile constituents of concern.

Using a systematic sampling design, a composite sample is prepared by taking an individual (grab) sample at regular time intervals t1 through t4. The set of four grab samples is thoroughly mixed to form a composite, and one subsample is taken from each composite for analysis. The process is repeated until five composite samples are formed (see Figure 21). (Note: If the assumption of independent samples cannot be supported, then a simple random design should be used in which the 20 grab samples are randomly grouped to form the five composites.)

The analytical results for one of the constituents of concern, in ppm, are summarized as follows for the composite samples (n1 through n5): 2.75, 3.71, 3.28, 1.95, and 5.10.

Using Equations 6 and 7 for the mean and variance of composite samples, the following results are obtained:

    x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ = 16.79/5 = 3.36 ppm

    s² = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)² = (1/4)(0.3721 + 0.1225 + 0.0064 + 1.99 + 3.03) = 1.38

The standard error is obtained as follows:

    s_x̄ = s/√n = 1.17/√5 = 0.52 ppm
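The Box 7 arithmetic can be reproduced directly from Equations 6 and 7. The composite results below are those given in the box; carrying full precision gives a standard error of about 0.53 ppm, versus the 0.52 ppm shown in the box, which uses the rounded s = 1.17.

```python
def mean_variance_se(x):
    """Sample mean (Equation 6), variance (Equation 7), and standard error."""
    n = len(x)
    xbar = sum(x) / n                                   # Equation 6
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)    # Equation 7
    se = (s2 ** 0.5) / n ** 0.5                         # s / sqrt(n)
    return xbar, s2, se

# Composite sample results (ppm) for n1 through n5, from Box 7
composites = [2.75, 3.71, 3.28, 1.95, 5.10]
xbar, s2, se = mean_variance_se(composites)
print(round(xbar, 2), round(s2, 2), round(se, 2))  # 3.36 1.38 0.53
```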
A procedure for detecting hot spots using composite sampling is given below. The approach assumes the underlying distribution is normal and the composite samples were formed from equal-sized individual samples.

Let AL be some "action level" or regulatory threshold that cannot be exceeded in an individual sample. Note that AL must be large relative to the quantitation limit for the constituent of concern. For a measurement xᵢ from a composite sample formed from g individual samples, the following rules apply, assuming analytical and sampling error are negligible:

° If xᵢ < AL/g, then no single individual sample can be > AL
° If xᵢ > AL, then at least one must, and as many as all g individual samples may, be > AL
° If xᵢ > AL/g, then at least one of the g individual samples may be > AL.

As a general rule, we can say that no more than g·xᵢ/AL individual samples can be > AL.
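The rules above can be combined into a single screening calculation. This is an illustrative sketch under the same assumptions (negligible analytical and sampling error, equal-sized individual samples); the composite values used in the example are hypothetical.

```python
import math

def max_possible_exceedances(x_comp, action_level, g):
    """Greatest number of the g individual samples that could exceed AL,
    given the composite measurement x_comp.

    Implements the general rule: no more than g * x_comp / AL individual
    samples can be > AL, capped at g.
    """
    if x_comp < action_level / g:
        return 0  # no single individual sample can be > AL
    return min(g, math.floor(g * x_comp / action_level))

# Hypothetical screening of composite results against AL = 100 ppm, g = 4
print(max_possible_exceedances(20, 100, 4))   # 0: no individual sample can be hot
print(max_possible_exceedances(45, 100, 4))   # 1: at most one sample could be > AL
print(max_possible_exceedances(120, 100, 4))  # 4: every sample could be > AL
```

A result of zero means the whole set of g individual samples can be cleared from one composite analysis; a positive result flags the set for follow-up analysis of the individual samples.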
If one or more of the composites are "hot" (i.e., xᵢ > AL/g), then it might be desirable to go back and analyze the individual samples used to form the composite. Consider saving splits of each individual field sample so individual samples can be analyzed later, if needed.

If compositing is used to identify a hot spot, then the number of samples that make up the composite should be limited to avoid overall dilution below the analytical limit. It is possible for a composite sample to be diluted to a concentration below the quantitation limit if many of the individual samples have concentrations near zero and a single individual sample has a concentration just above the action level. Mason (1992) and Skalski and Thomas (1984) suggest the maximum number of identically sized individual samples (g) that can be used to form such a composite should not exceed the action level (AL) divided by the quantitation limit (QL). But the relationship g ≤ AL/QL indicates that the theoretical maximum number of samples to form a composite can be quite high, especially given a very low quantitation limit. As a practical matter, the number of individual samples used to form a composite should be kept to a minimum (usually between 2 and 10).
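The g ≤ AL/QL constraint and the practical cap can be expressed in a few lines. The AL and QL values shown are hypothetical, and the cap of 10 reflects the rule of thumb stated above.

```python
def max_samples_per_composite(action_level, quantitation_limit, practical_cap=10):
    """Theoretical maximum g = AL / QL (Mason 1992; Skalski and Thomas 1984),
    capped at a practical limit as the guidance recommends."""
    theoretical = int(action_level // quantitation_limit)
    return min(theoretical, practical_cap)

# Hypothetical cases: AL = 100 ppm with QL = 5 ppm, then QL = 25 ppm
print(max_samples_per_composite(action_level=100, quantitation_limit=5))   # 10
print(max_samples_per_composite(action_level=100, quantitation_limit=25))  # 4
```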

An example of the above procedure, provided in Box 8, demonstrates how a "hot" drum can be identified through the analysis of just nine samples (five composites plus four individual analyses), resulting in considerable savings in analytical costs over analysis of individual samples from each of the 20 drums.
5.4 Determining the Appropriate Number of Samples Needed To Estimate the Mean

This section provides guidance for determining the appropriate number of samples (n) needed to estimate the mean. The procedures can be used when the objective is to calculate a confidence limit on the mean. If the objective is to estimate a percentile, see Section 5.5.

To calculate the appropriate number of samples, it is necessary to assemble existing data identified in DQO Step 3 ("Identify Inputs to the Decision") and Step 6 ("Specify Limits on Decision Errors"). If the parameter of interest is the mean, you can calculate n using equations presented in the following sections or by using EPA's DEFT software (USEPA 2001a).
Figure 22. Composite sampling strategy for locating a "hot" drum. At the point of waste generation, a grab sample is taken from each drum; sets of grab samples are combined into composite samples (1 through 5), and one measurement is taken on each composite sample.

Box 8. How To Locate a "Hot Spot" Using Composite Sampling - Hypothetical Example

A secondary lead smelter produces a slag that under some operating conditions exhibits the Toxicity Characteristic (TC) for lead. At the point of generation, a grab sample of the slag is taken as the slag is placed in each drum. A composite sample is formed from the four grab samples representing a set of four drums per pallet. The process is repeated until five composite samples representing five sets of four drums (20 drums total) have been prepared (see Figure 22).

The generator needs to know if the waste in any single drum in a given set of four drums contains lead at a total concentration exceeding 100 ppm. If the waste in any single drum exceeds 100 ppm, then its maximum theoretical TCLP leachate concentration could exceed the regulatory limit of 5 mg/L. Waste in drums exceeding 100 ppm total lead will be tested using the TCLP to determine if the total leachable lead equals or exceeds the TC regulatory limit.

The sample analysis results for total lead are measured as follows (in ppm) in composite samples n1 through n5: 6, 9, 18, 20, and 45.

Using the approach for locating a "hot spot" in a composite sample, we observe that all of the composite samples except for n5 are less than AL/g, or 100 ppm/4 (i.e., 25 ppm). The result for n5 (45 ppm) is greater than 25 ppm, indicating a potential exceedance of the TC regulatory level. A decision about the set of drums represented by n5 can be made as follows:

No more than g·xᵢ/AL individual samples can be > AL, or no more than (4)(45 ppm)/(100 ppm) = 1.8, rounded down to 1, individual sample can exceed 100 ppm total lead.

We now know that it is possible that one of the four drums on the fifth pallet exceeds 100 ppm, but we do not know which one. As a practical matter, analysis of all four of the individual samples should reveal the identity of the "hot" drum (if, indeed, one exists); however, the above process of elimination could be repeated on two new composite samples formed from samples taken from just the four drums in question.
4 One exception is when sequential sampling is used, in which the number of samples is not fixed a priori; rather, the statistical test is performed after each round of sampling and analysis (see Section 5.2.5).

74
Alternative
equations
can
be
found
in
the
statistical
literature
and
guidance,
including
ASTM
(Standard
D
6311),
Cochran
(1977),
Gilbert
(1987),
and
USEPA
(2000a,
2000b,
and
2000d).

The
equations
presented
here
should
yield
the
approximate
minimum
number
of
samples
needed
to
estimate
the
mean
within
the
precision
and
confidence
levels
established
in
the
DQO
Process;
however,
it
is
prudent
to
collect
a
somewhat
greater
number
of
samples
than
indicated
by
the
equations.
4
This is recommended to protect against poor preliminary estimates of the mean and standard deviation, which could result in an underestimate of the appropriate number of samples to collect. For analytes with long holding times (e.g., 6 months), it may be possible to process and store extra samples appropriately until analysis of the initially identified samples is completed and it can be determined if analysis of the additional samples is warranted.

It is important to note that the sample size equations do not account for the number or type of control samples (or quality assessment samples) required to support the QC program associated with your project. Control samples may include blanks (e.g., trip, equipment, and laboratory), field duplicates, spikes, and other samples used throughout the data collection process. Refer to Chapter One of SW-846 for recommendations on the type and number of control samples needed to support your project. It is best to first determine how each type of control sample is to be used, then to determine the number of that type based on their use (van Ee, et al. 1990).

A key assumption for use of the sample size equations is that you have some prior estimate of the total study error, measured as the sample standard deviation (s) or sample variance (s²). Since total study error includes variability associated with the sampling and measurement methods (see Section 6), it is important to understand the relative contributions that sampling and analysis activities make to the overall estimate of variability. Lack of prior information regarding population and measurement variability is one of the most frequently encountered difficulties in sampling. It quickly resembles a "chicken-and-the-egg" question for investigators – you need an estimate of the standard deviation to calculate how many samples you need, yet you cannot derive that estimate without any samples. To resolve this seemingly paradoxical question, two options are available:

Option 1. Conduct a pilot study. A pilot study (sometimes called an exploratory or preliminary study) is the preferred method for obtaining estimates of the mean and standard deviation, as well as other relevant information. The pilot study is simply phase one of a multi-phase sampling effort (Barth, et al. 1989). For some pilot studies, a relatively small number of samples (e.g., four or five or more) may provide a suitable preliminary estimate of the standard deviation.

Option 2. Use data from a study of a similar site or waste stream. In some cases, you might be able to use sampling and analysis data from another facility or similar operation that generates the same waste stream and uses the same process.

If neither of the above options can provide a suitable estimate of the standard deviation (s), a crude approximation of s still can be obtained using the following approach adapted from USEPA 1989a (page 6-6). The approximation is based on the judgment of a person knowledgeable of the waste and his or her estimate of the range within which constituent concentrations are likely to fall. Given a range of constituent concentrations in a waste, but lacking the individual data points, an approximate value for s may be computed by dividing the range (the estimated maximum concentration minus the minimum concentration) by 6, or s ≈ Range/6. This approximation method should be used only if no other alternative is available. The approach is based on the assumption that more than 99 percent of all normally distributed measurements will fall within three standard deviations of the mean; therefore, the length of this interval is 6s.
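As a numerical illustration, the Range/6 rule can be coded directly. The function below and the 10 to 400 ppm bounds are illustrative values, not taken from this guidance:

```python
def approx_std_from_range(est_min, est_max):
    """Crude approximation of the standard deviation (s) from an
    estimated concentration range, per the Range/6 rule: assumes
    roughly normal data, so more than 99 percent of values fall
    within +/- 3s of the mean (an interval of length 6s)."""
    if est_max < est_min:
        raise ValueError("maximum must be >= minimum")
    return (est_max - est_min) / 6.0

# Hypothetical example: a process expert judges that constituent
# concentrations should fall between 10 and 400 ppm.
s_approx = approx_std_from_range(10.0, 400.0)  # 65 ppm
```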
5.4.1 Number of Samples to Estimate the Mean: Simple Random Sampling

In Step 6 of the DQO Process ("Specify Limits on Decision Errors"), you established the width of the gray region (Δ) and acceptable probabilities for making a decision error (α and β). Using this information, along with an estimate of the standard deviation (s), calculate the appropriate number of samples (n) for simple random sampling using

n = [(z(1−α) + z(1−β))² s² / Δ²] + (1/2) z(1−α)²     Equation 8

where
z(1−α) = the (1−α)th quantile of the standard normal distribution (from the last row of Table G-1, Appendix G), where α is the probability of making a Type I error set in DQO Step 6 (Section 4.6.4).
z(1−β) = the (1−β)th quantile of the standard normal distribution (from the last row of Table G-1, Appendix G), where β is the probability of making a Type II error set in DQO Step 6 (Section 4.6.4).
s = an estimate of the standard deviation.
Δ = the width of the gray region from DQO Step 6.

An example application of Equation 8 is presented in Box 9.

Two assumptions underlie the use of Equation 8. First, it is assumed that data are drawn from an approximately normal distribution. Second, it is assumed the data are uncorrelated. In correlated data, two or more samples taken close to each other (in time or in space) will have similar concentrations (Gilbert 1987). In situations in which spatial or temporal correlation is expected, some form of systematic sampling is preferred.

If the underlying population appears to exhibit a lognormal distribution, normal theory sample size equations (such as Equation 8) still can be used, though they will tend to underestimate the minimum number of samples when the geometric standard deviation (exp(sy)) is low (e.g., exp(sy) ≤ 2). If the underlying distribution is known to be lognormal, the method given by Land (1971, 1975) and Gilbert (1987) for calculating confidence limits for a lognormal mean can be solved "in reverse" to obtain n. (A software tool for performing the calculation, MTCAStat 3.0, is published by the Washington Department of Ecology. See Appendix H.) Also, techniques described by Perez and Lefante (1996 and 1997) can be used to estimate the sample sizes needed to estimate the mean of a lognormal distribution. Otherwise, consult a professional statistician for assistance.
Box 9. Number of Samples Required to Estimate the Mean Using Simple Random Sampling: Hypothetical Example

Under 40 CFR 261.38, a generator of hazardous waste-derived fuel is seeking an exclusion from the definition of solid and hazardous waste. To prepare the one-time notice under 40 CFR 261.38(c), the generator plans to conduct waste sampling and analysis to support the exclusion. The outputs of the first six steps of the DQO Process are summarized below:

Step 1: State the Problem: The planning team reviewed the applicable regulations, historical analyses, and process chemistry information. The problem is to determine whether Appendix VIII constituents present in the waste are at concentration levels less than those specified in Table 1 of §261.38.

Step 2: Identify the Decision: If the waste attains the specification levels, then it will be judged eligible for the exclusion from the definition of hazardous and solid waste.

Step 3: Identify Inputs to the Decision: Sample analysis results are required for a large number of constituents present in the waste; however, most constituents are believed to be present at concentrations well below the specification levels. Historically, benzene concentrations have been most variable; therefore, the planning team will estimate the number of samples required to determine if the specification level for benzene is attained.

Step 4: Define the Boundaries: The DQO decision unit is defined as the batch of waste generated over a one-week period. Samples will be taken as the waste exits the preparation process and prior to storage in a fuel tank (i.e., at the point of generation).

Step 5: Develop a Decision Rule: The RCRA regulations at 40 CFR 261.38(c)(8)(iii)(A) specify the mean as the parameter of interest. The "Action Level" for benzene is specified in Table 1 of §261.38 as 4,100 ppm. If the mean concentration of benzene within the DQO decision unit is less than or equal to 4,100 ppm, then the waste will be considered eligible for the exclusion (for benzene). Otherwise, the waste will not be eligible for the exclusion for benzene. (Note that the demonstration must be made for all Appendix VIII constituents known to be present in the waste.)

Step 6: Specify Limits on Decision Errors: In the interest of being protective of the environment, the null hypothesis was established as "the mean concentration of benzene within the decision unit boundary exceeds 4,100 ppm," or Ho: mean (benzene) > 4,100 ppm. The boundaries of the gray region were set at the Action Level (4,100 ppm) and at a value less than the Action Level at 3,000 ppm. The regulations at §261.38(c)(8)(iii)(A) specify a Type I (false rejection) error rate (α) of 0.05. The regulations do not specify a Type II (false acceptance) error rate (β), but the planning team deemed a false acceptance as of lesser concern than a false rejection, and set the false acceptance rate at 0.25. Sample analysis results from previous sampling and analyses indicate the standard deviation (s) of benzene concentrations is about 1,200 ppm.

What is the appropriate number of samples to collect and analyze for a simple random sampling design?

Solution: Using Equation 8 and the outputs of the first six steps of the DQO Process, the number of samples is determined as:

n = [(z(1−α) + z(1−β))² s² / Δ²] + (1/2) z(1−α)²
  = [(1.645 + 0.674)² (1,200)² / (4,100 − 3,000)²] + (1/2)(1.645)²
  = 7.75, or 8 (round up)

where the values for z(1−α) and z(1−β) are obtained from the last row of Table G-1 in Appendix G.
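The Box 9 computation can be reproduced in a few lines. This is a sketch that uses Python's standard normal quantile function in place of the Table G-1 lookup; the function name and interface are illustrative:

```python
import math
from statistics import NormalDist

def n_simple_random(s, delta, alpha, beta):
    """Equation 8: approximate minimum number of samples to estimate
    the mean under simple random sampling,
    n = (z(1-a) + z(1-b))^2 * s^2 / delta^2 + (1/2) * z(1-a)^2."""
    z1a = NormalDist().inv_cdf(1.0 - alpha)  # z(1-alpha)
    z1b = NormalDist().inv_cdf(1.0 - beta)   # z(1-beta)
    n = (z1a + z1b) ** 2 * s ** 2 / delta ** 2 + 0.5 * z1a ** 2
    return math.ceil(n)  # always round up

# Box 9 inputs: s = 1,200 ppm, gray region 3,000 to 4,100 ppm,
# alpha = 0.05, beta = 0.25.
n = n_simple_random(s=1200.0, delta=4100.0 - 3000.0, alpha=0.05, beta=0.25)
# n -> 8, matching Box 9
```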
5.4.2 Number of Samples to Estimate the Mean: Stratified Random Sampling

An important aspect of a stratified random sampling plan is deciding how many samples to collect within each of the strata (Gilbert 1987). There are many ways to design a stratified random sampling plan; the development here makes the following assumptions (refer to Section 5.2.2 for a description of terms and symbols used below):

° Weights for each stratum (Wh) are known in advance. One possible way to assign weights to each stratum is to calculate the ratio between the waste volume classified as the hth stratum and the total waste volume.

° The number of possible sample units (i.e., physical samples) of a certain physical size is much larger than the number of sample units that will be collected and analyzed. As a general guide, this assumption should be reasonable as long as the ratio between the stratum waste volume and the volume of the individual samples is at least 100. Otherwise, you may need to consider formulas that include the finite population correction (see Cochran 1977, page 24).

° The number of sample units to be collected and analyzed in each stratum, due to analytical costs and other considerations, generally will be fairly small.

° A preliminary estimate of variability (sh²) is available for each stratum. If this is not the case, one can use an estimate of the overall variability (s²) as a substitute for the separate stratum estimates. By ignoring possible differences in the variance characteristics of separate strata, the sample size formulas given below may tend to underestimate the necessary number of samples for each stratum (nh).
Given a set of stratum weights and sample measurements in each stratum, the overall mean (x̄st) and overall standard error of the mean (s(x̄st)) (i.e., for the entire waste under study) are computed as follows for a stratified random sample:

x̄st = Σ(h=1 to L) Wh x̄h     Equation 9

and

s(x̄st)² = Σ(h=1 to L) Wh² sh² / nh     Equation 10

Note that x̄h and sh² in these formulas represent the arithmetic mean and sample variance for the measurements taken within each stratum.
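Equations 9 and 10 can be sketched in a few lines. The function and the two-stratum data below are hypothetical (weights must sum to 1):

```python
import math

def stratified_mean_and_se(weights, means, variances, ns):
    """Equations 9 and 10: overall mean and standard error of the
    mean for a stratified random sample.
      x_st   = sum over h of W_h * xbar_h
      se^2   = sum over h of W_h^2 * s_h^2 / n_h"""
    x_st = sum(w * m for w, m in zip(weights, means))
    var_x_st = sum(w ** 2 * s2 / n
                   for w, s2, n in zip(weights, variances, ns))
    return x_st, math.sqrt(var_x_st)

# Hypothetical two-stratum data:
x_st, se = stratified_mean_and_se(
    weights=[0.2, 0.8], means=[2.0, 1.0], variances=[6.25, 1.69], ns=[4, 4])
# x_st = 0.2*2.0 + 0.8*1.0 = 1.2
```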

In general, there are two approaches for determining the number of samples to take when stratified random sampling is used: optimal allocation and proportional allocation.

5.4.2.1 Optimal Allocation

In optimal allocation, the number of samples assigned to a stratum (nh) is proportional to the relative variability within each stratum and the relative cost of obtaining samples from each stratum. The number of samples can be determined to minimize the variance for a fixed cost or to minimize the cost for a prespecified variance.

Optimal allocation requires considerable advance knowledge about the relative variability within each stratum and the costs associated with obtaining samples from each stratum; therefore, we recommend the use of proportional allocation (see below) as an alternative. For more complex situations in which optimal allocation is preferred, consult a statistician or see Cochran (1977, page 96), Gilbert (1987, page 50), or USEPA (1989a, page 6-13).

5.4.2.2 Proportional Allocation

In proportional allocation, the number of samples assigned to a stratum (nh) is proportional to the stratum size, that is, nh = nWh. To determine the total number of samples (n) so that a true difference (Δ) between the mean waste concentration and the Action Level can be detected with Type I error rate α and Type II error rate β, use the following equation:

n = [(t(1−α, df) + t(1−β, df))² / Δ²] Σ(h=1 to L) Wh sh²     Equation 11

To use this formula correctly, the degrees of freedom (df) connected with each t-quantile (from Table G-1, Appendix G) in the above equation must be computed as follows:

df = [Σ(h=1 to L) Wh sh²]² / Σ(h=1 to L) [Wh² sh⁴ / (nWh − 1)]     Equation 12

Because the degrees of freedom also depend on n, the final number of samples must be computed iteratively. Then, once the final total number of samples is computed, the number of samples for each stratum is determined by multiplying the total number of samples by the stratum weight. An example of this approach is presented in Box 10.

If only an overall estimate of s² is available in the preliminary data, Equation 11 reduces to:

n = (t(1−α, df) + t(1−β, df))² s² / Δ²     Equation 13

and Equation 12 reduces to

df = 1 / Σ(h=1 to L) [Wh² / (nWh − 1)]     Equation 14
Box 10. Number of Samples Required to Estimate the Mean Using Stratified Random Sampling – Proportional Allocation: Hypothetical Example

Under the RCRA Corrective Action program, a facility owner has conducted a cleanup of a solid waste management unit (SWMU) in which the contaminant of concern is benzene. The cleanup involved removal of all waste residues, contaminated subsoils, and structures. The facility owner needs to conduct sampling and analysis to confirm that the remaining soils comply with the cleanup standard.

Step 1: State the Problem: The planning team needs to confirm that soils remaining in place contain benzene at concentrations below the risk-based levels established by the authorized state as part of the cleanup.

Step 2: Identify the Decision: If the soils attain the cleanup standard, then the land will be used for industrial purposes. Otherwise, additional soil removal will be required.

Step 3: Identify Inputs to the Decision: A sampling program will be conducted, and sample analysis results for benzene will be used to make the cleanup attainment determination.

Step 4: Define the Boundaries: The DQO decision unit is the top 6 inches of soil within the boundary of the SWMU. Based on prior sample analysis results and field observations, two strata are identified: fine-grained soils in 20 percent of the unit ("Stratum 1"), and coarse-grained soils comprising the other 80 percent of the unit ("Stratum 2"). Based on the relative mass of the two strata, a weighting factor (Wh) is assigned to the hth stratum such that W1 = 0.2 and W2 = 0.8.

Step 5: Develop a Decision Rule: The parameter of interest is established as the mean, and the Action Level for benzene is set at 1.5 mg/kg. If the mean concentration of benzene within the DQO decision unit is less than or equal to 1.5 mg/kg, then the unit will be considered "clean." Otherwise, another layer of soil will be removed.

Step 6: Specify Limits on Decision Errors: In the interest of being protective of the environment, the null hypothesis is established as "the mean concentration of benzene within the decision unit boundary exceeds 1.5 mg/kg," or Ho: mean (benzene) > 1.5 mg/kg. The boundaries of the gray region are set at the Action Level (1.5 mg/kg) and at a value less than the Action Level at 1.0 mg/kg. The Type I error rate (α) is set at 0.10 and the Type II error rate (β) is set at 0.25. Sample analysis results from n = 8 initial non-composite samples provided an estimate of the overall standard deviation of s = 1.83, and the standard deviations (sh) within each (hth) stratum of s1 = 2.5 and s2 = 1.3 (and s1² = 6.25 and s2² = 1.69).

What is the appropriate number of samples to collect and analyze for a stratified random sampling design?

Solution: Using Equation 12 for the degrees of freedom under proportional allocation:

df1 = [(0.2 × 6.25) + (0.8 × 1.69)]² / {[(0.2)²(6.25)² / ((8 × 0.2) − 1)] + [(0.8)²(1.69)² / ((8 × 0.8) − 1)]} = 2.3 ≈ 2

Then, looking up the t-quantiles (from Table G-1, Appendix G) with 2 degrees of freedom and taking Δ = 0.5 (i.e., 1.5 mg/kg − 1.0 mg/kg), the total sample size (using Equation 11) works out to

n1 = [(1.886 + 0.816)² / (0.5)²] × [(0.2 × 6.25) + (0.8 × 1.69)] = 76

Since the equations must be solved iteratively, recompute the formulas using n1 = 76. The same calculations give df2 = 48 and n2 = 41. After two more iterations, the sample size stabilizes at n = 42. Using the proportional allocation with n = 42, one should take 42(0.2) = 8.4 (round up to 9) measurements from the first stratum and 42(0.8) = 33.6 (round up to 34) measurements from the second stratum. Since four samples already were collected from each stratum, at least five additional random samples should be obtained from the first stratum and at least thirty additional random samples should be collected from the second stratum.
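The iterative solution in Box 10 can be automated. The sketch below is a stand-in for the Table G-1 lookups: the t-quantile is obtained by numerically inverting the t distribution (scipy.stats.t.ppf would do the same job where SciPy is available), and the degrees of freedom are truncated to a whole number as in a table lookup. Function names and interfaces are illustrative:

```python
import math

def t_quantile(p, df):
    """Quantile (p >= 0.5) of Student's t distribution with df degrees
    of freedom, found by bisection on a Simpson's-rule integration of
    the t density (a stand-in for a t table such as Table G-1)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def cdf(x, steps=500):
        h = x / steps
        total = 0.0
        for i in range(steps + 1):
            xi = i * h
            w = 1 if i in (0, steps) else (4 if i % 2 else 2)
            total += w * c * (1 + xi * xi / df) ** (-(df + 1) / 2)
        return 0.5 + total * h / 3  # add the lower half of the distribution

    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_proportional_allocation(weights, variances, delta, alpha, beta, n0):
    """Iterate Equations 11 and 12 until the total sample size n
    stabilizes, starting from n0 pilot samples."""
    sw = sum(w * s2 for w, s2 in zip(weights, variances))  # sum of W_h * s_h^2
    n = n0
    for _ in range(20):
        df = int(sw ** 2 / sum(w ** 2 * s2 ** 2 / (n * w - 1)
                               for w, s2 in zip(weights, variances)))
        n_new = math.ceil((t_quantile(1 - alpha, df) +
                           t_quantile(1 - beta, df)) ** 2 / delta ** 2 * sw)
        if n_new == n:
            return n
        n = n_new
    return n

# Box 10 inputs: W = (0.2, 0.8), s_h^2 = (6.25, 1.69), delta = 0.5,
# alpha = 0.10, beta = 0.25, starting from the n = 8 pilot samples.
n_total = n_proportional_allocation([0.2, 0.8], [6.25, 1.69], 0.5, 0.10, 0.25, 8)
per_stratum = [math.ceil(n_total * w) for w in [0.2, 0.8]]  # proportional split
```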
In the example in Box 10, stratified random sampling provides a more efficient and cost-effective design compared to simple random sampling of the same unit. If simple random sampling were used, a total of 52 samples would be required. With stratified random sampling, only 42 samples are required, thereby reducing sampling and analytical costs.

5.4.3 Number of Samples to Estimate the Mean: Systematic Sampling

Despite the attractiveness and ease of implementation of systematic sampling plans, whether via a fixed square, rectangular, or triangular grid, or through the use of systematic random sampling, methods for estimating the standard error of the mean are beyond the scope of this guidance (for example, see Cochran 1977) and often involve more advanced geostatistical techniques (for example, see Myers 1997). An alternate approach is to treat the set of systematic samples as though they were obtained using simple random sampling. Such an approach should provide reasonable results as long as there are no strong cyclical patterns, periodicities, or significant spatial correlations between adjacent sample locations. If such features are present or suspected to be present, consultation with a professional statistician is recommended.

By regarding the systematic sample as a simple random sample, one can simply use the algorithm and formulas for simple random sampling described in Section 5.4.1 (Equation 8) to estimate the necessary sample size. As with all the sampling designs described in this section, you should have a preliminary estimate of the sample variance before using the sample size equation.

5.4.4 Number of Samples to Estimate the Mean: Composite Sampling

In comparison to noncomposite sampling, composite sampling may have the effect of minimizing between-sample variation, thereby reducing somewhat the total number of composite samples that must be submitted for analysis.

The appropriate number of composite samples to be collected from a waste or media can be estimated by Equation 8 for simple random and systematic composite sampling. Equation 11 can be used when composite sampling will be implemented with a stratified random sampling design (using proportional allocation). Any preliminary or pilot study conducted to estimate the appropriate number of composite samples should be generated using the same compositing scheme planned for the confirmatory study. If the preliminary or pilot study data were generated using random "grab" samples rather than composites, then the sample variance (s²) in the sample size equations should be replaced with s²/g, where g is the number of individual or grab samples used to form each composite (Edland and van Belle 1994, page 45).
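Assuming Equation 8 governs the design, the grab-to-composite adjustment amounts to substituting s²/g for s². A minimal sketch (the function and its inputs are illustrative; the grab-sample variance echoes the Box 9 value of s = 1,200 ppm, with a hypothetical g = 4 compositing scheme):

```python
import math
from statistics import NormalDist

def n_composites(s2_grab, g, delta, alpha, beta):
    """Number of composite samples via Equation 8, with the
    grab-sample variance s^2 replaced by s^2/g (Edland and van Belle
    1994), where g grab samples form each composite."""
    z1a = NormalDist().inv_cdf(1 - alpha)
    z1b = NormalDist().inv_cdf(1 - beta)
    s2 = s2_grab / g  # variance reduction from compositing
    return math.ceil((z1a + z1b) ** 2 * s2 / delta ** 2 + 0.5 * z1a ** 2)

# Illustrative: pilot grab-sample variance 1,440,000 ppm^2 (s = 1,200),
# composites of g = 4 grabs, gray region width 1,100 ppm.
n = n_composites(1_440_000, g=4, delta=1100, alpha=0.05, beta=0.25)
```

With g = 1 (no compositing) the function reduces to Equation 8 for individual samples.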

Additional guidance on the optimal number of samples required for composite sampling and the number of subsample aliquots required to achieve maximum precision for a fixed cost can be found in Edland and van Belle (1994, page 36 and page 44), Exner, et al. (1985, page 512), and Gilbert (1987, page 78).
5.5 Determining the Appropriate Number of Samples to Estimate a Percentile or Proportion

This section provides guidance for determining the appropriate number of samples (n) needed to estimate an upper percentile or proportion with a prespecified level of confidence. The approaches can be used when the objective is to determine whether the upper percentile is less than a concentration standard or whether a given proportion of the population or decision unit is less than a specified value.

Two methods for determining the appropriate number of samples are given below: (1) Section 5.5.1 provides a method based on the assumption that the population is large and the samples are drawn at random from the population, and (2) Section 5.5.2 provides a method with similar assumptions but only requires specification of the level of confidence required and the number of exceedances allowed (usually zero). For both methods, it is assumed that the measurements can be expressed as a binary variable – that is, that the sample analysis results can be interpreted as either in compliance with the applicable standard ("pass") or not in compliance with the applicable standard ("fail").

5.5.1 Number of Samples To Test a Proportion: Simple Random or Systematic Sampling

This section provides a method for determining the appropriate number of samples when the objective is to test whether a proportion or percentile of a population complies with an applicable standard. A population proportion is the ratio of the number of elements of a population that have some specific characteristic to the total number of elements. A population percentile represents the percentage of elements of a population having values less than some value. The number of samples needed to test a proportion can be calculated using

n = [(z(1−β) √(GR(1 − GR)) + z(1−α) √(AL(1 − AL))) / Δ]²     Equation 15

where
α = false rejection error rate
β = false acceptance error rate
z(p) = the pth percentile of the standard normal distribution (from the last row of Table G-1 in Appendix G)
AL = the Action Level (e.g., the proportion of all possible samples of a given support that must comply with the standard)
GR = other bound of the gray region
Δ = width of the gray region (Δ = GR − AL), and
n = the number of samples.

An example calculation of n using the approach described here is presented in Box 11.
Box 11. Example Calculation of the Appropriate Number of Samples Needed To Test a Proportion – Simple Random or Systematic Sampling

A facility is conducting a cleanup of soil contaminated with pentachlorophenol (PCP). Based on the results of a field test method, soil exceeding the risk-based cleanup level of 10 mg/kg total PCP will be excavated, classified as a solid or hazardous waste, and placed into roll-off boxes for subsequent disposal, or treatment (if needed) and disposal. The outputs of the first six steps of the DQO Process are summarized below.

Step 1: State the Problem: The project team needs to decide whether the soil being placed in each roll-off box is a RCRA hazardous or nonhazardous waste.

Step 2: Identify the Decision: If the excavated soil is hazardous, it will be treated to comply with the applicable LDR treatment standard and disposed as hazardous waste. If it is nonhazardous, then it will be disposed as solid waste in a permitted industrial waste landfill (as long as it is not mixed with a listed hazardous waste).

Step 3: Identify Inputs to the Decision: The team requires sample analysis results for TCLP PCP to determine compliance with the RCRA TC regulatory threshold of 100 mg/L.

Step 4: Define the Boundaries: The DQO "decision unit" for each hazardous waste determination is defined as a roll-off box of contaminated soil. The "support" of each sample is in part defined by SW-846 Method 1311 (TCLP) as a minimum mass of 100 grams with a maximum particle size of 9.5 mm. Samples will be obtained as the soil is excavated and placed in the roll-off box (i.e., at the point of generation).

Step 5: Develop a Decision Rule: The project team wants to ensure with reasonable confidence that little or no portion of the soil in the roll-off box is hazardous waste. The parameter of interest is then defined as the 90th percentile. If the 90th percentile concentration of PCP is less than 100 mg/L TCLP, then the waste will be classified as nonhazardous. Otherwise, it will be considered hazardous.

Step 6: Specify Limits on Decision Errors: The team establishes the null hypothesis (Ho) as the "true proportion (P) of the waste that complies with the standard is less than 0.90," or Ho: P < 0.90. The false rejection error rate (α) is set at 0.10. The false acceptance error rate (β) is set at 0.30. The Action Level (AL) is 0.90, and the other boundary of the gray region (GR) is set at 0.99.

How many samples are required?

Solution: Using Equation 15 and the outputs of the first six steps of the DQO Process, the number of samples (n) is determined as:

n = [(0.524 √(0.99(1 − 0.99)) + 1.282 √(0.90(1 − 0.90))) / (0.99 − 0.90)]² = 23.5, or 24 (round up)

where the values for z(1−α) and z(1−β) are obtained from the last row of Table G-1 in Appendix G.
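Box 11 can be checked with a short script; the function below is an illustrative rendering of Equation 15 that uses standard normal quantiles in place of the Table G-1 lookup:

```python
import math
from statistics import NormalDist

def n_test_proportion(AL, GR, alpha, beta):
    """Equation 15: number of samples needed to test a proportion
    under simple random or systematic sampling."""
    z1a = NormalDist().inv_cdf(1 - alpha)  # z(1-alpha)
    z1b = NormalDist().inv_cdf(1 - beta)   # z(1-beta)
    delta = GR - AL                        # width of the gray region
    n = ((z1b * math.sqrt(GR * (1 - GR)) +
          z1a * math.sqrt(AL * (1 - AL))) / delta) ** 2
    return math.ceil(n)  # round up

# Box 11 inputs: AL = 0.90, GR = 0.99, alpha = 0.10, beta = 0.30.
n = n_test_proportion(AL=0.90, GR=0.99, alpha=0.10, beta=0.30)
# n -> 24, matching Box 11
```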
5.5.2 Number of Samples When Using a Simple Exceedance Rule

If a simple exceedance rule is used (see Section 3.4.2.2), then it is possible to estimate the number of samples required to achieve a prespecified level of confidence that a given fraction of the waste or site has a constituent concentration less than the standard or does not exhibit a characteristic or property of concern. The approach is based on the minimum sample size required to determine a nonparametric (distribution-free) one-sided confidence bound on a percentile (Hahn and Meeker 1991 and USEPA 1989a).

If the exceedance rule specifies no exceedance of the standard in any sample, then the number of samples that must achieve the standard can be obtained from Table G-3a in Appendix G. The table is based on the expression:

n = log(α) / log(p)     Equation 16

where alpha (α) is the probability of a Type I error and p is the proportion of the waste or site that must comply with the standard. Alternatively, the equation can be rearranged so that the statistical performance (1 − α) can be determined for a fixed number of samples:

(1 − α) = 1 − p^n     Equation 17

Notice that the method does not require specification of the other bound of the gray region, nor does it require specification of a Type II (false acceptance) error rate (β).

If the decision rule allows one exceedance of the standard in a set of samples, then the number of samples required can be obtained from Table G-3b in Appendix G.

An example application of the above equations is presented in Box 12. See also Appendix F, Section F.3.2.

Box 12. Example Calculation of Number of Samples Needed When a Simple Exceedance Rule Is Used – Simple Random or Systematic Sampling

What is the minimum number of samples required (with no exceedance of the standard in any of the samples) to determine with at least 90-percent confidence ((1 − α) = 0.90) that at least 90 percent of all possible samples from the waste (as defined by the DQO decision unit) are less than the applicable standard?

From Table G-3a, we find that for (1 − α) = 0.90 and p = 0.90, 22 samples are required. Alternately, using Equation 16, we find

n = log(α)/log(p) = log(0.10)/log(0.90) = (−1)/(−0.0457) = 21.8, or 22 (round up)

If only 11 samples were analyzed (with no exceedance of the standard in any of the samples), what level of confidence can we have that at least 90 percent of all possible samples are less than the standard? Using Equation 17, we find

(1 − α) = 1 − p^n = 1 − (0.90)^11 = 1 − 0.3138 = 0.6862

Rounding down, we can say with at least 68 percent confidence that at least 90 percent of all possible samples would be less than the applicable standard.
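A minimal sketch of Equations 16 and 17, reproducing the Box 12 arithmetic (function names are illustrative):

```python
import math

def n_exceedance_rule(alpha, p):
    """Equation 16: minimum n so that, with no exceedances observed,
    we have (1 - alpha) confidence that proportion p of all possible
    samples meets the standard."""
    return math.ceil(math.log(alpha) / math.log(p))

def confidence_for_n(p, n):
    """Equation 17: confidence (1 - alpha) achieved by n samples with
    no exceedances observed."""
    return 1.0 - p ** n

n = n_exceedance_rule(alpha=0.10, p=0.90)  # 22 samples, as in Box 12
conf = confidence_for_n(p=0.90, n=11)      # about 0.686 (68 percent)
```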
5.6 Selecting the Most Resource-Effective Design

If more than one sampling design option is under consideration, evaluate the various designs based on their cost and their ability to achieve the data quality and regulatory objectives. Choose the design that provides the best balance between the expected cost and the ability to meet the objectives. To improve the balance between meeting your cost objectives and achieving the DQOs, it might be necessary to modify either the budget or the DQOs. As can be seen from the sample size equations in Sections 5.4 and 5.5, there is an interrelationship between the appropriate number of samples and the desired level of confidence, the expected variability (both population and measurement variability), and the width of the gray region. To reduce costs (i.e., decrease the number of samples required), several options are available:

° Decrease the confidence level for the test

° Increase the width of the "gray region" (not recommended if the parameter of interest is near the Action Level)

° Divide the population into smaller, less heterogeneous decision units, or use a stratified sampling design in which the population is broken down into parts that are internally less heterogeneous

° Employ composite sampling (if non-volatile constituents are of interest and if allowed by the regulations).

Note that seemingly minor modifications to the sampling design using one or more of the above strategies may result in major increases or decreases in the number of samples needed.

When estimating costs, be sure to include the costs for labor, travel and lodging (if necessary), expendable items (such as personal protective gear, sample containers, preservatives, etc.), preparation of a health and safety plan, sample and equipment shipping, sample analysis, assessment, and reporting. Some sampling plans (such as composite sampling) may require fewer analyses and associated analytical costs, but might require more time to implement and not achieve the project objectives. EPA's Data Quality Objectives Decision Error Feasibility Trials Software (DEFT) (USEPA 2001a) is one tool available that makes the process of selecting the most resource-effective design easier.

5.7 Preparing a QAPP or WAP

In this activity, the outputs of the DQO Process and the sampling design are combined in a planning document such as a QAPP or WAP. The Agency has developed detailed guidance on how to prepare a QAPP (see USEPA 1998a) or WAP (see USEPA 1994a). The minimum requirements for a WAP are specified at 40 CFR §264.13. The following discussion is focused on the elements of a QAPP; however, the information can be used to help develop a WAP.

For additional guidance on selecting the most resource-efficient design, see ASTM standard D 6311-98, Standard Guide for Generation of Environmental Data Related to Waste Management Activities: Selection and Optimization of Sampling Design.
Additional EPA Guidance on Preparing a QAPP or WAP

° Chapter One, SW-846

° EPA Requirements for Quality Assurance Project Plans, EPA QA/R-5 (replaces QAMS-005/80) (USEPA 2001b)

° EPA Guidance for Quality Assurance Project Plans, EPA QA/G-5 (EPA/600/R-98/018) (USEPA 1998a)

° Guidance for Choosing a Sampling Design for Environmental Data Collection, EPA QA/G-5S – Peer Review Draft (USEPA 2000c)

° Waste Analysis at Facilities That Generate, Treat, Store, And Dispose Of Hazardous Wastes, a Guidance Manual (USEPA 1994a)
The QAPP is a critical planning document for any environmental data collection operation because it documents project activities, including how QA and QC activities will be implemented during the life cycle of a project. The QAPP is the "blueprint" for identifying how the quality system of the organization performing the work is reflected in a particular project and in associated technical goals. QA is a system of management activities designed to ensure that data produced by the operation will be of the type and quality needed and expected by the data user. QA, acknowledged to be a management function emphasizing systems and policies, aids the collection of data of needed and expected quality appropriate to support management decisions in a resource-efficient manner.

The activities addressed in the QAPP cover the entire project life cycle, integrating elements of the planning, implementation, and assessment phases. If the DQOs are documented (e.g., in a memo or report format), include the DQO document as an attachment to the QAPP to help document the technical basis for the project and to document any agreements made between stakeholders.

As recommended in EPA QA/G-5 (USEPA 1998a), a QAPP is composed of four sections of project-related information called "groups," which are subdivided into specific detailed "elements." The elements and groups are summarized in the following subsections.

5.7.1 Project Management

The QAPP (or WAP) is prepared after completion of the DQO Process. Much of the following guidance related to project management can be excerpted from the outputs of the DQO Process.

The following group of QAPP elements covers the general areas of project management, project history and objectives, and the roles and responsibilities of the participants. These elements ensure that the project's goals are clearly stated, that all participants understand the goals and the approach to be used, and that project planning is documented:

° Title and approval sheet
° Table of contents and document control format
° Distribution list
° Project/task organization and schedule (from DQO Step 1)
° Problem definition/background (from DQO Step 1)
° Project/task description (from DQO Step 1)
° Quality objectives and criteria for measurement data (DQO Step 3)
° Special training requirements/certification
° Documentation and records.

For some projects, it will be necessary to include the names and qualifications of the person(s) who will obtain the samples (e.g., as required under 40 CFR §261.38(c)(7) in connection with testing for the comparable fuels exclusion).

5.7.2 Measurement/Data Acquisition

This group of QAPP elements covers all aspects of measurement system design and implementation, ensuring that appropriate methods for sampling, analysis, data handling, and QC are employed and thoroughly documented. Apart from the sample design step (DQO Step 7), the following information should be included in the QAPP or incorporated by reference:

° Sampling process design/experimental design (DQO Steps 5 and 7)
° Sampling methods and SOPs
° Sample handling and chain-of-custody requirements
° Analytical methods and SOPs (DQO Step 3)
° QC requirements
° Instrument/equipment testing, inspection, and maintenance requirements
° Instrument calibration and frequency
° Inspection/acceptance requirements for supplies and consumables
° Data acquisition requirements (non-direct measurements)
° Data management.

For some projects, it may be appropriate to include hard copies of the SOPs in the QAPP rather than incorporate the information by reference. For example, under the performance-based measurement system (PBMS) approach, alternative sampling and analytical methods can be used. Such methods can be reviewed and used more readily if actual copies of the SOPs are included in the QAPP. Hard copies of SOPs also are critically important when field analytical techniques are used. Field personnel must have detailed instructions available to ensure that the methods are followed. If it is discovered that deviation from an SOP is required due to site-specific circumstances, the deviations can be documented more easily if hard copies of the SOPs are available in the field with the QAPP.

5.7.3 Assessment/Oversight

The purpose of assessment is to ensure that the QAPP is implemented as prescribed. The elements below address the activities for assessing the effectiveness of the implementation of the project and the associated QA/QC activities:

° Assessments and response actions
° Reports to management.

5.7.4 Data Validation and Usability

Implementation of these elements ensures that the data conform to the specified criteria, thus enabling reconciliation with the project's objectives. The following elements cover QA activities that occur after the data collection phase of the project has been completed:
° Data review, verification, and validation requirements
° Verification and validation methods
° Reconciliation with DQOs.

5.7.5 Data Assessment

Historically, the focus of most QAPPs has been on analytical methods, sampling, data handling, and quality control; little attention has been paid to data assessment and interpretation. We recommend that the QAPP address the data assessment steps that will be followed after data verification and validation. While it may not be possible to specify the statistical test to be used in advance of data generation, the statistical objective (identified in the DQO Process) should be stated, along with the general procedures that will be used to test distributional assumptions and select statistical tests. EPA's Guidance for Data Quality Assessment (USEPA 2000d) suggests the following five-step methodology (see also Section 8 for a similar methodology):

1. Review the DQOs
2. Conduct a preliminary data review
3. Select the statistical test
4. Verify the assumptions of the test
5. Draw conclusions from the data.
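Step 2, the preliminary data review, typically begins with simple summary statistics. The sketch below illustrates the idea with hypothetical concentration values (not drawn from any EPA data set); a mean well above the median and a large relative standard deviation are common flags that distributional assumptions should be checked before a statistical test is selected.

```python
import statistics

# Hypothetical constituent concentrations (mg/kg) from one decision unit.
data = [12.1, 9.8, 14.3, 11.0, 55.2, 10.4, 13.7, 9.1]

mean = statistics.mean(data)
median = statistics.median(data)
sd = statistics.stdev(data)  # sample standard deviation
rsd = sd / mean              # relative standard deviation

# A mean far above the median and a large RSD suggest a skewed
# distribution -- a flag to verify assumptions before choosing a test.
print(f"mean={mean:.2f}  median={median:.2f}  sd={sd:.2f}  RSD={rsd:.2f}")
```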

The degree to which each QAPP element should be addressed depends on the specific project and can range from "not applicable" to extensive documentation. The final decision on the specific need for these elements in project-specific QAPPs will be made by the regulatory agency. Documents prepared prior to the QAPP (e.g., SOPs, test plans, and sampling plans) can be appended or, in some cases, incorporated by reference.
6 CONTROLLING VARIABILITY AND BIAS IN SAMPLING

The DQO Process allows you to identify the problem to be solved, set specific goals and objectives, establish probability levels for making incorrect decisions, and develop a resource-efficient data collection and analysis plan. While most of the sampling designs suggested in this guidance incorporate some form of randomness so that unbiased estimates can be obtained from the data, there are other equally important considerations (Myers 1997). Sampling and analysis activities must also include use of correct devices and procedures to minimize or control the random variability and biases (collectively known as "error") that can be introduced in field sampling, sample transport, subsampling, sample preparation, and analysis. Sampling error can lead to incorrect conclusions irrespective of the quality of the analytical measurements and the appropriateness of the statistical methods used to evaluate the data.

This section is organized into three subsections, which respond to the following questions:

1. What are the sources of error in sampling (Section 6.1)?

2. What is sampling theory (Section 6.2)?

3. How can you reduce or otherwise control sampling error in the field and laboratory (Section 6.3)?

6.1 Sources of Random Variability and Bias in Sampling

In conducting sampling, we are interested in obtaining an estimate of a population parameter (such as the mean, median, or a percentile); but an estimate of a parameter made from measurements of samples always will include some random variability (or variance) and bias (a systematic shift away from the true value) due primarily to (1) the inherent variability of the waste or media (the "between-sampling-unit variability") and (2) imprecision in the methods used to collect and analyze the samples (the "within-sampling-unit variability") (USEPA 2001e).

Errors caused by the sample collection process can be much greater than the preparation, analytical, and data handling errors (van Ee, et al. 1990; Crockett, et al. 1996) and can dominate the overall uncertainty associated with a characterization study (Jenkins, et al. 1996 and 1997). In fact, analytical errors are usually well-characterized, well-understood, and well-controlled by laboratory QA/QC, whereas sampling and sample handling errors are not usually well-characterized, well-understood, or well-controlled (Shefsky 1997). Because sampling error contributes to overall error, it is important for field and laboratory personnel to understand the sources of sampling errors and to take measures to control them in field sampling.

The two components of error -- random variability and bias -- are independent. This concept is demonstrated in the "target" diagram (see Figure 7 in Section 2), in which random variability (expressed as the variance, σ²) refers to the "degree of clustering" and bias (µ − x̄) relates to the "amount of offset from the center of the target" (Myers 1997).

Random variability and bias occur at each stage of sampling. Variability occurs due to the heterogeneity of the material sampled and random variations in the sampling and sample handling procedures. In addition, bias can be introduced at each stage by the sampling device (or the manner in which it is used), sample handling and transport, subsampling, and analysis.
Figure 23. Components of error and the additivity of variances and biases in sampling and analysis:

    MSE(x̄) = σ² + (bias)²

where the random variability term is the sum of the component variances,

    σ² = σb² + σs² + σa²

with σb² = between-sampling-unit variability (population variability), σs² = sampling and subsampling variability, and σa² = analytical variability; and where the systematic error term is the sum of all biases, including sampling bias (e.g., improper selection and use of sampling devices; loss or gain of constituents during sampling, transport, storage, subsampling, and sample preparation), analytical bias, statistical bias, and mistakes, blunders, or sabotage.
While it is common practice to calculate the variability of sample analysis results "after the fact," it is more difficult to identify the sources and potential impacts of systematic sampling bias. As discussed in more detail below, it usually is best to understand the potential sources of error "up front" and take measures to minimize them when planning and implementing the sampling and analysis program.

Even though random variability and bias are independent, they are related quantitatively (see Figure 23). Errors expressed as the variance can be added together to estimate the overall or "total study error." Biases can be added together to estimate overall bias (though sampling bias is difficult to measure in practice). Conceptually, the sum of all the variances can be added to the square of the sum of all biases and expressed as the mean square error, MSE(x̄), which provides a quantitative way of measuring the degree of representativeness of the samples. In practice, it is not necessary to try to calculate the mean square error; however, we suggest you understand the sources and impacts of variability and bias so you can take steps to control them in sampling and improve the representativeness of the samples. (See Sections 5.2.4 and 5.2.5 of EPA's Guidance for Data Quality Assessment, EPA QA/G-9 - QA00 Update (USEPA 2000d) for a more detailed discussion of how to address measurement variability and bias in the sampling design.)
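The additivity shown in Figure 23 is simple arithmetic. The sketch below combines hypothetical variance and bias components (placeholder values chosen only to show the computation, not typical magnitudes) into the mean square error.

```python
# Hypothetical variance components, in (mg/kg)^2.
var_between = 4.0     # between-sampling-unit (population) variability
var_sampling = 2.5    # sampling and subsampling variability
var_analytical = 0.5  # analytical variability

# Hypothetical bias components, in mg/kg.
bias_sampling = -1.2
bias_analytical = 0.3
bias_statistical = 0.0

total_variance = var_between + var_sampling + var_analytical
total_bias = bias_sampling + bias_analytical + bias_statistical

# MSE(x-bar) = sum of variances + (sum of biases)^2
mse = total_variance + total_bias ** 2
print(f"variance={total_variance}, bias={total_bias:.1f}, MSE={mse:.2f}")
```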

The relatively new science of sampling theory and practice (Myers 1997) provides a technically based approach for addressing sampling errors (see Section 6.2). Sampling theory recognizes that sampling errors arise from, or are related to, the size and distribution of particles in the waste, the weight of the sample, the shape and orientation of the sampling device, the manner in which the sample is collected, sample handling, and the manner in which subsampling is performed within the laboratory. Sampling theory applies to particulate solids, liquids, and mixtures of solids and liquids. Understanding sampling theory does not allow us to completely eliminate sampling and analytical errors, but it does allow us to identify the sources and magnitudes of sampling errors so we can take steps to minimize those that are the largest. In doing so, samples will be more precise and unbiased (i.e., more "representative"), thus reducing the number of samples required (lowering costs) and improving our ability to achieve the decision error rate specified in the DQOs.

6.2 Overview of Sampling Theory

A number of environmental scientists have recognized a set of sampling theories developed by Dr. Pierre Gy (Gy 1982 and 1998) and others (Ingamells and Switzer 1973; Ingamells 1974; Ingamells and Pitard 1986; Pitard 1989; and Visman 1969) as one set of tools for improving sampling. These researchers have studied the sources of sampling error (particularly in the sampling of particulate matter) and developed techniques for quantifying the amount of error that can be introduced by the physical sampling process. The theories were originally developed in support of mineral exploration and mining and more recently were adopted by EPA for soil sampling (van Ee, et al. 1990; Mason 1992). Under some conditions, however, the theories can be applied to waste sampling as a means for improving the efficiency of the sampling and analysis process (Ramsey, et al. 1989).

As discussed in the context of this guidance, Gy's theories focus on minimizing error during the physical collection of a sample of solid and liquid media and should not be confused with the statistical sampling designs (such as simple random, stratified random, etc.) discussed in Section 5. Both sampling theory and sampling design, however, are critical elements in sampling: Gy's theories facilitate collection of "correct" individual samples, while statistical sampling designs allow us to conduct statistical analyses and draw conclusions about the larger mass of waste or environmental media (i.e., the decision unit).

The following three subsections describe key aspects of sampling theory, including heterogeneity, sampling errors, and the concept of sample support. The descriptions are mostly qualitative and are intended to provide the reader with an appreciation for the types and complexities of sampling error. Detailed descriptions of the development and application of sampling theory can be found in Sampling for Analytical Purposes (Gy 1998), Geostatistical Error Management (Myers 1997), Pierre Gy's Sampling Theory and Sampling Practice (Pitard 1993), and EPA's guidance document Preparation of Soil Sampling Protocols: Sampling Techniques and Strategies (Mason 1992).

6.2.1 Heterogeneity

One of the underlying principles of sampling theory is that the medium to be sampled is not uniform in its composition or in the distribution of constituents within it; rather, it is heterogeneous. Heterogeneity is the cause of sampling errors.

Appropriate treatment of heterogeneity in sampling depends on the scale of observation. Large-scale variations in a waste stream or site affect where and when we take samples. Small-scale variations in a waste or media affect the size, shape, and orientation of individual field samples and laboratory subsamples. Gy's theory identifies three major types of heterogeneity: (1) short-range (or small-scale) heterogeneity, (2) long-range (or large-scale) heterogeneity, and (3) periodic heterogeneity:

Short-range heterogeneity refers to properties of the waste at the sample level or in the immediate vicinity of a sample location. Two other types of heterogeneity are found within short-range heterogeneity: one reflected by differences in the composition between individual particles, the other having to do with the distribution of those particles in the waste. Composition heterogeneity (also known as constitution heterogeneity) is constant and cannot be altered except by particle size reduction (e.g., grinding or crushing the material). Distribution heterogeneity plays an important role in sampling because particles can separate into groups. Distribution heterogeneity can be increased (e.g., by gravitational segregation of particles or liquids) and can be reduced by homogenization (mixing) or by taking many small increments to form a sample.

Large-scale heterogeneity reflects local trends and plays an important role in deciding whether to divide the population into smaller, internally homogeneous decision units or to use a stratified sampling design. See Appendix C for a detailed description of large-scale heterogeneity.

Periodic heterogeneity, another large-scale phenomenon, refers to cyclic phenomena found in flowing streams or discharges. Understanding periodic heterogeneity can aid in dividing a waste into separate waste streams or in establishing a stratified sampling design.

Forming a conceptual model of the heterogeneity of a waste will help you to determine how to address it in sampling.

6.2.2 Types of Sampling Error

Gy's theory (see also Mason 1992, Pitard 1993, and Gy 1998) identifies a number of different types of error that can occur in sampling as a result of heterogeneity in the waste and failure to correctly define the appropriate shape and volume of material for inclusion in the sample. Understanding the types and sources of the errors is an important step toward avoiding them. In qualitative terms, these errors include the following:

° Fundamental error, which is caused by differences in the composition of individual particles in the waste

° Errors due to segregation and grouping of particles and the constituent associated with the particles

° Errors due to various types of trends, including small-scale trends, large-scale trends, or cycles

° Errors due to defining (or delimiting) the sample space and extracting the sample from the defined area

° Errors due to preparation of the sample, including shipping and handling. [Note that the term "preparation," as used here, describes all the activities that take place after the primary sample is obtained in the field and includes sample containerization, preservation, handling, mixing, grinding, subsampling, and other preparative steps taken prior to analysis (such as the "sample preparation methods" described in Chapters Three, Four, and Five of SW-846).]

Figure 24. Effects of sample size on fundamental error. Small samples such as "A" cause the constituent of interest to be under-represented in most samples and over-represented in a small proportion of samples. Larger samples such as "B" more closely reflect the parent population.

Errors that can occur during sampling are described below.

6.2.2.1 Fundamental Error

The composition of a sample never perfectly matches the overall composition of the larger mass from which it was obtained because the mass of an individual sample is always less than the mass of the population and the population is never completely homogeneous. These conditions result in a sampling error known as fundamental error. The error is referred to as "fundamental" because it is an incompressible minimum sampling error that depends on the composition, shape, fragment size distribution, and chemical properties of the material, and it is not affected by homogenization or mixing. It arises when the constituent of interest is concentrated in constituent "nuggets" in a less concentrated matrix, especially when the constituent is present at a trace concentration level (e.g., less than 1 percent). This type of sampling error occurs even when the nuggets are mixed as well as possible in the matrix (so long as they are not dissolved). The fundamental error is the only error that remains when the sampling operation is "perfect"; that is, when all parts of the sample are obtained in a probabilistic manner and each part is independent.

As a conceptual example of fundamental error, consider a container filled with many white marbles and a few black marbles that have been mixed together well (Figure 24). If a small sample comprising only a few marbles is picked at random, there is a high probability that they would all be white (Sample "A" in Figure 24) and a small chance that one or more would be black. As the sample size becomes larger, the distribution in the sample will reflect more and more closely the parent population (Sample "B" in Figure 24). The situation is similar in a waste that contains rare, highly concentrated "nuggets" of a constituent of concern. If a small sample is taken, it is possible, and even likely, that no nuggets of the constituent would be selected as part of the sample. This would lead to a major underestimate of the true parameter of interest. It also is possible with a small sample that a gross overestimate of the parameter of interest will occur if a nugget is included in the sample, because the nugget would comprise a relatively large proportion of the analytical sample compared to the true population. To minimize fundamental error, the point is not to simply "fish" for a black marble (the contaminant), but to sample for all of the fragments and constituents such that the sample is a representation of the lot from which it is derived.¹

¹ This approach should not be confused with composite sampling, in which individual samples from different times or locations are pooled and mixed into a single sample.
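The marble example lends itself to a quick simulation. The sketch below uses an invented population (2 percent "nuggets" in a well-mixed lot, an assumption chosen only for illustration) to show how estimates from small samples scatter far more widely around the true value than estimates from larger samples, which is the fundamental error shrinking as sample mass grows.

```python
import random

random.seed(1)

# Hypothetical well-mixed lot: 2% "nuggets" (black marbles) in a matrix.
population = [1] * 20 + [0] * 980  # 1 = nugget, 0 = matrix particle
true_fraction = sum(population) / len(population)  # 0.02

def spread_of_estimates(sample_size, trials=2000):
    """Standard deviation of the estimated nugget fraction."""
    estimates = [
        sum(random.sample(population, sample_size)) / sample_size
        for _ in range(trials)
    ]
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

sd_small = spread_of_estimates(10)   # Sample "A": a few marbles
sd_large = spread_of_estimates(250)  # Sample "B": a much larger scoop
print(f"true fraction={true_fraction:.3f}  "
      f"sd(n=10)={sd_small:.4f}  sd(n=250)={sd_large:.4f}")
```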

Figure 25. How grouping and segregation of particles can affect sampling results. Grouping and segregation error can be minimized by taking many small increments.
The fundamental error is never zero (unless the population is completely homogeneous or the entire population is submitted for analysis), and it never "cancels out." It can be controlled by taking larger physical samples; however, larger samples can be difficult to handle in the field and within the laboratory, and they may pose practical constraints due to the increased space needed for storage. Furthermore, small samples (e.g., less than 1 gram) generally are required for analytical purposes. To preserve the character of a large sample in the small analytical sample, subsampling and particle size reduction strategies should be employed (see also Section 7.3).

6.2.2.2 Grouping and Segregation Error

Grouping and segregation error results from the short-range heterogeneity within and around the area from which a sample is collected (i.e., the sampling location) and within the sample container. This small-scale heterogeneity is caused by the tendency of some particles to associate into groups of like particles due to gravitational separation, chemical partitioning, differing moisture content, magnetism, or electrostatic charge. Grouping and segregation of particles can lead to sampling bias.

Figure 25 depicts grouping of particles (at "A") and segregation of particles (at "B") within a sample location. The grouping of particles at location "A" could result from an affinity between like particles (for example, due to electrostatic forces). Analytical samples formed from just one group of particles would yield biased results.

The segregation of particles at location "B" could result from gravitational separation (e.g., during sample shipment). If the contaminant of interest were associated with only one class of particle (for example, only the black diamond shapes), then a sample collected from the top would result in a different concentration than a sample collected from the bottom, thus biasing the sample.

Grouping and segregation error can be minimized by properly homogenizing and splitting the sample. As an alternative, an individual sample can be formed by taking a number of increments (small portions of media) in the immediate vicinity of the sampling location and combining them into the final collected sample.¹

¹ Pitard (1993) suggests collecting between 10 and 25 increments as a means to control grouping and segregation error. These increments are then combined to form an individual sample to be submitted to the laboratory for analysis.
The approach of taking multiple increments to form a sample is not recommended when volatile constituents are of interest, and it may have practical limitations when sampling highly heterogeneous wastes or debris containing very large fragments.
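The benefit of compositing many small increments can also be simulated. In the sketch below, a hypothetical container is fully segregated (concentration increasing with depth, an invented profile); a composite of 20 random-depth increments tracks the true mean far better than any single grab.

```python
import random

random.seed(7)

# Hypothetical fully segregated container: 100 layers whose constituent
# concentration (mg/kg) increases linearly with depth.
container = [float(depth) for depth in range(100)]
true_mean = sum(container) / len(container)  # 49.5 mg/kg

def single_grab():
    """One increment taken at a single random depth."""
    return random.choice(container)

def composite(n_increments=20):
    """Combine n small increments taken at independent random depths."""
    return sum(random.choice(container) for _ in range(n_increments)) / n_increments

trials = 2000
err_grab = sum(abs(single_grab() - true_mean) for _ in range(trials)) / trials
err_comp = sum(abs(composite() - true_mean) for _ in range(trials)) / trials
print(f"mean absolute error: single grab={err_grab:.1f}, composite of 20={err_comp:.1f}")
```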

6.2.2.3 Increment Delimitation Error

Increment delimitation error occurs when the shape of the sampling device excludes or discriminates against certain portions of the material to be sampled. For example, a sampling device that samples only the top portion of a liquid effluent as it leaves a discharge pipe (leaving a portion of the flow unsampled) causes increment delimitation error. This type of error is eliminated by choosing a sampling device capable of obtaining all of the flow for a fraction of the time (see also Sections 6.3.2 and 6.3.3).

6.2.2.4 Increment Extraction Error

Increment extraction error occurs when portions of the sample are lost or extraneous materials are included in the sample. For example, if the coring device is too small to accommodate a large fragment of waste, particles that should be in the sample might get pushed aside, causing sampling bias. Extraction error can be controlled through selection of devices designed to accommodate the physical characteristics of the waste.

6.2.2.5 Preparation Error

Preparation error results from incorrect preservation, handling, mixing, grinding, or subsampling that causes loss, contamination, or alteration of the sample such that it is no longer an accurate representation of the material being sampled. Proper choice and implementation of preparation methods control this error.

6.2.3 The Concept of "Sample Support"

The weight, shape (length, width, and height dimensions), and orientation of a sample describe the "sample support." The term "support" has been used in sampling and statistical literature in various ways, such as to describe the mass or volume of an "exposure unit" or "exposure area" in the Superfund program -- similar to the "decision unit" described in the DQO Process.

Conceptually, there is a continuum of support from the decision unit level (e.g., an exposure area of a waste site or a drum of solid waste) to the sample and subsample level, down to the molecular level. Because it is not possible to submit the entire decision unit for analysis, samples must be submitted instead. For heterogeneous media, the sample support will have a substantial effect on the reported measurement values.

Measures can be taken to ensure adequate size, shape, and orientation of a sample:

° The appropriate size of a sample (either volume or mass) can be determined based on the relationship that exists between the particle size distribution and the expected sampling error -- known as the fundamental error (see Section 6.2.2.1). In the DQO Process, you can define the amount of fundamental error that is acceptable (specified in terms of the standard deviation of the fundamental error) and estimate the volume required for field samples. The sampling tool should have dimensions three or more times larger than the diameter of the largest particles. Proper sizing of the sampling tool will help ensure that the particle size distribution of the sampled material is represented in the sample (see discussion at Section 6.3.1).

° The appropriate shape and orientation of the sample are determined by the sampling mode. For a one-dimensional waste (e.g., liquid flowing from a discharge pipe or solids on a conveyor belt), the correct or "ideal" sample is an undisturbed cross section delimited by two parallel planes (Pitard 1993, Gy 1998) (see discussion at Section 6.3.2.1). For three-dimensional waste forms (such as solids in a roll-off bin, piles, thick slabs, soil in drums, liquids in a tank, etc.), the sampling problem is best treated as a series of overlapping two-dimensional problems. The correct or ideal sample is an undisturbed core (Pitard 1993) that captures the entire thickness of the waste (see discussion at Section 6.3.2.2).
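The link between an acceptable fundamental error and sample mass can be made concrete with a commonly cited simplification of Gy's relationship (see Pitard 1993). The sketch below is illustrative only: the sampling constant and particle diameter are placeholder values, not recommendations, and the simplification assumes the lot mass is much larger than the sample mass.

```python
# Simplified Gy relationship for the fundamental error (Pitard 1993):
#     s_FE^2 ~ C * d^3 / M_s      (valid when lot mass >> sample mass)
# where s_FE is the relative standard deviation of the fundamental error,
# d is the top particle diameter (cm), M_s is the sample mass (g), and C
# is a sampling constant (g/cm^3) bundling shape, liberation, and
# composition factors. The values below are hypothetical placeholders.

C = 22.5      # hypothetical sampling constant, g/cm^3
d = 0.5       # top particle diameter, cm
s_fe = 0.10   # acceptable relative standard deviation (10%)

# Minimum field-sample mass that keeps fundamental error at or below s_fe.
min_mass = C * d ** 3 / s_fe ** 2
print(f"minimum sample mass = {min_mass:.1f} g")
```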

6.3 Practical Guidance for Reducing Sampling Error

This section describes steps that can be taken to control sampling error. While the details of sampling theory may appear complex and difficult to explain, in practice most sampling errors can be minimized by observing a few simple rules that, when used, can greatly improve the reliability of sampling results with little or no additional cost (Gy 1998):

° Determine the optimal mass of each field sample. For particulate solids, determine the appropriate sample weight based on the particle size distribution and characteristics, and consider any practical constraints (see Section 6.3.1). Also, determine the additional amounts of the sampled material needed for split samples, for field and laboratory quality control purposes, or for archiving.

° Select the appropriate shape and orientation of the sample based on the sampling design model identified in DQO Step 7 (see Section 6.3.2).

° Select sampling devices and procedures that will minimize grouping and segregation errors and increment delimitation and increment extraction errors (see Sections 6.3.3 and 7.1).

Implement the sampling plan by obtaining the number of samples at the sampling locations and times specified in the sampling design selected in DQO Step 7, and take measures to minimize preparation errors during sample handling, subsampling, analysis, documentation, and reporting. When collecting samples for analysis of volatile organic constituents, special considerations are warranted to minimize bias due to loss of constituents (see Section 6.3.4).

Table 7 provides a summary of strategies that can be employed to minimize the various types of sampling error.
Table 7. Strategies for Minimizing Sampling Error

Fundamental Error:
° To reduce variability caused by fundamental error, increase the volume of the sample.
° To reduce the volume of the sample and maintain low fundamental error, perform particle-size reduction followed by subsampling.
° When volatile constituents are of interest, do not grind or mix the sample. Rather, take samples using a method that minimizes disturbance of the sample material (see also Section 6.3.4).

Grouping and Segregation Error:
° To minimize grouping error, take many increments.
° To minimize segregation error, homogenize the sample (but beware of techniques that promote segregation).

Increment Delimitation/Extraction Errors:
° Select sampling devices that delimit and extract the sample so that all material that should be included in the sample is captured and retained by the device (Pitard 1993, Myers 1997).
° For one-dimensional wastes (e.g., flowing streams or waste on a conveyor), the correct or "ideal" sample is an undisturbed cross section delimited by two parallel planes (Pitard 1993, Gy 1998). To obtain such a sample, use a device that can obtain "all of the flow for a fraction of the time" (Gy 1998) (see also Section 6.3.2.1).
° For three-dimensional wastes (e.g., solids in a roll-off bin), the waste can be considered for practical purposes a series of overlapping two-dimensional wastes. The correct or "ideal" sample is an undisturbed vertical core (Pitard 1993, Gy 1998) that captures the full depth of interest.

Preparation Error:
° Take steps to prevent contamination of the sample during field handling and shipment. Sample contamination can be checked through preparation and analysis of field quality control samples such as field blanks, trip blanks, and equipment rinsate blanks.
° Prevent loss of volatile constituents through proper storage and handling.
° Minimize chemical transformations via proper storage and chemical/physical preservation.
° Take care to avoid unintentional mistakes when labeling sample containers, completing other documentation, and handling and weighing samples.

6.3.1 Determining the Optimal Mass of a Sample

As part of the DQO Process (Step 4 - Define the Boundaries), we recommend that you determine the appropriate size (i.e., the mass or volume), shape, and orientation of the primary field sample. For heterogeneous materials, the size, shape, and orientation of each field sample will affect the analytical result. To determine the optimal mass (or weight) of samples to be collected in the field, you should consider several key factors:

° The number and type of chemical and/or physical analyses to be performed on each sample, including extra volumes required for QA/QC. (For example, SW-846 Method 1311 (TCLP) specifies the minimum sample mass to be used for the extraction.)

° Practical constraints, such as the available volume of the material and the ability to collect, transport, and store the samples
2 In this section, we use the "relative variance" (s²/x̄²) and the "relative standard deviation" (s/x̄). The values are dimensionless and are useful for comparing results from different experiments.

° The characteristics of the matrix (such as particulate solid, sludge, liquid, debris, oily waste, etc.)

° Health and safety concerns (e.g., acutely toxic, corrosive, reactive, or ignitable wastes should be transported and handled in safe quantities)

° Availability of equipment and personnel to perform particle-size reduction (if needed) in the field rather than within a laboratory.

Often, the weight (or mass) of a field sample is determined by "whatever will fit into the jar." While this criterion may be adequate for some wastes or media, it can introduce serious biases, especially in the case of sampling particulate solids.

If a sample of particulate material is to be representative, then it needs to be representative of the largest particles of interest (Pitard 1993). This is relevant if the constituent of concern is not uniformly distributed across all the particle-size fractions. To obtain a sample representative of the largest particles of interest, the sample must be of sufficient weight (or mass) to control the amount of fundamental error introduced during sampling.

If the constituent(s) of concern is uniformly distributed throughout all the particle-size fractions, then determination of the optimal sample mass using Gy's approach will not improve the representativeness of the sample. Homogeneous or uniform distribution of contaminants among all particle sizes, however, is not a realistic assumption, especially for contaminated soils. In contaminated soils, concentrations of metals tend to be higher in the clay- and silt-size fractions, and organic contaminants tend to be associated with organic matter and fines in the soil.

The following material provides a "rule of thumb" approach for determining the particle-size/sample-weight relationship sufficient to maintain fundamental error (as measured by the standard deviation of the fundamental error) within desired limits. A detailed quantitative method is presented in Appendix D. Techniques for calculating the variance of the fundamental error also are presented in Mason (1992), Pitard (1993), Myers (1997), and Gy (1998).

The variance of the fundamental error (sFE²) is directly proportional to the size of the largest particle and inversely proportional to the mass of the sample.² To calculate the appropriate mass of the sample, Pitard (1989) proposed a "Quick Safety Rule" for use in environmental sampling based on a standard deviation of the fundamental error of 5 percent (sFE = ±5%):

    Ms ≥ 10,000 d³          Equation 18

where Ms is the mass of the sample in grams (g) and d is the diameter of the largest particle in centimeters (cm).
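As a worked illustration, Equation 18 can be applied directly: for a largest-particle diameter of 0.5 cm, the rule calls for at least 10,000 × (0.5)³ = 1,250 g per field sample. A minimal sketch of that calculation (the function name is ours, not from the guidance):

```python
def min_sample_mass_g(d_cm: float) -> float:
    """Minimum field-sample mass in grams under Pitard's "Quick Safety
    Rule" (Equation 18): Ms >= 10000 * d**3, where d is the diameter
    (in cm) of the largest particle of interest. This keeps the
    standard deviation of the fundamental error (sFE) near +/-5%.
    """
    return 10000.0 * d_cm ** 3

# Largest particle of interest is 0.5 cm across:
mass = min_sample_mass_g(0.5)   # 10000 * 0.125 = 1250 g
```

Note how strongly the required mass scales with particle size: doubling the largest-particle diameter multiplies the minimum sample mass by eight.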
[Figure 26 depicts three ways of cutting a sample from a moving stream relative to the direction of flow: (A) taking all of the flow part of the time; (B) taking part of the flow all of the time; and (C) taking part of the flow part of the time.]

Figure 26. Three ways of obtaining a sample from a moving stream. "A" is correct. "B" and "C" will obtain biased samples unless the material is homogeneous (modified after Gy 1998).
Alternatively, if we are willing to accept sFE = ±16%, we can use

    Ms ≥ 1,000 d³          Equation 19

An important feature of the fundamental error is that it does not "cancel out." On the contrary, the variance of the fundamental error adds together at each stage of subsampling. As pointed out by Myers (1997), the fundamental error quickly can accumulate and exceed 50 percent, 100 percent, 200 percent, or greater unless it is controlled through particle-size reduction at each stage of sampling and subsampling. The variance, sFE², calculated at each stage of subsampling and particle-size reduction, must be added together at the end to derive the total sFE². An example of how the variances of the fundamental error can be added together is provided in Appendix D.
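Because the variances add, the per-stage relative standard deviations combine in quadrature. A minimal sketch of that bookkeeping (the function name is ours; the guidance defers its worked example to Appendix D):

```python
import math

def total_fundamental_error(stage_sds: list[float]) -> float:
    """Combine per-stage relative standard deviations of the
    fundamental error. Variances (sFE**2) add across sampling and
    subsampling stages, so the total relative standard deviation is
    the square root of the summed per-stage variances.
    """
    return math.sqrt(sum(sd ** 2 for sd in stage_sds))

# Field sampling, laboratory subsampling, and analytical subsampling
# each held to sFE = 5% (0.05): the variances sum to 0.0075, so the
# total relative standard deviation is about 8.7%, not 15%.
total = total_fundamental_error([0.05, 0.05, 0.05])
```

The point the text makes still holds: every uncontrolled stage adds variance, so without particle-size reduction between stages the accumulated error can quickly become very large.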

6.3.2 Obtaining the Correct Shape and Orientation of a Sample

When sampling heterogeneous materials, the shape and orientation of the sampling device can affect the composition of the resulting samples and facilitate or impede achievement of DQOs. The following two subsections provide guidance on selecting the appropriate shape and orientation of samples obtained from a moving stream of material and a stationary batch or unit of material.

6.3.2.1 Sampling of a Moving Stream of Material

In sampling a moving stream of material, such as solids, liquids, and multi-phase mixtures moving through a pipe, on a conveyor, etc., the material can be treated as a one-dimensional mass. That is, the material is assumed to be linear in time or space.

The correct or "ideal" sample is an undisturbed cross section delimited by two parallel planes (Pitard 1993, Gy 1998). The approach is depicted in Figure 26, in which all of the flow is collected for part of the time. In practice, the condition can be met by using "cross-stream" sampling devices positioned at the discharge of a conveyor, hose, duct, etc. (Pitard 1993). Alternatively, in sampling solids from a conveyor belt, a transverse cutter or flat scoop (with vertical sides) can be used to obtain a sample, preferably with the conveyor stopped (though this condition may not be practical for large industrial conveyors).

For sampling of liquids, if the entire stream cannot be obtained for a fraction of the time (e.g., at the discharge point), then it may be necessary to introduce turbulence in the stream using baffles and to obtain a portion of the mixed stream part of the time (Pitard 1993).
[Figure 27 shows a decision unit sampled by four coring devices, "A" through "D," of different sizes, shapes, and orientations.]

Figure 27. Sampling a three-dimensional waste by treating the sampling problem as a series of overlapping two-dimensional wastes. Only device "A" provides the correct size, shape, and orientation of the sample.
6.3.2.2 Sampling of a Stationary Batch of Material

Sampling of a stationary batch of material, such as filter cake in a roll-off bin, soil in a drum, or liquid in a tank, can be approached by viewing the three-dimensional space as a series of overlapping two-dimensional (i.e., relatively flat) masses in a horizontal plane. The correct or "ideal" sample is a core that obtains the full thickness of the material of interest.

For example, Figure 27 shows a bin of granular waste with fine-grain material in the upper layer and larger fragments in the bottom layer. The entire batch of material is the "decision unit." Coring device "A" is correct: it is wide enough and long enough to include the largest fragments in the waste. Coring device "B" is too narrow. It either fails to capture the larger particles or simply pushes them out of the way (causing increment delimitation error). Device "C," a trowel or small shovel, can collect an adequate volume of sample, but it preferentially selects only the finer-grained material near the top of the bin. Device "D" is the correct shape, but it is not in the correct orientation. Devices "B," "C," and "D" yield incorrect sample support.

6.3.3 Selecting Sampling Devices That Minimize Sampling Errors

As part of the project planning process, you should establish performance goals for the sampling devices to be used and understand the possible limitations of any candidate sampling devices or equipment. The performance goals can then be used to select specific sampling devices or technologies with a clear understanding of the limitations of those devices in the field. Detailed guidance on the selection of specific sampling devices is provided in Section 7 and Appendix E of this document.

6.3.3.1 General Performance Goals for Sampling Tools and Devices

Selection of the appropriate sampling device and sampling method will depend on the sampling objectives, the physical characteristics of the waste or media, the chemical constituents of concern, the sampling location, and practical concerns such as technology limitations and safety issues (see also Section 7). The following general performance goals apply to the selection of sampling devices for use in those situations in which it is desirable to control or otherwise minimize biases introduced by the sampling device:

° The device should not include portions of the waste that do not belong in the sample, nor exclude portions that do (in other words, the device should minimize delimitation and extraction errors).
° If volatile constituents are of interest, the device should obtain samples in an undisturbed state to minimize loss of volatile constituents.

° The device should be constructed of materials that will not alter analyte concentrations due to loss or gain of analytes via sorption, desorption, degradation, or corrosion.

° The device should retain the appropriate size (volume or mass) and shape of sample, and obtain it in the orientation appropriate for the sampling condition, preferably in one pass.

Other considerations not related to performance follow:

° "Ease of use" of the sampling device under the conditions that will be encountered in the field. This includes the ease of shipping to and from the site, ease of deployment, and ease of decontamination.

° The degree of hazard associated with the deployment of one sampling device versus another (e.g., consider use of an extension pole instead of a boat to sample from a waste lagoon).

° Cost of the sampling device and of the labor (e.g., single vs. multiple operators) for its deployment (including training) and maintenance.

6.3.3.2 Use and Limitations of Common Devices

Unfortunately, many sampling devices in common use today lack the properties required to minimize certain types of sampling error. In fact, there are few devices available that satisfy all the general performance goals stated above. Pitard (1993), however, has identified a number of devices that can help minimize delimitation and extraction error (depending on the physical form of the waste to be sampled). These devices include:

° COLIWASA (or "composite liquid waste sampler") -- for sampling free-flowing liquids in drums or containers
° Shelby tube or similar device -- for obtaining core samples of solids
° Kemmerer depth sampler -- for obtaining discrete samples of liquids
° Flat scoop (with vertical walls) -- for subsampling solids on a flat surface.

Some devices in common use that can cause delimitation and extraction errors include the following: auger, shovel, spoon, trowel, thief, and trier. In spite of the limitations of many conventional sampling devices, it is necessary to use them under some circumstances encountered in the field because there are few alternatives. When selecting a sampling tool, choose the one that will introduce the least sampling error. In cases in which no such tool exists, document the approach used and be aware of the types of errors likely introduced and their possible impact on the sampling results. To the extent possible and practicable, minimize sampling errors by applying the concepts presented in this chapter.
6.3.4 Special Considerations for Sampling Waste and Soils for Volatile Organic Compounds

In most contaminated soils and other solid waste materials, volatile organic compounds (VOCs), when present, coexist in gaseous, liquid, and solid (sorbed) phases. Of particular concern with regard to the collection, handling, and storage of samples for VOC characterization is the retention of the gaseous component. This phase exhibits molecular diffusion coefficients that allow for the immediate loss of gas-phase VOCs from a freshly exposed surface and continued losses from well within a porous matrix. Furthermore, once the gaseous phase becomes depleted, nearly instantaneous volatilization from the liquid and sorbed phases occurs in an attempt to restore the equilibrium that often exists, thereby allowing the impact of this loss mechanism to continue.

Another mechanism that can influence VOC concentrations in samples is biological degradation. In general, this loss mechanism is not expected to be as large a source of determinate error as volatilization. This premise is based on the observation that losses of an order of magnitude can occur on a time scale of minutes to hours due solely to diffusion and advection, whereas losses of a similar magnitude due to biological processes usually require days to weeks. Furthermore, under aerobic conditions, which are typical of most samples that are transported and stored, biological mechanisms favor the degradation of aromatic hydrocarbons over halogenated compounds. Therefore, besides the slower rate of analyte loss, biodegradation is compound selective.

To limit the influence of volatilization and biodegradation losses, which, if not addressed, can bias results by one or more orders of magnitude, it is currently recommended that sample collection and preparation, though not necessarily preservation, follow one or the other of these two protocols:

° The immediate in-field transfer of a sample into a weighed volatile organic analysis vial that either contains VOC-free water, so that a vapor-partitioning (purge-and-trap or headspace) analysis can be performed without reopening, or that contains methanol for analyte extraction in preparation for analysis, or

° The collection and up to 2-day storage of intact samples in airtight containers before initiating one of the aforementioned sample preparation procedures.

In both cases, samples should be held at 4 ± 2 °C while being transported from the sampling location to the laboratory.

The Standard Guide for Sampling Waste and Solids for Volatile Organics (ASTM D 4547-98) is recommended reading for those unfamiliar with the many challenges associated with collecting and handling samples for VOC analysis.
For additional guidance on the selection and use of sampling tools and devices, see:

° 40 CFR 261, Appendix I, Representative Sampling Methods
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities (ASTM D 6232)
7 IMPLEMENTATION: SELECTING EQUIPMENT AND CONDUCTING SAMPLING

This section provides guidance on selecting appropriate sampling tools and devices (Section 7.1), conducting field sampling activities (Section 7.2), and using sample homogenization, splitting, and subsampling techniques (Section 7.3).

7.1 Selecting Sampling Tools and Devices

The tools, devices, and methods used for sampling waste materials will vary with the form, consistency, and location of the waste materials to be sampled. As part of the DQO Process, you identify the location (type of unit or other source description) from which the samples will be obtained and the "dimension" of the sampling problem (such as "one-dimensional" or "two-dimensional"). In the DQO Process, you also specify the appropriate size, shape, orientation, and other characteristics for each sample (called the "sample support"). In addition to the DQOs for the sample, you will identify performance goals for the sampling device. You may need a device that meets the following qualifications:

° Minimizes delimitation and extraction errors, so that it does not include material that should not be in the sample, nor exclude material that should be in the sample

° Provides a largely undisturbed sample (e.g., one that minimizes the loss of volatile constituents, if those are constituents of concern)

° Is constructed of materials that are compatible with the media and the constituents of concern (e.g., the materials of construction do not cause constituent loss or gain due to sorption, desorption, degradation, or corrosion)

° Is easy to use under the conditions of the sampling location, and poses minimal health or safety risks to workers

° Is easy to decontaminate

° Is cost-effective during use and maintenance.

Unfortunately, few devices will satisfy all of the above goals for a given waste or medium and sampling design. When selecting a device, try first to choose one that will introduce the least sampling error and satisfy other performance criteria established by the planning team, within practical constraints.

Figure 28 summarizes the steps you can use to select an optimal device for obtaining samples.
1 ASTM is a consensus standards development organization. Consistent with the provisions of the National Technology Transfer and Advancement Act of 1995 (NTTAA), Public Law 104-113, Section 12(d), which directs EPA to use voluntary consensus standards to the extent possible, this guidance supports the use of and provides references to ASTM standards applicable to waste sampling.
Step 1: Identify the medium (e.g., liquid or sludge) in Table 8 that best describes the material to be sampled.

Step 2: Select the location or point of sample collection (e.g., conveyor, drum, tank, etc.) in Table 8 for the medium selected in Step 1.

Step 3: Identify candidate sampling devices in the third column of Table 8. For each, review the information in Table 9 and the device summaries in Appendix E.

Step 4: Select a sampling device based on its ability to (1) obtain the correct size, shape, and orientation of the samples, and (2) meet other performance goals specified by the planning team.

Figure 28. Steps for selecting a sampling device
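Steps 1 through 3 amount to a keyed lookup from a (medium, collection point) pair to a list of candidate devices for review. The sketch below illustrates the idea in Python; the entries are examples drawn from devices mentioned elsewhere in this guidance, not a transcription of Table 8, which remains the authoritative source:

```python
# Illustrative stand-in for the Table 8 lookup (entries are examples
# from this guidance, not the actual table contents).
CANDIDATE_DEVICES = {
    ("liquid", "drum"): ["COLIWASA"],
    ("liquid", "sampling port"): ["dipper", "sample container itself"],
    ("solid", "conveyor"): ["flat scoop (vertical walls)"],
    ("solid", "surface soil"): ["Shelby tube or similar corer"],
}

def candidates(medium: str, location: str) -> list[str]:
    """Steps 1-3: return candidate devices for the given medium and
    point of sample collection, for review against Table 9 and the
    device summaries in Appendix E. Unknown pairs return an empty list.
    """
    return CANDIDATE_DEVICES.get((medium.lower(), location.lower()), [])
```

Step 4 remains a judgment call: among the candidates, pick the device best able to obtain the correct size, shape, and orientation of sample while meeting the planning team's other performance goals.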
Using the outputs from the DQO Process, a description of the medium to be sampled, and knowledge of the site or location of sample collection, Tables 8 and 9 (beginning on pages 109 and 115, respectively) can be used to quickly identify an appropriate sampling device. For most situations, the information in the tables will be sufficient to make an equipment selection; however, if you need additional guidance, review the more detailed information provided in Appendix E or refer to the references cited.

If desired, you can refer to the documents (such as ASTM standards) referenced by Table 8 for supplementary guidance specific to sampling a particular medium and site, or refer to those referenced by Table 9 for supplementary guidance on a device.1

1 The contents of the ASTM standards are summarized in Appendix J. (For more information on ASTM or purchasing their publications, including the standards referenced in this chapter, contact ASTM at: ASTM, 100 Barr Harbor Drive, West Conshohocken, PA 19428-2959, by telephone at 610-832-9585, or via the World Wide Web at http://www.astm.org.)

In particular, we recommend that you review the guidance found in ASTM Standard D 6232, Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities. Most of the information on sampling devices found in this chapter and in Tables 8 and 9 came from that standard. As noted by the standard, it covers criteria that should be considered when selecting sampling equipment for collecting environmental and waste samples for waste management activities. It also describes many of the typical devices used during such sampling. Because each sampling situation is unique, the guidance in this chapter may not adequately cover your specific sampling scenario. You may have to modify a part of the device or modify the device application to improve its performance or to facilitate sample collection. For example, you might use a rope or an extension handle on a device to access a particular location within a waste management unit. In other cases, you may need auxiliary equipment that will increase the cost or complexity of the sampling operation (such as a drill rig to drive a split-barrel sampler or a power supply to run a pump). The physical state of the waste or design of the unit also may affect how the equipment is deployed. You should address such variations as part of your sampling plan and make sure that any modifications do not cause sampling bias.

Finally, other sampling devices not addressed in this chapter can and should be used if appropriate (e.g., if the device meets the performance goals and is more practical). New or innovative devices not discussed in this chapter also should be considered for use if they allow you to meet the sampling objectives in a more cost-effective manner. In other words, we encourage and recommend a performance-based approach for selecting sampling equipment.

7.1.1 Step 1: Identify the Waste Type or Medium to be Sampled

The first column of Table 8 (page 109) lists the media type or waste matrix commonly sampled under RCRA. These media may include liquids, sludges or slurries, various unconsolidated solids, consolidated solids and debris, soil, ground water, sediment, soil gas, and air. In general, the types of media describe the physical state of the material to be sampled. The physical characteristics of the waste or medium affect many aspects of sampling, including the volume of material required, selection of the appropriate sampling device, how the device is deployed, and the containers used for the samples. Table 10 provides an expanded description of the media listed in Table 8.

7.1.2 Step 2: Identify the Site or Point of Sample Collection

In the second column of Table 8, identify the site or point of sample collection that best describes where you plan to obtain the samples. The "site or point of sample collection" may include (1) the point at which the waste is generated (e.g., as the waste exits a pipe, moves along a conveyor, or is poured or placed into a container, tank, impoundment, or other waste management unit); (2) the unit in which the waste is stored (such as a drum, collection hopper, tank, waste pile, surface impoundment, sack, or bag) or transported (such as a drum, tanker truck, or roll-off box); or (3) the environmental medium to be sampled (such as surface soil, subsurface soil, ground water, surface water, soil gas, or air).

When testing a solid waste to determine if it should be characterized as a hazardous waste or to determine if the waste is restricted from land disposal, such a determination must be made at the point of waste generation.

7.1.2.1 Drums and Sacks or Bags

Drums and sacks or bags are portable containers used to store, handle, or transport waste materials, and sometimes are used in waste disposal (e.g., drums in a landfill). "Drums" include metal drums and pails, plastic drums, or durable fiberboard paper drums or pails (USEPA 1994a). Drums and pails may contain nearly the full range of media: liquids (single or multi-layered), sludges, slurries, or solids. Sacks or bags include less rigid portable containers and thus can contain only solids. The sampling approach (including number of samples, locations of samples, sampling device, and depth of samples) for these containers will depend on the number of containers to be sampled, waste accessibility, physical and chemical characteristics of the waste, and component distribution within the containers.

Review ASTM Standards D 6063, Guide for Sampling Drums and Similar Containers by Field Personnel, and D 5679, Practice for Sampling Consolidated Solids in Drums or Similar Containers, for more information on the sampling of drums and sacks or bags. Other useful guidance on sampling drums includes "Drum Sampling" (USEPA 1994b), issued by EPA's Environmental Response Team.

7.1.2.2 Surface Impoundments

Surface impoundments include natural depressions, manmade excavations, or diked areas that contain an accumulation of liquids or wastes containing free liquids and solids. Examples of surface impoundments are ponds, lagoons, and holding, storage, settling, and aeration pits (USEPA 1994a). The appropriate device for sampling a surface impoundment will depend on accessibility of the waste, the type and number of phases of the waste, the depth, and the chemical and physical characteristics of the waste.

7.1.2.3 Tanks

A tank is defined at § 260.10 as a stationary device, designed to contain an accumulation of hazardous waste, which is constructed primarily of non-earthen materials which provide structural support. A container is defined at § 260.10 as a portable device, in which a material is stored, transported, treated, disposed of, or otherwise handled. The distinction that a tank is not a container is important because the regulations at § 261.7 set forth conditions to distinguish whether hazardous waste in a container is subject to regulation. Nevertheless, for the purpose of selecting an appropriate sampling device, the term "tank" as used in Table 8 could include other units such as tank trucks and tanker cars even though they are portable devices.

The selection of equipment for sampling the pipes and sampling ports of a tank system is covered separately under those categories. The equipment used to sample a pipe or spigot can be very different from that used to sample an open tank.

Tanks usually contain liquids (single or multi-layered), sludges, or slurries. In addition, suspended solids or sediments may have settled in the bottom of the tank. When sampling from a tank, one typically considers how to acquire a sufficient number of samples from different locations (including depths) to adequately represent the entire content of the tank.

Waste accessibility and component distribution will affect the sampling strategy and equipment selection. In addition to discharge valves near the bottom, most tanks have hatches or other openings at the top. It is usually desirable to collect samples via a hatch or opening at the top of the tank because of the potential of waste stratification in the tank (USEPA 1996b). In an open tank, the size of the tank may restrict sampling to the perimeter of the tank. Usually, the most appropriate type of sampling equipment for tanks depends on the design of the tanks and the media contained within the tank.

You can find additional guidance on sampling tanks in "Tank Sampling" (USEPA 1994c), issued by the EPA's Environmental Response Team.
7.1.2.4 Pipes, Point Source Discharges, or Sampling Ports

For the purpose of this guidance, pipes or point source discharges include moving streams of sludge or slurry discharging from a pipe opening, sluice, or other discharge point (such as the point of waste generation). Sampling ports include controlled liquid discharge points that were installed for the purpose of sampling, such as may be found on tank systems, a tank truck, or leachate collection systems at waste piles or landfills.

A dipper also is used to sample liquids from a sampling port. Typically, it is passed through the stream in one sweeping motion so that it is filled in one pass. In that instance, the size of the dipper beaker should be related to the stream flow rate. If the cross-sectional area of the stream is too large, more than one pass may be necessary to obtain a sample (USEPA 1993b). Besides the use of a dipper or other typical sampling devices, sometimes the sample container itself is used to sample a spigot or point source discharge. This eliminates the possibility of contaminating the sample with intermediate collection equipment, such as a dipper (USEPA 1996b).
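As a rough illustration of relating dipper size to stream flow rate (the function and numbers below are hypothetical, not from the guidance): a single sweep through the full stream intercepts a volume equal to the flow rate times the sweep duration, and that volume must fit within the dipper's capacity for a one-pass cut to work.

```python
def single_pass_feasible(dipper_volume_l: float,
                         stream_flow_lps: float,
                         sweep_time_s: float) -> bool:
    """Hypothetical sizing check: a one-pass cut works only if the
    volume intercepted during the sweep (flow rate * sweep duration)
    does not exceed the dipper's capacity.
    """
    intercepted_l = stream_flow_lps * sweep_time_s
    return intercepted_l <= dipper_volume_l

# A 1 L dipper swept through a 0.5 L/s discharge over 1.5 s
# intercepts 0.75 L, so a single pass suffices in this scenario.
ok = single_pass_feasible(1.0, 0.5, 1.5)
```

If the check fails, either a larger dipper or more than one pass would be needed, consistent with the note above from USEPA (1993b).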

See ASTM D 5013-89, Standard Practices for Sampling Wastes from Pipes and Other Point Discharges, for more information on sampling at this location. Also see Gy (1998) and Pitard (1989, 1993).

7.1.2.5 Storage Bins, Roll-Off Boxes, or Collection Hoppers

Discharges of unconsolidated solids from a process, such as filter cakes, often fall from the process into a collection hopper or other type of open-topped storage container. Sometimes the waste materials are combined into a large storage bin, such as a roll-off box or collection hopper. A storage bin also may be used to collect consolidated solids, such as construction debris. The waste can be sampled either as it is placed in the container or after a certain period of accumulation, depending on the technical and regulatory objectives of the sampling program.

7.1.2.6 Waste Piles

Waste piles include the non-containerized accumulation of solid and nonflowing waste material on land. The size of waste piles can range from small heaps to large aggregates of wastes. Liners may underlie a waste pile, thereby preventing direct contact with the soil. As with other scenarios, waste accessibility and heterogeneity will be key factors in the sampling design and equipment selection. Besides the devices listed in this chapter, excavation equipment may be needed at first to properly sample large piles. Waste piles may present unique sample delimitation problems (Pitard 1993 and Myers 1997), and special considerations related to sampling design may be necessary (such as the need to flatten the pile).

We recommend a review of ASTM Standard D 6009, Guide for Sampling Waste Piles, for more information. Another source of information on sampling waste piles is "Waste Pile Sampling" (USEPA 1994d), issued by EPA's Environmental Response Team.

7.1.2.7 Conveyors

Solid process discharges are sometimes sampled from conveyors such as conveyor belts or screw conveyors. Conveyor belts are open moving platforms used to transport material between locations. Solid or semi-solid wastes on a conveyor belt can be sampled with a flat scoop or similar device (see also Section 6.3.2.1). Screw conveyors usually are enclosed systems that require access via a sampling port, or they can be sampled at a discharge point. See also ASTM D 5013 and Gy (1998, pages 43 through 56).

7.1.2.8 Structures and Debris

This guidance assumes that the sampling of structures or debris typically will include the sampling of consolidated solids such as concrete, wood, or other structural debris. Appendix C provides supplemental guidance on developing a sampling strategy for such heterogeneous wastes. See also AFCEE (1995), Koski, et al. (1991), Rupp (1990), USEPA and USDOE (1992), and ASTM Standard D 5956, Standard Guide for Sampling Strategies for Heterogeneous Wastes.

7.1.2.9 Surface or Subsurface Soil

Selection of equipment for sampling soil is based on the depth of sampling, the grain-size distribution, the physical characteristics of the soil, and the chemical parameters of interest (such as the need to analyze the samples for volatiles). Your sampling strategy should specify the depth and interval (e.g., "0 to 6 inches below ground surface") of interest for the soil samples.

Simple manual techniques and equipment can be used for surface or shallow-depth sampling. To obtain samples of soil from greater depths, powered equipment (e.g., power augers or drill rigs) will be required; however, such equipment is not used for the actual sample collection, but solely to gain easier access to the required sample depth (USEPA 1996b). Once the desired depth is reached, surface sampling devices may be used.

ASTM has developed many informative standards on the sampling of soil, including D 4700, Standard Guide for Soil Sampling from the Vadose Zone, and D 4220, Standard Practices for Preserving and Transporting Soil Samples. In addition, see EPA-published guidance such as Preparation of Soil Sampling Protocols: Sampling Techniques and Strategies (Mason 1992) and Description and Sampling of Contaminated Soils - A Field Pocket Guide (USEPA 1991b).

7.1.3 Step 3: Consider Device-Specific Factors

After you identify the medium and site of sample collection, refer to the third column of Table 8 for the list of candidate sampling devices. We listed common devices that are appropriate for the given media and site. Next, refer to the information in Table 9 for each of the candidate devices to select the most appropriate one for your sampling effort.

Table 9 provides device-specific information to help you choose the appropriate device based on the study objective and the DQOs established for volume (size), shape, depth, and orientation of the sample, and sample type (discrete or composite, surface or at depth).

For easy reference, the devices are listed alphabetically in Table 9. Appendix E contains a summary description of key features of each device and sources for other information. Under the third column in Table 9, "Other Device-Specific Guidance," we have identified some of those sources, especially relevant ASTM standards (see summaries of ASTM standards in Appendix J).
7.1.3.1 Sample Type

The "Sample Type" column in Table 9 identifies whether the device can sample at the surface only, at a shallow depth, or at a deeper profile (depth), and whether the device can obtain a discrete sample or a composite sample. For example, a COLIWASA or drum thief can be used to sample a container that is 3 feet deep, but a Kemmerer sampler may be required to sample the much greater depth of an impoundment. We also identify in this column whether the device collects an undisturbed or disturbed solid sample. Note that the actual depth capacity may depend on the design of the device. Some devices can be modified or varied to collect at different depths or locations in a material. You should refer to the device summary in Appendix E if you need specifics regarding the sampling depth available for a given device.

7.1.3.2 Sample Volume

The volume column in Table 9 identifies the range of sample volume, in liters, that the device can obtain. It may be possible to increase or decrease this value through modification of the device. During the planning process, you should determine the correct volume of sample needed. Volume is one of the components of sample "support" (that is, the size, shape, and orientation of the sample).

7.1.3.3 Other Device-Specific Considerations

The last column of Table 9 notes other considerations for device selection. The comments focus on those factors that may introduce error or that might increase the time or cost of sampling. For some devices, the column includes comments on how easy the equipment is to use, such as whether it needs a power source or is heavy, and whether it can be decontaminated easily. The table also mentions whether the device is appropriate for samples requiring the analysis of volatile organic constituents, along with any other important considerations regarding analyte and device compatibility. The equipment should be constructed of materials that are compatible with the waste and not susceptible to reactions that might alter or bias the physical or chemical characteristics of the waste sample.

7.1.4 Step 4: Select the Sampling Device

Select the sampling device based on its ability to (1) obtain the correct size, shape, and orientation of the samples (see Sections 6.3.1 and 6.3.2) and (2) meet any other performance criteria specified by the planning team in the DQO Process (see Section 6.3.3.1). In addition, samples to be analyzed for volatile organic constituents should be obtained using a sampling technique that will minimize the loss of constituents and obtain the sample volume required for the analytical method (see Section 6.3.4).
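The four-step workflow above amounts to a table lookup (media and site of collection to candidate devices, per Table 8) followed by filtering on device-specific factors such as obtainable volume (per Table 9). The sketch below is purely illustrative: the media names, device lists, and volume ranges are a small hypothetical subset chosen for the example, not a complete or authoritative transcription of Tables 8 and 9.

```python
# Hypothetical sketch of the Steps 1-4 device-selection workflow.
# The entries below are an illustrative subset, not a complete or
# authoritative transcription of Tables 8 and 9.

# Steps 1-2: (media, site of collection) -> candidate devices (cf. Table 8)
CANDIDATE_DEVICES = {
    ("liquid, no distinct layer", "drum"): [
        "COLIWASA", "dipper", "drum thief", "syringe sampler"],
    ("granular solid, unconsolidated", "waste pile"): [
        "bucket auger", "thin-walled tube", "trier"],
}

# Step 3: device-specific factors (cf. Table 9), here reduced to the
# obtainable sample volume range in liters per pass.
DEVICE_VOLUME_L = {
    "COLIWASA": (0.5, 3.0),
    "dipper": (0.5, 1.0),
    "drum thief": (0.1, 0.5),
    "syringe sampler": (0.2, 0.5),
    "bucket auger": (0.2, 1.0),
    "thin-walled tube": (0.5, 5.0),
    "trier": (0.1, 0.5),
}

def select_devices(media, site, required_volume_l):
    """Step 4: keep only candidates whose volume range covers the
    sample support volume determined during the DQO Process."""
    devices = []
    for device in CANDIDATE_DEVICES.get((media, site), []):
        low, high = DEVICE_VOLUME_L[device]
        if low <= required_volume_l <= high:
            devices.append(device)
    return devices
```

In practice the filter would also consider depth capability, sample type (discrete or composite), VOA suitability, and materials compatibility, as described in Sections 7.1.3.1 through 7.1.3.3.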
Table 8. Device Selection Guide -- Media and Site of Sample Collection

For each media type (see Section 7.1.1), the table lists the site or point of sample collection (see Section 7.1.2), the candidate devices (listed alphabetically; for device-specific information, see Table 9), and other related guidance.

Media: Liquids, no distinct layer of interest
Examples: Containerized spent solvents, leachates or other liquids discharged from a pipe or spigot.

  Site: Drum
    Candidate devices: COLIWASA; Dipper; Drum thief; Liquid grab sampler; Peristaltic pump; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler; Valved drum sampler
    Other guidance: ASTM D 5743; ASTM D 6063; EPA/ERT SOP 2009 (USEPA 1994b)*

  Site: Surface impoundment
    Candidate devices: Automatic sampler; Bacon bomb; Bailer; Bladder pump; Centrifugal submersible pump; Dipper; Displacement pump; Kemmerer sampler; Liquid grab sampler; Peristaltic pump; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler
    Other guidance: ASTM D 6538; USEPA (1984, 1985, and 1989c)

  Site: Tank
    Candidate devices: Bacon bomb; Bailer; COLIWASA; Dipper; Drum thief; Kemmerer sampler; Liquid grab sampler; Peristaltic pump; Plunger type sampler; Settleable solids profiler; Submersible pump; Swing jar sampler; Syringe sampler
    Other guidance: ASTM D 6063; ASTM D 5743; EPA/ERT SOP 2010 (USEPA 1994c)

  Site: Pipe, point source discharge
    Candidate devices: Automatic sampler; Bladder pump; Centrifugal submersible pump; Dipper; Displacement pump; Liquid grab sampler; Plunger type sampler; Sample container; Swing jar sampler
    Other guidance: ASTM D 5013; ASTM D 5743; ASTM D 6538; Gy 1998

  Site: Sampling port (e.g., spigot)
    Candidate devices: Beaker, bucket, sample container; Swing jar sampler
    Other guidance: Gy 1998

* Copies of EPA/ERT SOPs are available on the Internet at http://www.ert.org/

Media: Liquids, multi-layered, with one or more distinct layers of interest
Examples: Non-aqueous phase liquids (NAPLs) in a tank; mixtures of antifreeze in a tank.

  Site: Drum
    Candidate devices: COLIWASA; Discrete level sampler; Drum thief; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler; Valved drum sampler
    Other guidance: ASTM D 6063

  Site: Surface impoundment
    Candidate devices: Automatic sampler; Bacon bomb; Bailer (point source bailer); Bladder pump; Centrifugal submersible pump; Discrete level sampler; Displacement pump; Peristaltic pump; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler
    Other guidance: ASTM D 6538; USEPA (1989c)

  Site: Tank
    Candidate devices: Bacon bomb; Bailer; Centrifugal submersible pump; COLIWASA; Discrete level sampler; Peristaltic pump; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler; Valved drum sampler
    Other guidance: ASTM D 6063; ASTM D 5743; EPA/ERT SOP 2010 (USEPA 1994c)

Media: Sludges, slurries, and solid-liquid suspensions
Examples: Paint sludge, electroplating sludge, and ash and water slurry.

  Site: Drum
    Candidate devices: COLIWASA; Dipper; Liquid grab sampler; Plunger type sampler; Settleable solids profiler; Swing jar sampler; Syringe sampler
    Other guidance: ASTM D 6063

  Site: Tank
    Candidate devices: COLIWASA; Dipper; Lidded sludge/water sampler; Liquid grab sampler; Plunger type sampler; Ponar dredge; Settleable solids profiler; Swing jar sampler; Syringe sampler
    Other guidance: ASTM D 6063; EPA/ERT SOP 2010 (USEPA 1994c)

  Site: Surface impoundment
    Candidate devices: Dipper; Lidded sludge/water sampler; Liquid grab sampler; Peristaltic pump; Plunger type sampler; Ponar dredge; Settleable solids profiler; Swing jar sampler
    Other guidance: USEPA (1989c)

  Site: Pipe or conveyor
    Candidate devices: Dipper or bucket; Scoop/trowel/shovel; Swing jar sampler
    Other guidance: ASTM D 5013

Media: Granular solids -- unconsolidated
Examples: Filter press cake, powders, excavated (ex situ) soil, incinerator ash.

  Site: Drum
    Candidate devices: Bucket auger; Coring type sampler (w/ valve); Miniature core sampler; Modified syringe sampler; Scoop/trowel/shovel; Trier
    Other guidance: ASTM D 5680; ASTM D 6063; EPA/ERT SOP 2009 (USEPA 1994b)

  Site: Sack or bag
    Candidate devices: Concentric tube thief; Miniature core sampler; Modified syringe sampler; Scoop/trowel/shovel; Trier
    Other guidance: ASTM D 5680; ASTM D 6063

  Site: Storage bin, roll-off box, or collection hopper
    Candidate devices: Bucket auger; Concentric tube thief; Coring type sampler (w/ valve); Miniature core sampler; Modified syringe sampler; Scoop/trowel; Trier
    Other guidance: ASTM D 5680; ASTM D 6063

  Site: Waste pile
    Candidate devices: Bucket auger; Concentric tube thief; Coring type sampler (w/ valve); Miniature core sampler; Modified syringe sampler; Scoop/trowel/shovel; Thin-walled tube; Trier
    Other guidance: ASTM D 6009; EPA/ERT SOP 2017 (USEPA 1994d)

  Site: Pipe (e.g., vertical discharge from cyclone centrifuge or baghouse) or conveyor
    Candidate devices: Bucket, dipper, pan, or sample container; Miniature core sampler; Scoop/trowel/shovel; Trier
    Other guidance: ASTM D 5013; Gy (1998); Pitard (1993)

Media: Other solids -- unconsolidated
Examples: Waste pellets, catalysts, or large-grained solids.

  Site: Drum
    Candidate devices: Bucket auger; Scoop/trowel/shovel
    Other guidance: ASTM D 5680; ASTM D 6063; EPA/ERT SOP 2009 (USEPA 1994b)

  Site: Sack or bag
    Candidate devices: Bucket auger; Scoop/trowel/shovel
    Other guidance: ASTM D 5680; ASTM D 6063

  Site: Storage bin, roll-off box, or collection hopper
    Candidate devices: Bucket auger; Scoop/trowel/shovel
    Other guidance: ASTM D 5680; ASTM D 6063

  Site: Waste pile
    Candidate devices: Bucket auger; Scoop/trowel/shovel; Split barrel; Thin-walled tube
    Other guidance: ASTM D 6009; EPA/ERT SOP 2017 (USEPA 1994d)

  Site: Conveyor
    Candidate devices: Scoop/trowel/shovel
    Other guidance: ASTM D 5013; Gy (1998); Pitard (1993)

Media: Soil and other unconsolidated geologic material
Examples: In situ soil at a land treatment unit or in situ soil at a SWMU.

  Site: Surface
    Candidate devices: Bucket auger; Concentric tube thief; Coring type sampler; Miniature core sampler; Modified syringe sampler; Penetrating probe sampler; Scoop/trowel/shovel; Thin-walled tube; Trier
    Other guidance: ASTM D 5730; ASTM E 1727; ASTM D 4700; EISOPQA Manual (USEPA 1996b)

  Site: Subsurface
    Candidate devices: Bucket auger; Coring type sampler; Miniature core sampler; Modified syringe sampler; Penetrating probe sampler; Scoop/trowel/shovel; Split barrel; Thin-walled tube
    Other guidance: ASTM D 4700; ASTM D 5730; ASTM D 6169; ASTM D 6282; USEPA (1996b); USEPA (1993c)

Media: Solids -- consolidated
Examples: Concrete, wood, architectural debris.**

  Site: Storage bin (e.g., roll-off box)
    Candidate devices: Penetrating probe sampler; Rotating coring device
    Other guidance: ASTM D 5679; ASTM D 5956; ASTM D 6063; USEPA and USDOE (1992)

  Site: Waste pile
    Candidate devices: Penetrating probe sampler; Rotating coring device; Split barrel
    Other guidance: ASTM D 6009; USEPA and USDOE (1992)

  Site: Structure
    Candidate devices: Rotating coring device (see also Appendix C, Section C.5)
    Other guidance: AFCEE (1995); Koski et al. (1991); USEPA and USDOE (1992)

** The term "debris" has a specific definition under 40 CFR 268.2(g) (Land Disposal Restrictions regulations) and includes "solid material exceeding a 60 mm particle size that is intended for disposal and that is a manufactured object; or plant or animal matter; or natural geologic material." § 268.2(g) also identifies materials that are not debris. In general, debris includes materials of either a large particle size or variation in the items present.

Selected References for Sampling of Other Media

  Air (Example: BIF emissions)
    Chapter Ten, SW-846; EISOPQA Manual (USEPA 1996b)

  Sediment (Example: Surface impoundment sediment)
    QA/QC Guidance for Sampling and Analysis of Sediments, Water, and Tissues for Dredged Material Evaluations (USEPA 1995d); Superfund Program Representative Sampling Guidance Volume 5: Water and Sediment, Part I -- Surface Water and Sediment, Interim Final Guidance (USEPA 1995e); Region 4 EISOPQA Manual (USEPA 1996b); Sediment Sampling (USEPA 1994e); ASTM D 4823; ASTM D 5387

  Soil gas or vapor (Examples: Soil, soil water, or gas in the vadose zone at a waste disposal site)
    Subsurface Characterization and Monitoring Techniques - A Desk Reference Guide (USEPA 1993c); ASTM Standard Guide for Soil Gas Monitoring in the Vadose Zone (ASTM D 5314); Soil Gas Sampling (USEPA 1996c)

  Ground water (Example: Ground-water monitoring wells at a landfill)
    RCRA Ground-Water Monitoring Draft Technical Guidance (USEPA 1992c); Low-Flow (Minimal Drawdown) Ground-Water Sampling Procedures (Puls and Barcelona 1996); ASTM D 4448-01, Standard Guide for Sampling Ground-Water Monitoring Wells; ASTM D 5092-90, Standard Practice for Design and Installation of Ground Water Monitoring Wells in Aquifers; ASTM D 6286-98, Standard Guide for Selection of Drilling Methods for Environmental Site Characterization; ASTM D 6282, Standard Guide for Direct Push Soil Sampling for Environmental Site Characterizations; ASTM D 6771-02, Standard Practice for Low-Flow Purging and Sampling for Wells and Devices Used for Ground-Water Quality Investigations
Table 9. Device Selection Guide -- Device-Specific Factors

For each sampling device (listed in alphabetical order), the table gives the Appendix E section describing the device; other device-specific guidance (in addition to ASTM D 6232); the sample type; the volume obtainable (liters per pass); and comments (for example: effects on matrix, operational considerations, typical uses).

Automatic sampler (Appendix E, Section E.1.1)
  Other guidance: ASTM D 6538; EISOPQA Manual (USEPA 1996b)
  Sample type: Shallow (25 in.), discrete or composite
  Volume: Unlimited
  Comments: Automatic samplers are available to collect samples for volatile organics analysis, provide a grab or composite sample, and may be left unattended. Needs a power source/battery. Commonly used at waste water treatment plants. Must be knowledgeable of compatibility of waste and sampler components.

Bacon bomb (E.3.1)
  Other guidance: USEPA 1984; USEPA 1994c
  Sample type: Depth, discrete
  Volume: 0.1 to 0.5
  Comments: For parameters that do not require a polytetrafluoroethylene (PTFE) sampler. Recommended for sampling of lakes, ponds, large tanks, or lagoons. May be difficult to decontaminate, and materials of construction may not be compatible with sample matrix.

Bailer (E.7.1)
  Other guidance: ASTM D 4448; USEPA 1992c; USEPA 1994c
  Sample type: Depth, discrete
  Volume: 0.5 to 2.0
  Comments: Bailers are not recommended for sampling ground water for trace constituent analysis due to sampling-induced turbidity (USEPA 1992c and Puls and Barcelona 1996). Unable to collect samples from specific depths (unless a point-source bailer is used). Available in a variety of sizes as either reusable or single-use devices. May be chemically incompatible with certain matrices unless constructed of resistant material.

Bladder pump (E.1.2)
  Other guidance: ASTM D 4448; USEPA 1992c; USEPA 1996b
  Sample type: Depth, discrete
  Volume: Unlimited
  Comments: For purging or sampling of wells, surface impoundments, or point discharges. Contact parts are made of PTFE, PVC, and stainless steel. Requires a power source, compressed gas, and a controller. Difficult to decontaminate (based on design). Suitable for samples requiring VOAs. May require a winch or reel.

Bucket auger (E.5.1)
  Other guidance: ASTM D 1452; ASTM D 4700; ASTM D 6063; Mason 1992; USEPA 1993c
  Sample type: Surface or depth, disturbed
  Volume: 0.2 to 1.0
  Comments: Easy and quick for shallow subsurface samples but not recommended for VOAs. Requires considerable strength and labor and destroys soil horizons.

Centrifugal submersible pump (E.1.4)
  Other guidance: ASTM D 4448; ASTM D 4700; USEPA 1992c
  Sample type: Depth, discrete
  Volume: Unlimited
  Comments: For purging or sampling wells, surface impoundments, or point discharges. Contact parts are made of PTFE and stainless steel. Requires a power source. Adjustable flow rate and easy to decontaminate. Not compatible with liquids containing a high percentage of solids. May require a winch or reel.

COLIWASA (E.6.1)
  Other guidance: ASTM D 5495; ASTM D 5743; ASTM D 6063; USEPA 1980
  Sample type: Shallow, composite
  Volume: 0.5 to 3.0
  Comments: Reusable and single-use models available. Inexpensive. Glass-type devices may be difficult to decontaminate. Collects an undisturbed sample. For mixed solid/liquid media, will collect semi-liquid only. Not for high-viscosity liquids.

Concentric tube thief (E.4.3)
  Other guidance: ASTM D 6063; USEPA 1994d
  Sample type: Surface, relatively undisturbed, selective
  Volume: 0.5 to 1.0
  Comments: Recommended for powdered or granular materials or wastes in piles or in bags, drums, or similar containers. Best used in dry, unconsolidated materials. Not suitable for sampling large particles due to narrow width of slot.

Coring type sampler (with or without valve) (E.4.6)
  Other guidance: ASTM D 4823; USEPA 1989c
  Sample type: Surface or depth, disturbed
  Volume: 0.2 to 1.5
  Comments: Designed for wet soils and sludge. May be equipped with a plastic liner and caps. May be used for VOAs. Reusable and easy to decontaminate.

Dipper (or "pond sampler") (E.7.2)
  Other guidance: ASTM D 5358; ASTM D 5013; USEPA 1980
  Sample type: Shallow, composite
  Volume: 0.5 to 1.0
  Comments: For sampling liquids in surface impoundments. Inexpensive. Not appropriate for sampling stratified waste if discrete characterization is needed.

Discrete level sampler (E.3.5)
  Sample type: Depth, discrete
  Volume: 0.2 to 0.5
  Comments: Easy to decontaminate. Obtains samples from a discrete interval. Limited by sample volume and by liquids containing high solids. Can be used to store and transport sample.

Displacement pumps (E.1.5)
  Other guidance: ASTM D 4448
  Sample type: Depth, discrete
  Volume: Unlimited
  Comments: Can be used for purging or sampling of wells, impoundments, or point discharges. Contact parts are made of PVC, stainless steel, or PTFE to reduce risk of contamination when trace levels of organics are of interest. Requires a power source and a large gas source. May be difficult to decontaminate (piston displacement type). May require a winch or reel to deploy.

Drum thief (E.6.2)
  Other guidance: ASTM D 6063; ASTM D 5743; USEPA 1994b
  Sample type: Shallow, composite
  Volume: 0.1 to 0.5
  Comments: Usually single use. If made of glass and reused, decontamination may be difficult. Limited by length of sampler, small volume of sample collected, and viscosity of fluids.

Kemmerer sampler (E.3.2)
  Sample type: Depth, discrete
  Volume: 1.0 to 2.0
  Comments: Recommended for lakes, ponds, large tanks, or lagoons. May be difficult to decontaminate. Materials may not be compatible with sample matrix, but all-PTFE construction is available. Sample container is exposed to media at other depths while being lowered to the sample point.

Lidded sludge/water sampler (E.3.4)
  Sample type: Discrete, composite
  Volume: 1.0
  Comments: A 1-L sample jar is placed into the device (low risk of contamination). May sample at different depths and samples up to 40-percent solids. Equipment is heavy and limited to one bottle size.

Liquid grab sampler (E.7.3)
  Sample type: Shallow, discrete, composite; suspended solids only
  Volume: 0.5 to 1.0
  Comments: For sampling liquids or slurries. Can be capped and used to transport sample. Easy to use. May be lowered to specific depths. Compatibility with sample parameters is a concern.

Miniature core sampler (E.4.7)
  Other guidance: ASTM D 4547; ASTM D 6418
  Sample type: Discrete
  Volume: 0.01 to 0.05
  Comments: Used to retrieve samples from surface soil, trench walls, or subsamples from soil cores. O-rings on plunger and cap minimize loss of volatiles and allow the device to be used to transport the sample. Designed for single use. Cannot be used on gravel or rocky soils; must avoid trapping air with samples.

Modified syringe sampler (E.4.8)
  Other guidance: ASTM D 4547
  Sample type: Discrete
  Volume: 0.01 to 0.05
  Comments: Made by modifying a plastic, medical, single-use syringe. Used to collect a sample from a material surface or to subsample a core. The sample is transferred to a vial for transportation. Inexpensive. Must ensure the device is clean and compatible with the media to be sampled.

Penetrating probe sampler (E.4.1)
  Other guidance: USEPA 1993c
  Sample type: Discrete, undisturbed
  Volume: 0.2 to 2.0
  Comments: Used to sample soil vapor, soil, and ground water (pushed or hydraulically driven). Versatile; makes samples available for onsite analysis and reduces investigation-derived waste. Limited by sample volume and composition of subsurface material.

Peristaltic pump (E.1.3)
  Other guidance: ASTM D 4448; ASTM D 6063; USEPA 1996b
  Sample type: Shallow, discrete or composite; suspended solids only
  Volume: Unlimited
  Comments: Possible to collect samples from multiple depths up to 25 feet. Decontamination of the pump is not required and tubing is easy to replace. Can collect samples for purgeable organics with modified equipment, but may cause loss of VOAs.

Plunger type sampler (E.6.4)
  Other guidance: ASTM D 5743
  Sample type: Surface or depth, discrete
  Volume: 0.2 to unlimited
  Comments: Made of high-density polyethylene (HDPE) or PTFE with optional glass sampling tubes. Used to collect a vertical column of liquid. Either a reusable or single-use device. Decontamination may be difficult (with glass tubes).

Ponar dredge (E.2.1)
  Other guidance: ASTM D 4387; ASTM D 4342; USEPA 1994e
  Sample type: Bottom surface, rocky or soft, disturbed
  Volume: 0.5 to 3.0
  Comments: One of the most effective samplers for general use on all types of substrates (silt to granular material). May be difficult to repeatedly collect representative samples. May be heavy.

Rotating coring device (E.5.2)
  Other guidance: ASTM D 5679
  Sample type: Surface or depth, undisturbed
  Volume: 0.5 to 1.0
  Comments: May obtain a core of consolidated solid. Requires power and water sources and is difficult to operate. Sample integrity may be affected.

Scoop (E.7.5)
  Other guidance: ASTM D 5633; ASTM D 4700; ASTM D 6063
  Sample type: Surface, disturbed, selective
  Volume: <0.1 to 0.6
  Comments: Usually for surface soil and solid waste samples. Available in different materials and simple to obtain. May bias the sample because of particle size. May exacerbate loss of VOCs.

Settleable solids profiler (E.6.5)
  Sample type: Depth, composite; suspended solids only
  Volume: 1.3 to 4.0
  Comments: Typically used at waste water treatment plants, waste settling ponds, and impoundments to measure and sample settleable solids. Easy to assemble, reusable, and unbreakable under normal use. Not recommended for caustics or high-viscosity materials.

Shovel (E.7.5)
  Other guidance: ASTM D 4700
  Sample type: Surface, disturbed
  Volume: 1.0 to 5.0
  Comments: Used to collect surface material or large samples from waste piles. Easy to decontaminate and rugged. Limited to surface use and may exacerbate the loss of samples for VOAs.

Split barrel sampler (E.4.2)
  Other guidance: ASTM D 1586; ASTM D 4700; ASTM D 6063
  Sample type: Discrete, undisturbed
  Volume: 0.5 to 30.0
  Comments: May be driven manually, or mechanically by a drill rig with trained personnel. May collect a sample at depth. A liner may be used in the device to minimize disturbance or for samples requiring VOAs.

Swing jar sampler (E.7.4)
  Sample type: Shallow, composite
  Volume: 0.5 to 1.0
  Comments: Used to sample liquids, powders, or small solids at a distance up to 12 feet. Adaptable to different container sizes. Not suitable for discrete samples. Can sample a wide variety of locations.

Syringe sampler (E.3.3)
  Other guidance: ASTM D 5743; ASTM D 6063
  Sample type: Shallow, discrete, disturbed
  Volume: 0.2 to 0.5
  Comments: Recommended for highly viscous liquids, sludges, and tar-like substances. Easy to decontaminate. Obtains samples at discrete depths but limited to length of device. Waste must be viscous enough to stay in the sampler.

Thin-walled tube (E.4.5)
  Other guidance: ASTM D 1587; ASTM D 4823; ASTM D 4700
  Sample type: Surface or depth, undisturbed
  Volume: 0.5 to 5.0
  Comments: Useful for collecting an undisturbed sample (depends on extension). May require a catcher to retain soil samples. Inexpensive and easy to decontaminate. Samples for VOAs may be biased when the sample is extruded.

Trier (E.4.4)
  Other guidance: ASTM D 5451; ASTM D 6063
  Sample type: Surface, relatively undisturbed, selective
  Volume: 0.1 to 0.5
  Comments: Recommended for powdered or granular materials or wastes in piles or in bags, drums, or similar containers. Best for moist or sticky materials. Will introduce sampling bias when used to sample coarse-grained materials.

Trowel (E.7.5)
  Other guidance: ASTM D 5633; ASTM D 4700; ASTM D 6063
  Sample type: Surface, disturbed, selective
  Volume: 0.1 to 0.6
  Comments: Usually for surface soil and solid waste samples. Available in different materials and simple to obtain. May bias the sample because of particle size, and may exacerbate loss of VOAs.

Valved drum sampler (E.6.3)
  Sample type: Shallow, composite
  Volume: 0.3 to 1.6
  Comments: Used to collect a vertical column of liquid. Available in various materials for repeat or single use. High-viscosity liquids may be difficult to sample.
Table 10. Descriptions of Media Listed in Table 8

Liquids -- no distinct layer of interest
  Description: Liquids (aqueous or nonaqueous) that may or may not be stratified and for which samples from discrete intervals are not of interest. Sampling devices for this medium do not need to be designed to collect liquids at discrete depths.
  Examples: Containerized leachates or spent solvents; leachates or other liquids released from a spigot or discharged from a pipe.

Liquids -- one or more distinct layers of interest
  Description: Liquids (aqueous or nonaqueous) that are stratified with distinct layers and for which collection of samples from discrete intervals is of interest. Sampling devices for this medium do need to be designed to collect liquids at discrete depths.
  Examples: Mixtures of antifreeze and used oil; light or dense non-aqueous phase liquids and water in a container, such as a tank.

Sludges or slurries
  Description: Materials that are a mixture of liquids and solids and that may be viscous or oily. Includes materials with suspended solids.
  Examples: Waste water treatment sludges from electroplating; slurry created by combining solid waste incinerator ash and water.

Granular solids, unconsolidated
  Description: Solids that are not cemented, or do not require significant pressure to separate into particles, and are comprised of relatively small particles or components.
  Examples: Excavated (ex situ) soil in a staging pile; filter press cake; fresh cement kiln dust; incinerator ash.*

Other solids, unconsolidated
  Description: Solids with larger particles than those covered by granular solids. The sampling device needs to collect a larger diameter or volume of sample to accommodate the larger particles.
  Examples: Waste pellets or catalysts.

* For EPA-published guidance on the sampling of incinerator ash, see Guidance for the Sampling and Analysis of Municipal Waste Combustion Ash for the Toxicity Characteristic (USEPA 1995f).

Soil (in situ) and other unconsolidated geologic material
  Description: Soil in its original undisturbed location, or other geologic material that does not require significant pressure to separate into particles. In situ soil sampling may be conducted at surface or subsurface depths. Surface soils generally are defined as soils between the ground surface and 6 to 12 inches below the ground surface (USEPA 1996b); however, the definition of surface soils in State programs may vary considerably from EPA's.
  Examples: Subsurface soil at a land treatment unit; surface soil contaminated by a chemical spill on top of the ground or soil near a leak from an excavated underground storage tank.**

Solids, consolidated
  Description: Cemented or otherwise dense solids that require significant physical pressure to break apart into smaller parts.
  Examples: Concrete, wood, and architectural debris.

Air
  Description: For the purpose of RCRA sampling, air includes emissions from stationary sources or indoor air.
  Examples: Emissions from boilers and industrial furnaces (BIFs).***

Sediment
  Description: Settled, unconsolidated solids beneath a flowing or standing liquid layer.
  Examples: Sediment in a surface water body.

Soil gas or vapor
  Description: Gas or vapor phase in the vadose zone. The vadose zone is the hydrogeological region extending from the soil surface to the top of the principal water table.
  Examples: Soil gas overlying a waste disposal site.

Ground water
  Description: "Water below the land surface in a zone of saturation" (40 CFR 260.10). Water can also be present below the land surface in the unsaturated (vadose) zone.
  Examples: Ground water in monitoring wells surrounding a hazardous waste landfill.****

** Detailed guidance on soil sampling can be found in Preparation of Soil Sampling Protocols: Sampling Techniques and Strategies (Mason 1992), which provides a discussion of the advantages and disadvantages of various sample collection methods for soil.

*** See Chapter Ten of SW-846 for EPA-approved methods for sampling air under RCRA.

**** Detailed guidance on ground-water sampling can be found in RCRA Ground-Water Monitoring -- Draft Technical Guidance (USEPA 1992c), which updates technical information in Chapter Eleven of SW-846 (Rev. 0, Sept. 1986) and the Technical Enforcement Guidance Document (TEGD).
7.2 Conducting Field Sampling Activities

This section provides guidance on field sampling activities that typically are performed during implementation of the sampling plan. Additional guidance can be found in Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual (USEPA 1994a); Environmental Investigations Standard Operating Procedures and Quality Assurance Manual, U.S. EPA Region 4, May 1996 (USEPA 1996b); other USEPA guidance cited in the reference section of this chapter; and various ASTM standards summarized in Appendix J of this guidance. See also Appendix C of EPA's Guidance for Quality Assurance Project Plans (USEPA 1998a). The latter document includes extensive checklists, including the following:

° Sample handling, preparation, and analysis checklist
° QAPP review checklist
° Chain-of-custody checklist.

In this section, we provide guidance on the following topics:

° Sample containers (Section 7.2.1)
° Sample preservation and holding times (Section 7.2.2)
° Documentation of field activities (Section 7.2.3)
° Field quality control samples (Section 7.2.4)
° Sample identification and chain-of-custody procedures (Section 7.2.5)
° Decontamination of equipment and personnel (Section 7.2.6)
° Health and safety (Section 7.2.7)
° Sample packaging and shipping (Section 7.2.8).

7.2.1 Selecting Sample Containers

All samples should be placed in containers of a size and construction appropriate for the volume of material specified in the sampling plan and as appropriate for the requested analyses. If sufficient sample volume is not collected, the analysis of all requested parameters and complete quality control determinations may not be possible. In addition, minimum sample volumes may be required to control sampling errors (see Section 6). Chapters Two, Three, and Four of SW-846 identify the appropriate containers for RCRA-related analyses by SW-846 methods.

It
is
important
to
understand
that
a
single
"sample"
may
need
to
be
apportioned
to
more
than
one
container
to
satisfy
the
volume
and
preservation
requirements
specified
by
different
categories
of
analytical
methods.
Furthermore,
the
analytical
plan
may
require
transport
of
portions
of
a
sample
to
more
than
one
laboratory.
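The apportionment described above can be sketched in code. The analyses, container descriptions, and portion counts below are illustrative assumptions only (actual container requirements come from Chapters Two, Three, and Four of SW-846); the sketch simply shows how one field sample fans out into multiple containers and laboratories:

```python
# Hypothetical illustration of apportioning one field sample among
# containers. The container types and portion counts are assumptions
# for demonstration, not values taken from SW-846.

REQUIREMENTS = {
    "volatile organics": {"container": "40-mL VOA vial, zero headspace", "portions": 2},
    "metals":            {"container": "500-mL polyethylene bottle",     "portions": 1},
    "semivolatiles":     {"container": "1-L amber glass, lined lid",     "portions": 1},
}

def apportion(sample_id, analyses, labs):
    """Return one container entry per requested analysis per laboratory."""
    plan = []
    for lab in labs:
        for analysis in analyses:
            req = REQUIREMENTS[analysis]
            plan.append({
                "sample": sample_id,
                "lab": lab,
                "analysis": analysis,
                "container": req["container"],
                "portions": req["portions"],
            })
    return plan

plan = apportion("BLH01", ["volatile organics", "metals"], ["Lab A", "Lab B"])
for entry in plan:
    print(entry["lab"], entry["analysis"], "->", entry["portions"], "x", entry["container"])
```

Even this toy example shows why a single "sample" quickly becomes several containers: two analyses sent to two laboratories already require six filled containers.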

Factors
to
consider
when
choosing
containers
are
compatibility
with
the
waste
components,
cost,
resistance
to
breakage,
and
volume.
Containers
must
not
distort,
rupture,
or
leak
as
a
result
of
chemical
reactions
with
constituents
of
waste
samples.
The
containers
must
have
adequate
wall
thickness
to
withstand
handling
during
sample
collection
and
transport.
For
analysis
of
non­
volatile
constituents,
containers
with
wide
mouths
are
often
desirable
to
facilitate
Chapters
Two,
Three,
and
Four
of
SW­
846
identify
some
of
the
appropriate
containers
for
RCRA­
related
analyses
by
SW­
846
methods.
2
For
example,
when
inspections
are
conducted
under
Section
3007
of
RCRA
(42
U.
S.
C.
§
6927),
and
samples
are
obtained,
EPA
must
provide
a
split
sample
to
the
facility,
upon
request.

123
transfer
of
samples
from
the
equipment.
The
containers
must
be
large
enough
to
contain
the
optimum
sample
volume
specified
in
the
DQO
Process.

You should store samples containing light-sensitive organic constituents in amber glass bottles with Teflon®-lined lids. Polyethylene containers are not appropriate for use when the samples are to be analyzed for organic constituents because the plastics could contribute organic contaminants and potentially introduce bias. If liquid samples are to be submitted for analysis of volatile compounds, you must store the samples in air-tight containers with zero headspace. You can store samples intended for metals and other inorganic constituent analyses in polyethylene containers with polyethylene-lined lids. We recommend that you consult with a chemist for further direction regarding chemical compatibility of available containers and the media to be sampled. We recommend that an extra supply of containers be available at the sampling location in case you want to collect more sample material than originally planned or you need to retain splits of each sample.²
Always use clean sample containers of an assured quality. For container cleaning procedures and additional container information, refer to the current iteration of Specifications and Guidance for Contaminant-Free Sample Containers (USEPA 1992d). You may wish to purchase pre-cleaned/quality-assured bottles in lieu of cleaning your own bottles (USEPA 2001g).
7.2.2 Sample Preservation and Holding Times

Samples are preserved to minimize any chemical or physical changes that might occur between the time of sample collection and analysis. Preservation can be by physical means (e.g., keeping the sample at a certain temperature) or chemical means (e.g., adding chemical preservatives). If a sample is not preserved properly, the levels of constituents of concern in the sample may be altered through chemical, biological, or photo-degradation, or by leaching, sorption, or other chemical or physical reactions within the sample container.

The
appropriate
method
for
preserving
a
sample
will
depend
on
the
physical
characteristics
of
the
sample
(such
as
soil,
waste,
water,
etc.),
the
concentration
of
constituents
in
the
sample,
and
the
analysis
to
be
performed
on
the
sample.
Addition
of
chemical
preservatives
may
be
required
for
samples
to
be
analyzed
for
certain
parameters.
You
should
not
chemically
preserve
highly
concentrated
samples.
Samples
with
low
concentrations,
however,
should
be
preserved.
You
should
consult
with
a
chemist
at
the
laboratory
regarding
the
addition
of
chemical
preservatives
and
the
possible
impact
on
the
concentration
of
constituents
in
the
sample.
Also,
be
aware
that
addition
of
some
chemical
preservatives
to
highly
concentrated
waste
samples
may
result
in
a
dangerous
reaction.

Regardless of preservation measures, the concentrations of constituents within a sample can degrade over time. Therefore, you also should adhere to sample holding times (the time from sample collection to analysis), particularly if the constituents of concern are volatiles at low concentrations. Analytical data generated outside of the specified holding times are considered to be minimum values only. You may use such data to demonstrate that a waste is hazardous where the value of a constituent of concern is above the regulatory threshold, but you cannot use the data to demonstrate that a waste is not hazardous. Exceeding a holding time when the results are above a decision level does not invalidate the data.
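The holding-time rule above lends itself to a small worked example. This is a minimal sketch assuming a hypothetical 14-day holding time and a hypothetical regulatory threshold; actual holding times are method- and analyte-specific and should be taken from SW-846 or the analytical method:

```python
# Sketch of the holding-time logic described above. The 14-day holding
# time and the threshold of 5.0 are illustrative assumptions only.
from datetime import datetime, timedelta

def classify_result(collected, analyzed, holding_time, value, threshold):
    """A result produced past its holding time is a minimum value only:
    usable to show a waste IS hazardous when it exceeds the regulatory
    threshold, but not usable to show the waste is NOT hazardous."""
    within = (analyzed - collected) <= holding_time
    if within:
        return "valid for any determination"
    if value > threshold:
        return "minimum value; still demonstrates waste is hazardous"
    return "minimum value; cannot demonstrate waste is not hazardous"

collected = datetime(2002, 8, 1, 9, 30)
analyzed = datetime(2002, 8, 20, 14, 0)      # 19 days later, past 14-day limit
print(classify_result(collected, analyzed, timedelta(days=14),
                      value=6.2, threshold=5.0))
# -> minimum value; still demonstrates waste is hazardous
```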

Appropriate sample preservation techniques and sample holding times for aqueous matrices are listed in Chapters Two, Three, and Four of SW-846. You should also consult the methods to be used during analysis of the sampled waste. In addition, Standard Guide for Sampling Waste and Soil for Volatile Organic Compounds (ASTM D 4547-98) provides information regarding the preservation of volatile organic levels in waste and soil samples.

7.2.3 Documentation of Field Activities

This section provides guidance on documenting field activities. Records of field activities should be legible, identifiable, retrievable, and protected against damage, deterioration, and loss. You should record all documentation in waterproof, non-erasable ink. If you make an error in any of these documents, make corrections by crossing a single line through the error and entering the correct information adjacent to it. The corrections should then be initialed and dated. Stick-on labels should not be removable without evidence of tampering. Do not put labels over previously recorded information.

Keep a dedicated logbook for each sampling project, with the name of the project leader, team members, and project name written inside the front cover. Document all aspects of sample collection and handling in the logbook. Entries should be legible, accurate, and complete. The language should be factual and objective.

You also should include information regarding sample collection equipment (use and decontamination); field analytical equipment and the associated measurements, calculations, and calibration data; the name of the person who collected the sample; sample numbers; sample location description and diagram or map; sample description; time of collection; climatic conditions; and observations of any unusual events. Document the collection of QC samples and any deviations from procedural documents, such as the QAPP and SOPs.

When videos, slides, or photographs are taken, you should number them to correspond to logbook entries. The name of the photographer, date, time, site location, and site description should be entered sequentially in the logbook as photos are taken. A series entry may be used for aperture settings and shutter speeds for photographs taken within the normal automatic exposure range. Special lenses, films, filters, or other image enhancement techniques must be noted in the logbook. Chain-of-custody procedures for photoimages depend on the subject matter, type of film, and the processing it requires. Adequate logbook notations and receipts may be used to account for routine film processing. Once developed, the slides or photographic prints should be serially numbered to correspond to the logbook descriptions and labeled (USEPA 1992e).

7.2.4 Field Quality Control Samples

Quality control samples are collected during field studies to monitor the performance of sample collection and the risk of sampling bias or errors. Field QC samples could include the following:
[Figure 29. Sample label -- an example label carrying the name of the sampling organization and fields for: plant, date, time, media, sample type, sampled by, sample ID no., location, station, and preservative.]
Equipment blank: A rinse sample of the decontaminated sampling equipment, using organic/analyte-free water under field conditions, collected to evaluate the effectiveness of equipment decontamination or to detect sample cross-contamination.

Trip blank: A sample prepared prior to the sampling event and stored with the samples throughout the event. It is packaged for shipment with the samples and not opened until the shipment reaches the laboratory. The sample is used to identify any contamination that may be attributed to sample handling and shipment.

Field blank: A sample prepared in the field using organic/analyte-free water to evaluate the potential for contamination by site contaminants not associated with the sample collected (e.g., airborne organic vapors).

Field split sample: Two or more representative portions taken from the same sample and submitted for analysis to different laboratories. Field split samples are used to estimate interlaboratory precision.
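One common way to summarize interlaboratory precision from a pair of field split results is the relative percent difference (RPD). A minimal sketch, using made-up split-sample results:

```python
# Relative percent difference (RPD) between two field split results.
# The concentrations below are hypothetical illustrative values.

def rpd(result_a, result_b):
    """RPD = |a - b| / mean(a, b) * 100."""
    mean = (result_a + result_b) / 2.0
    return abs(result_a - result_b) / mean * 100.0

lab_a, lab_b = 48.0, 52.0        # mg/kg, hypothetical split-sample results
print(f"RPD = {rpd(lab_a, lab_b):.1f}%")   # RPD = 8.0%
```

Acceptance limits for RPD, where used, should come from the project's QAPP or data quality objectives, not from this sketch.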

In addition to collecting field QC samples, other QC procedures include sample storage, handling, and documentation protocols. These procedures are covered separately in the following sections. In addition, Chapter One of SW-846, entitled "Quality Control," contains guidance regarding both field and laboratory QA/QC. We also recommend reviewing the following for information on field QA/QC:

° EPA Guidance for Quality Assurance Project Plans (USEPA 1998a)

° Standard Practice for Generation of Environmental Data Related to Waste Management Activities: Quality Assurance and Quality Control Planning and Implementation (ASTM D 5283-92).

7.2.5 Sample Identification and Chain-of-Custody Procedures

You should identify samples for laboratory analysis with sample tags or labels. An example of a sample label is given in Figure 29. Typically, information on the sample label should include the sample identification code or number, date, time of collection, preservative used, media, location, initials of the sampler, and analysis requested. While not required, you may elect to seal each sample container with a custody seal (Figure 30).

You should use chain-of-custody procedures to record the custody of the samples. Chain-of-custody is the custody of samples from time of collection through shipment to analysis. A sample is in one's custody if:

° It is in the actual possession of an investigator
° It is in the view of an investigator, after being in their physical possession
° It is in the physical possession of an investigator, who secures it to prevent tampering
° It is placed in a designated secure area.

[Figure 30. Custody seal]

All sample sets should be accompanied by a chain-of-custody form. This record also serves as the sample logging mechanism for the laboratory sample custodian. Figure 31 illustrates the content of a chain-of-custody form. When the possession of samples is transferred, both the individual relinquishing the samples and the individual receiving the samples should sign, date, and note the time on the chain-of-custody document. If you use an overnight shipping service to transport the samples, record the air bill number on the chain-of-custody form. This chain-of-custody record represents the official documentation for all transfers of the sample custody until the samples have arrived at the laboratory. The original of the chain-of-custody record should accompany each shipment. A copy should be retained by a representative of the sampling team.
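The transfer-signature logic described above can be sketched as a simple record structure. The field names here are illustrative assumptions; a real chain-of-custody form (Figure 31) carries more information, such as the project identification, the sample inventory, and the air bill number:

```python
# Minimal sketch of a chain-of-custody transfer log: every transfer
# records who relinquished, who received, and when.
from datetime import datetime

class ChainOfCustody:
    def __init__(self, sample_ids):
        self.sample_ids = list(sample_ids)
        self.transfers = []

    def transfer(self, relinquished_by, received_by, when=None):
        """Record one custody transfer (both parties sign, with date/time)."""
        self.transfers.append({
            "relinquished_by": relinquished_by,
            "received_by": received_by,
            "datetime": when or datetime.now(),
        })

    def current_custodian(self):
        return self.transfers[-1]["received_by"] if self.transfers else None

coc = ChainOfCustody(["BLH01", "BLH02"])
coc.transfer("J. Sampler", "Overnight Carrier")
coc.transfer("Overnight Carrier", "Lab Sample Custodian")
print(coc.current_custodian())   # Lab Sample Custodian
```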

When sample custody is transferred between individuals, the samples or coolers containing the samples are sealed with a custody seal. This seal cannot be removed or broken without destroying it, so an intact seal provides an indicator that custody has been maintained.

EPA's Superfund Program has developed software called Field Operations and Records Management System (FORMS) II Lite™ that automates the printing of sample documentation in the field, reduces time spent completing sample collection and transfer documentation, and facilitates electronic capture of data prior to and during field sampling activities. For information on FORMS II Lite™, see http://www.epa.gov/superfund/programs/clp/f2lite.htm.

For additional information on chain-of-custody procedures, we recommend ASTM D 4840, Standard Guide for Sampling Chain-of-Custody Procedures.

[Figure 31. Chain-of-custody form]
7.2.6 Decontamination of Equipment and Personnel

Decontamination of sampling equipment refers to the physical and chemical steps taken to remove any chemical or material contamination. Equipment decontamination helps prevent sampling bias. All equipment that comes in contact with the sampled material should be free of components that could influence (contaminate) the true physical or chemical composition of the material. Besides the equipment used to collect the samples, any containers or equipment used for sample compositing or for field subsampling should be free of contamination.

Equipment decontamination also prevents cross-contamination of samples when the equipment is used to collect more than one sample. Disposable equipment or the use of dedicated equipment provides the most effective means of avoiding cross-contamination; however, the use of such equipment is not always practical.

You should decontaminate equipment to a level that meets the minimum requirements for your data collection effort. Your decontamination steps (e.g., use of solvents versus use of only soap and water), therefore, should be selected based on the constituents present, their concentration levels in the waste or materials sampled, and their potential to introduce bias in the sample analysis results if not removed from the sampling equipment. You should describe the project-specific decontamination procedures in your planning document for the sampling effort. In addition, items used to clean the equipment, such as bottle brushes, should be free of contamination.

The following procedure is an example of one you could use to decontaminate a sampling device to be used for collecting samples for trace organic or inorganic constituent analyses (from USEPA 1996b):

1. Clean the device with tap water and soap, using a brush if necessary to remove particulate matter and surface films.

2. Rinse thoroughly with tap water.

3. Rinse thoroughly with analyte- or organic-free water.

4. Rinse thoroughly with solvent. Do not solvent-rinse PVC or plastic items.

5. Rinse thoroughly with organic/analyte-free water, or allow the equipment to dry completely.

6. Remove the equipment from the decontamination area. Equipment stored overnight should be wrapped in aluminum foil and covered with clean, unused plastic.

The specifications for the cleaning materials are as follows (you should justify and document the use of substitutes):

° "Soap" should be a phosphate-free laboratory detergent such as Liquinox®. It must be kept in clean plastic, metal, or glass containers until used and poured directly from the container when in use.
° "Solvent" should be pesticide-grade isopropanol. It must be stored in the unopened original containers until used. It may be applied using a low-pressure nitrogen system fitted with a Teflon® nozzle, or using Teflon® squeeze bottles. For equipment highly contaminated with organics (such as oily waste), a laboratory-grade hexane may be a more suitable alternative to isopropanol.

° "Tap water" may be used from any municipal water treatment system. Use of an untreated potable water supply is not an acceptable substitute. Tap water may be kept in clean tanks, hand pressure sprayers, or squeeze bottles, or applied directly from a hose or tap.

° "Analyte-free water" (deionized water) is tap water treated by passing it through a standard deionizing resin column. At a minimum, it must contain no detectable heavy metals or other inorganic compounds as defined by a standard ICP (or equivalent) scan. It may be obtained by other methods as long as it meets the analytical criteria. Analyte-free water must be stored in clean glass, stainless steel, or plastic containers that can be closed prior to use. It can be applied from plastic squeeze bottles.

° "Organic/analyte-free water" is tap water that has been treated with activated carbon and deionizing units. A portable system to produce such water under field conditions is available. At a minimum, the water must meet the criteria for analyte-free water and must contain no detectable pesticides, herbicides, or extractable organic compounds, and no volatile organic compounds above minimum detectable levels as determined for a given set of analyses. Organic/analyte-free water obtained by other methods is acceptable, as long as it meets the analytical criteria. It must be stored in clean glass, Teflon®, or stainless steel containers. It may be applied using Teflon® squeeze bottles or with the portable system.

Clean the field equipment prior to field use. Designate a decontamination zone at the site and, if necessary, construct a decontamination pad at a location free of surface contamination. You should collect wastewater from decontamination (e.g., via a sump or pit) and remove it frequently for appropriate treatment or disposal. The pad or area should not leak contaminated water into the surrounding environment. You also should collect solvent rinses for proper disposal.

You should always handle field-cleaned equipment in a manner that prevents recontamination. For example, after decontamination but prior to use, store the equipment in a location away from the cleaning area and in an area free of contaminants. If it is not immediately reused, you should cover it with plastic or aluminum foil to prevent recontamination.

Decontamination will generate a quantity of wastes called investigation-derived waste (IDW). You should address the handling and disposal of IDW in your sampling plan. You must handle this material according to whether it is nonhazardous or is suspected of being, or known to be, hazardous. You should minimize the generation of hazardous IDW and keep it separated from nonhazardous IDW. For example, you should control the volume of spent solvents during equipment decontamination by applying the minimum amount of liquid necessary and capturing it separately from the nonhazardous washwater. For additional guidance on handling IDW, see Management of Investigation-Derived Wastes (USEPA 1992f).

Decontamination of personnel and their protective gear also is often necessary during hazardous waste sampling. This important type of decontamination protects personnel from chemical exposure and prevents cross-contamination when personnel change locations. The level or degree of such decontamination will depend on site-specific considerations, such as the health hazards posed by exposure to the sampled waste. You should address these decontamination procedures in your health and safety plan.

For additional information regarding decontamination, see ASTM D 5088, Standard Practice for Decontamination of Field Equipment Used at Nonradioactive Waste Sites. Another source of additional information is "Sampling Equipment Decontamination" (USEPA 1994f), issued by EPA's Environmental Response Team.

7.2.7 Health and Safety Considerations

Regulations published by the Occupational Safety and Health Administration (OSHA) at 29 CFR 1910.120 govern workers at hazardous waste sites and include requirements for training, equipment, medical monitoring, and other practices. Many sampling activities covered by this guidance may require compliance with OSHA's health and safety regulations. Specific guidance on worker health and safety is beyond the scope of this chapter; however, development and use of a project-specific health and safety plan may be required. It is the responsibility of the sampling team leader and others in charge to ensure worker safety.

Some important health and safety considerations follow:

° Field personnel should be up-to-date in their health and safety training.

° Field personnel should have a medical examination at the initiation of sampling activities and routinely thereafter, as appropriate and as required by the OSHA regulations. Unscheduled examinations should be performed in the event of an accident or suspected exposure to hazardous materials.

° Staff also should be aware of the common routes of exposure at a site and be instructed in the proper use of safety equipment and protective clothing. Safe areas should be designated for washing, drinking, and eating.

° To minimize the impact of an emergency situation, field personnel should be aware of basic first aid and have immediate access to a first aid kit.

The Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities (OSHA 1985, revised 1998) was jointly developed by the National Institute for Occupational Safety and Health (NIOSH), OSHA, the United States Coast Guard (USCG), and EPA. Its intended audience is those who are responsible for occupational safety and health programs at hazardous waste sites.
7.2.8 Sample Packaging and Shipping

During transport of waste samples, you should follow all State and Federal regulations governing environmental sample packaging and shipment, and ship according to U.S. Department of Transportation (DOT) and International Air Transport Association (IATA) regulations. Minimum guidelines for sample packaging and shipping procedures follow in the next subsections; however, the rules and regulations for sample packaging and shipping are complex, and for some samples and shipping situations the procedures outlined below may need to be exceeded.

7.2.8.1 Sample Packaging

You should package and label samples in an area free of contamination. You also should ship or transport samples to a laboratory within a time frame that meets recommended sample holding times for the respective analyses. Additional guidelines follow:

° Aqueous samples for inorganic analysis and volatile organic analysis may require chemical preservation. The specific preservation requirements will depend on the analytical method to be used.

° Make sure all lids/caps are tight and will not leak.

° Make sure sample labels are intact and covered with a piece of clear tape for protection.

° Enclose the sample container in a clear plastic bag and seal the bag. Make sure the sample labels are visible. If bubble wrap or other wrapping material will be placed around the labeled containers, write the sample number and fraction (e.g., "BLH01-VOCs") so that it is visible on the outside of the wrap, then place the wrapped container in a clear plastic bag and seal the bag.

° Make sure that all samples that need to be kept cold (4 ± 2 °C) have been thoroughly cooled before placing them in packing material, so that the packing material serves to insulate the cold. Change the ice prior to shipment as needed. Ideally, pack the cooled samples into shipping containers that have already been chilled. (Of course, these precautions are not necessary if none of the samples in the shipping container need to be kept cold.)

° Any soil/sediment samples suspected to be of medium/high concentration or containing dioxin must be enclosed in a metal can with a clipped or sealable lid (e.g., a paint can) to achieve double containment of those samples. Place suitable absorbent packing material around the sample container in the can. Make sure the sample is securely stored in the can and the lid is sealed. Label the outer metal container with the sample number and fraction of the sample inside.

° Use clean, waterproof metal or hard plastic ice chests or coolers that are in good repair for shipping samples.

° Remove any inapplicable previous shipping labels. Make sure any drain plugs are shut. Seal plugs shut on the inside and outside with a suitable tape, such as duct tape. Line the cooler with plastic (e.g., a large heavy-duty garbage bag) before inserting samples.

° To ship samples at 4 ± 2 °C, place double-bagged ice on top of the samples. Ice must be sealed in double plastic bags to prevent melting ice from soaking the packing material. Loose ice should not be poured into the cooler.

° Conduct an inventory of sample numbers, fractions, and containers when placing samples into the coolers. Check the inventory against the corresponding chain-of-custody form before sealing the cooler to make sure that all samples and containers are present.

° Pack the lined shipping containers with noncombustible absorbent packing material, such as vermiculite or rock wool. Place the packing material on the bottom of the shipping container (inside the plastic liner) and around sample bottles or metal cans to avoid breakage during shipment. Never use earth, ice, paper, or styrofoam to pack samples. Earth is a contaminant; melted ice may cause complications and allow the sample containers to bang together when the shipping container is moved; and styrofoam presents a disposal problem (it also may easily blow out of the shipping container at the site).

° For samples that need to be shipped at 4 ± 2 °C, place double-bagged ice on top of the samples and fill the remaining space with packing material. If sample bottles have been protected with packaging material such as bubble wrap, then some double-bagged ice or ice packs also may be placed between samples.

° Use tape to securely fasten the top of the plastic used to line the shipping container. It is a good idea to then place a completed custody seal around the top of the bag that contains the samples, in case the outer seals placed across the cooler lid are inadvertently damaged during shipment.

° Enclose all sample documentation (i.e., chain-of-custody forms and cooler return shipping documents) in a waterproof plastic bag, and tape the bag to the underside of the cooler lid. This documentation should address all samples in the cooler, but not samples in any other cooler.

° If more than one cooler is being used, place separate sample documentation in each cooler. Instructions for returning the cooler should be documented inside the cooler lid. Write a return name and address for the sample cooler on the inside of the cooler lid in permanent ink to ensure return of the cooler.

° Tape the cooler shut using strapping tape over the hinges. Place completed custody seals across the top and sides of the cooler lid so that the lid cannot be opened without breaking the seal.

° Place clear tape over the seal to prevent inadvertent damage to the seal during shipment. Do not place clear tape over the seals in a manner that would allow the seals to be lifted off with the tape and then reaffixed without breaking the seal.

For additional detailed guidance on sample documentation, packaging, and shipping, we recommend the Contract Laboratory Program (CLP) Guidance for Field Samplers - Draft Final (USEPA 2001g).

7.2.8.2 Sample Shipping

In general, samples of drinking water, most ground waters and ambient surface waters, soil, sediment, treated waste waters, and other low-concentration samples can be shipped as environmental samples; however, high-concentration waste samples may require shipment as dangerous goods (not as "hazardous waste"). Note that RCRA regulations specifically exempt samples of hazardous waste from RCRA waste identification, manifest, permitting, and notification requirements (see 40 CFR §261.4(d)). The shipment of samples to and from a laboratory, however, must comply with U.S. DOT, U.S. Postal Service, or any other applicable shipping requirements. If a sample is a hazardous waste, once received at the laboratory it must be managed as a hazardous waste.

In recent years, commercial overnight shipping services have adopted the regulations of the IATA for shipment of dangerous goods by air. The IATA Dangerous Goods Regulations contain all provisions mandated by the International Civil Aviation Organization and all rules universally agreed to by airlines to correctly package and safely transport dangerous goods by air. Contact IATA for a copy of the IATA Dangerous Goods Regulations and for assistance in locating suppliers of specialized packaging for dangerous goods.

When shipping samples, perform the following activities:

° Clearly label the cooler and fill out appropriate shipping papers.

° Place return address labels clearly on the outside of the cooler.

° If more than one cooler is being shipped, mark each cooler as "1 of 2," "2 of 2," etc.

° Ship samples through a commercial carrier. Use appropriate packaging, mark and label packages, and fill out all required government and commercial carrier shipping papers according to DOT and IATA commercial carrier regulations.

° Ship all samples by overnight delivery in accordance with DOT and IATA regulations. For information on shipping dangerous goods, visit the International Air Transport Association (IATA) Dangerous Goods Information Online at http://www.iata.org/cargo/dg/index.htm or call 1-800-716-6326.
7.3 Using Sample Homogenization, Splitting, and Subsampling Techniques

7.3.1 Homogenization Techniques

The objective of homogenization (mixing) is to minimize grouping and segregation of particles so they are randomly distributed within the sample. While homogenization can reduce grouping and segregation of particles, it will not eliminate them and will not make the material "homogeneous." If homogenization is successful, subsamples of the homogenized material will show less variability than if the material were not homogenized. Homogenization, combined with a composite sampling strategy, can be an efficient method for improving the accuracy and precision of sampling of particulate material (Jenkins, et al. 1996). Homogenization can be applied to solids, liquids, slurries, and sludges.
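The claim that successful homogenization reduces subsample variability can be illustrated with a small simulation using synthetic particle values: subsamples of a segregated lot vary widely, while subsamples of the same lot after random mixing do not. This is only an illustration of the principle, not a model of any real waste stream:

```python
# Illustrative simulation: homogenization (here, random shuffling)
# reduces the spread of subsample means. All values are synthetic.
import random
import statistics

random.seed(0)

# A segregated "lot": high-concentration particles settled at one end.
lot = [100.0] * 200 + [1.0] * 800

def subsample_means(material, n_subsamples=20, size=50):
    """Take contiguous subsamples and return each subsample's mean."""
    return [
        statistics.mean(material[i * size:(i + 1) * size])
        for i in range(n_subsamples)
    ]

segregated_sd = statistics.stdev(subsample_means(lot))

mixed = lot[:]
random.shuffle(mixed)            # a stand-in for "dynamic" homogenization
mixed_sd = statistics.stdev(subsample_means(mixed))

print(f"segregated: SD of subsample means = {segregated_sd:.1f}")
print(f"mixed:      SD of subsample means = {mixed_sd:.1f}")
```

The spread of subsample means drops sharply after mixing, which is exactly the reduced variability the paragraph above describes.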

Pitard (1993) recognizes two processes for homogenization:

Stationary processes - in which the material is not mixed but is redistributed so that any correlation between the characteristics of individual fragments or particles is lost or minimized. An example of this process is the collection of many small increments to form an individual sample (ideally we would pick many individual particles at random to form the sample, but this is not possible).

Dynamic processes - in which the material is mechanically mixed to remove or minimize correlation between the characteristics of a fragment or particle and its position within the sample. Examples of this process include mechanical mixing within a container and use of magnetic stirrers in a beaker.

Note that the benefits of homogenization may be temporary because gravity-induced segregation can occur during shipment, storage, and handling of samples. For this reason, consider carrying out homogenization (mixing) immediately prior to analysis.

Some homogenization techniques work better than others. The strengths and limitations of homogenization equipment and procedures (cone and quartering, riffle splitters, rotary splitters, multiple cone splitters, and V-blenders) have been reviewed in the literature by Pitard (1993), Schumacher, et al. (1991), ASTM (Standard D 6051-96), and others. The preferred techniques for use within the laboratory follow:

° Riffling (see also Section 7.3.2)
° Fractional shoveling (see also Section 7.3.2)
° Mechanical mixing
° Cone and quartering
° Magnetic stirrers (e.g., to homogenize the contents of an open beaker)
° V-blenders.

Fractional shoveling and mechanical mixing also can be used in the field. Note that some techniques for homogenization, such as riffling and fractional shoveling, also are used for splitting and subsampling. Pitard (1993) discourages the use of "sheet mixing" (also called "mixing square") and vibratory spatulas because they tend to segregate particles of different density and size.
Figure 32. Fractional shoveling as a sample splitting method (after Pitard 1993)
7.3.2 Sample Splitting

Splitting is employed when a field sample is significantly larger than the required analytical sample. The goal of splitting is to reduce the mass of the retained sample and obtain an aliquot of the field sample that reflects the average properties of the entire field sample. It is often necessary to repeat the splitting process a number of times to achieve a sufficient reduction in mass for analytical purposes.

Splitting can be used to generate a reduced-mass aliquot that can be analyzed in its entirety, or a much reduced and homogenized mass from which an analytical sample or subsample can be collected. ASTM's Standard Guide for Laboratory Subsampling of Media Related to Waste Management Activities (ASTM D 6323-98) lists and discusses a variety of splitting equipment (such as sectorial splitters and riffle splitters) and splitting procedures (such as cone and quartering and the alternate scoop method). Gerlach, et al. (2002) also evaluated sample splitting methods (riffle splitting, paper cone riffle splitting, fractional shoveling, coning and quartering, and grab sampling) and found that riffle splitting methods performed the best.

A simple alternative to riffle splitting a sample of solid media is a technique called "fractional shoveling." To perform fractional shoveling, deal out small increments from the larger sample in sequence into separate piles; randomly select one of the piles and retain it as the subsample (or retain more than one if a portion of the sample is to be "split" with another party and/or retained for archive purposes); and reject the others (see Figure 32).
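As an illustration of the dealing procedure just described, the following Python sketch splits a list of notional increments into piles and randomly retains one pile as the subsample. The function name, pile count, and increment values are our own, chosen only for the example; actual fractional shoveling operates on physical increments of roughly equal size.

```python
import random

def fractional_shoveling(sample, n_piles=5, seed=None):
    """Deal increments out in sequence into n_piles separate piles,
    then randomly retain one pile as the subsample and reject the rest."""
    rng = random.Random(seed)
    piles = [[] for _ in range(n_piles)]
    for i, increment in enumerate(sample):
        piles[i % n_piles].append(increment)   # deal in sequence
    keep = rng.randrange(n_piles)              # random pile selection
    subsample = piles[keep]
    rejected = [p for j, p in enumerate(piles) if j != keep]
    return subsample, rejected

# The split can be repeated until the retained mass is small enough.
sample = list(range(100))                      # 100 notional increments
subsample, _ = fractional_shoveling(sample, n_piles=5, seed=1)
print(len(subsample))                          # each pile holds 1/5 of the increments
```

Repeating the call on the retained pile models the iterative mass reduction described in Section 7.3.2.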

7.3.3 Subsampling

The size of the sample submitted to the laboratory (either an individual sample or a composite) by field personnel typically far exceeds that required for analysis. Consequently, subsampling is needed. A subsample is defined as "a portion of material taken from a larger quantity for the purpose of estimating properties or the composition of the whole sample" (ASTM D 4547-98). Taking a subsample may be as simple as collecting the required mass from a larger mass, or it may involve one or more preparatory steps such as grinding, homogenization, and/or splitting of the larger mass prior to removal of the subsample.

Specific procedures for maintaining sample integrity (e.g., minimizing fundamental error) during splitting and subsampling operations typically are not addressed in quality assurance, sampling, or analytical plans, and error may be introduced unknowingly in subsampling and sample preparation. Many environmental laboratories do not have adequate SOPs for subsampling; therefore, it is important for the data users to provide the laboratory personnel clear instruction if any special subsampling or sample handling procedures are needed (such as instructions on mixing of the sample prior to analysis, removing particles greater than a certain size, analyzing phases separately, etc.). If proper subsampling procedures are not specified in planning documents, SOPs, or documents shipped with the samples, it may be difficult to assess the usability of the results.

The following sections provide general guidance on obtaining subsamples of liquids, mixtures of liquids and solids, and soils and solid media. For additional guidance and detailed procedures, see Standard Guide for Composite Sampling and Field Subsampling for Environmental Waste Management Activities (ASTM D 6051-96) and Standard Guide for Laboratory Subsampling of Media Related to Waste Management Activities (ASTM D 6323-98).

7.3.3.1 Subsampling Liquids

In the case of subsampling a liquid, special precautions may be warranted if the liquid contains suspended solids and/or comprises multiple liquid phases. In practice, samples may contain solids and/or separate phases that are subject to gravitational action (Gy 1998). Even a liquid that appears clear (absent of solids and without iridescence) may not be "homogeneous."

Subsampling of liquids (containing solids and/or in multiple phases) can be addressed using one of two possible approaches:

• Mixing the sample such that all phases are homogenized, and then taking a subsample (using a pipette, for example)

• Allowing all of the phases to separate, followed by subsampling and analysis of each phase separately.

Of course, the characteristics of the waste and the type of test must be considered. For example, mixing of multiphasic wastes to be analyzed for volatiles should be avoided due to the potential loss of constituents. Some multiphasic liquid wastes can form an emulsion when mixed. Others, in spite of mixing, will quickly separate back into distinct phases.

7.3.3.2 Subsampling Mixtures of Liquids and Solids

If the sample is a mixture of liquids and solids, subsampling usually requires that the phases be separated. Each phase is then subsampled separately. Subsampling of the liquid phase can be accomplished as described above, while subsampling of the solid phase should be done according to sampling theory, as summarized below.

7.3.3.3 Subsampling Soils and Solid Media

To correctly subsample soil or solid media, use sampling tools and techniques that minimize delimitation and extraction error. If the particles in the sample are too coarse to maintain fundamental error within desired limits, it may be necessary to perform a series of steps of particle-size reduction followed by subsampling (see Appendix D). If the field sample mass is equal to or less than the specified analytical size, the field sample can be analyzed in its entirety. If the mass of the field sample is greater than the specified analytical sample size, subsampling will be required.

One possible alternative to particle-size reduction prior to subsampling is to simply remove the coarse particles (e.g., via a sieve or visually) from the sample. This selective removal technique is not recommended in situations in which the larger particles contribute to the overall concentration of the constituent of concern in the waste. In other words, do not remove the large particles if the constituents of concern tend to be concentrated in the large particles relative to the smaller particles.

Figure 33. Example of correctly designed device for subsampling (flat-bottom spatula). Flat bottom and vertical side walls minimize increment delimitation error.

If the largest particle size of the field sample exceeds the allowable size for maintaining the fundamental error specified by the DQO and the analyte of interest is volatile, it may be necessary to analyze the sample as is and accept a large fundamental error. Guidance on handling VOCs in samples can be found in Section 6.3.4 and in ASTM Standard D 4547-98.

The Standard Guide for Laboratory Subsampling of Media Related to Waste Management Activities (ASTM D 6323-98) lists a variety of equipment for performing particle-size reduction (e.g., cutting mills, jar mills, disc mills, dish and puck mills, mortar grinders, and jaw crushers) and tabulates their uses and limitations.

The techniques discussed below are most relevant to subsampling of solid particulate matter for analysis of nonvolatile constituents. Mason (1992, pages 5-7) provides a field procedure that can be used to reduce the volume of a field soil sample for submission to the laboratory.

The issues regarding the subsampling of particulate-containing materials are identical to those considered when collecting the original field samples and are as follows:

• The tool used to collect the analytical sample must be correct and not discriminate against any portion of the sample (in other words, the tool should not introduce increment delimitation and increment extraction errors).

• The mass of the subsample must be enough to accommodate the largest of the particles contained within the parent sample (to reduce fundamental error).

• The sample mass and the manner in which it is collected must accommodate the short-term heterogeneity within the field sample (to reduce grouping and segregation error).

The sampling tool must be constructed such that its smallest dimension is at least three times greater than the largest particle size contained within the material being subsampled. The construction of the sampling tool must be such that it does not discriminate against certain areas of the material being sampled. For example, Pitard (1993) argues that all scoops for subsampling should be rectangular or square in design with flat bottoms, as opposed to having curved surfaces (Figure 33).

Figure 34. Correct (a) and incorrect (b) laboratory techniques for obtaining subsamples of granular solid media ((a) modified after Pitard 1993)

Pitard (1993) and ASTM D 6323-98 suggest subsampling from relatively flat, elongated piles using a transversal subsampling technique that employs a sampling scoop or spatula and a flat working surface (Figure 34(a)). The objective is to convert the sampling problem to a one-dimensional approach. Specifically, Pitard (1993) recommends the following procedure:

• Empty the sample from the sample container onto a smooth and clean surface or appropriate material.

• Do not try to homogenize the sample, as this may promote segregation of particles.

• Reduce the sample by using the fractional shoveling technique (Figure 32) until a sample 5 to 10 times larger than the analytical sample is obtained.

• Shape the remaining material into an elongated pile with uniform width and thickness (Figure 34(a)).

• Take increments all across the pile through the entire thickness.

• Reshape the pile perpendicular to its long axis, and continue to take increments across the pile until the appropriate sample weight is reached.

Fractional shoveling and alternate scoop techniques alone (Figure 32) also can be used to generate subsamples.

When using these techniques, several stages or iterations of subsampling followed by particle-size reduction may be needed to minimize fundamental error (also see Appendix D). At each stage, the number of increments should be at least 10 and preferably 25 to control grouping and segregation (short-term heterogeneity) within the sample. In the final stage, however, where very small analytical samples are required, the number of increments required will be much less.

The subsampling procedures described above offer a more correct and defensible alternative to an approach to subsampling in which the analyst simply opens the sample jar or vial and removes a small increment from the top for preparation and analysis (Figure 34(b)).
Figure 35. Elements of the quality assurance assessment process (modified after USEPA 1998a)

8 ASSESSMENT: ANALYZING AND INTERPRETING DATA
This section presents guidance for the assessment of sampling and analytical results. In performing data assessment, evaluate the data set to determine whether the data are sufficient to make the decisions identified in the DQO Process. The data assessment process includes (1) sampling assessment and analytical assessment, and (2) data quality assessment (DQA) (Figure 35), and follows a series of logical steps to determine if the data were collected as planned and to reach conclusions about a waste relative to RCRA requirements.

At the end of the process, EPA recommends reconciliation with the DQOs to ensure that they were achieved and to decide whether additional data collection activities are needed.

8.1 Data Verification and Validation

Data verification and validation are performed to ensure that the sampling and analysis protocols specified in the QAPP or WAP were followed and that the measurement systems performed in accordance with the criteria specified in the QAPP or WAP. The process is divided into two parts:

• sampling assessment (Section 8.1.1), and
• analytical assessment (Section 8.1.2).

Guidance on analytical assessment is provided in Chapter One of SW-846 and in the individual analytical methods. Additional guidance can be found in Guidance on Environmental Data Verification and Data Validation, EPA QA/G-8, published by EPA's Office of Environmental Information (USEPA 2001c). For projects generating data for input into risk assessments, see EPA's Guidance for Data Usability in Risk Assessment, Final (USEPA 1992g).

8.1.1 Sampling Assessment

Sampling assessment is the process of reviewing field sampling and sample handling methods to check conformance with the requirements specified in the QAPP. Sampling assessment activities include a review of the sampling design, sampling methods, documentation, sample handling and custody procedures, and preparation and use of quality control samples.
The following types of information are useful in assessing the sampling activity:

• Copies of the sampling plan, QAPP, and SOPs.

• Copies of logbooks, chain-of-custody records, bench sheets, well logs, sampling sequence logs, field instrument calibration records and performance records, and/or other records (including electronic records such as calculations) that describe and/or record all sampling operations, observations, and results associated with samples (including all QC samples) while in the custody of the sampling team. Records/results from the original sampling and any resampling, regardless of reason, should be retained. Also, retain copies of the shipping manifest and excess sample disposition (disposal) records describing the ultimate fate of any sample material remaining after submission to the laboratory.

• Copies of all records/comments associated with the sample team review of the original data, senior staff review, and QA/QC review of the sampling activity. Copies of any communication (telephone logs, faxes, e-mail, other records) between the sampling team and the customer dealing with the samples and any required resampling or reporting should be provided.

The following subsections outline the types of sampling information that should be assessed.

8.1.1.1 Sampling Design

Review the documentation of field activities to check if the number and type of samples called for in the sampling plan were, in fact, obtained and collected from the correct locations. Perform activities such as those described below:

• Sampling Design: Document any deviations from the sampling plan made during the field sampling effort and state what impact those modifications might have on the sampling results.

• Sample Locations/Times: Confirm that the locations of the samples in time or space match those specified in the plan.

• Number of Samples: Check for completeness in the sampling in terms of the number of samples obtained compared to the number targeted. Note the cause of any deficiencies, such as structures covering planned locations, limited access due to unanticipated events, samples lost in shipment or in the laboratory, etc.

• Discrete versus Composite Samples: If composite sampling was employed, confirm that each component sample was of equal mass or volume. If not, determine if sufficient information is presented to allow adjustments to any calculations made on the data. Both field and laboratory records should be reviewed because compositing can occur at either location.
8.1.1.2 Sampling Methods

Details of how a sample was obtained from its original time/space location are important for properly interpreting the measurement results. Review the selection of sampling and ancillary equipment and procedures (including equipment decontamination) for compliance with the QAPP and sampling theory. Acceptable departures (for example, alternate equipment) from the QAPP, and the action to be taken if the requirements cannot be satisfied, should be specified for each critical aspect. Note potentially unacceptable departures from the QAPP and assess their potential impact on the quality and usefulness of the data. Comments from field surveillance on deviations from written sampling plans also should be noted.

Sampling records should be reviewed to determine if the sample collection and field processing were appropriate for the analytes being measured. For example, sampling for volatiles analysis poses special problems due to the likely loss of volatiles during sample collection. Also review the determination of the appropriate "sample support": whether it was obtained correctly in the field, whether any large particles or fragments were excluded from the sample, and whether any potential biases were introduced.

Laboratory subsampling and sample preparation protocols should be examined for the same types of potential bias as the field procedures. Where potential biases are found, they should be discussed in the assessment report.

8.1.1.3 Sample Handling and Custody Procedures

Details of how a sample is physically treated and handled between its original site or location and the actual measurement site are extremely important. Sample handling activities should be reviewed to confirm compliance with the QAPP or WAP for the following areas:

• Sample containers
• Preservation (physical and chemical)
• Chain-of-custody procedures and documentation
• Sample shipping and transport
• Conditions for storage (before analysis)
• Holding times.

8.1.1.4 Documentation

Field records generally consist of bound field notebooks with prenumbered pages, sample collection forms, sample labels or tags, sample location maps, equipment maintenance and calibration forms, chain-of-custody forms, sample analysis request forms, and field change request forms. Documentation also may include maps used to document the location of sample collection points or photographs or video to record sampling activities.

Review field records to verify they include the appropriate information to support technical interpretations, judgments, and discussions concerning project activities. Records should be legible, identifiable, and retrievable, and protected against damage, deterioration, or loss. Especially note any documentation of deviations from SOPs and the QAPP.

8.1.1.5 Control Samples

Assess whether the control samples were collected or prepared as specified in the QAPP or WAP. Control samples include blanks (e.g., trip, equipment, and laboratory), duplicates, spikes, analytical standards, and reference materials that are used in different phases of the data collection process, from sampling through transportation, storage, and analysis. There are many types of control samples, and the appropriate type and number of control samples to be used will depend on the data quality specifications.

See Section 7.2.4 for guidance on the type of control samples for RCRA waste-testing programs. Additional guidance on the preparation and use of QC samples can be found in the following publications:

• Test Methods for Evaluating Solid Waste, SW-846 (USEPA 1986a), Chapter One
• EPA Guidance for Quality Assurance Project Plans, EPA QA/G-5 (USEPA 1998a), Appendix D
• Contract Laboratory Program (CLP) Guidance for Field Samplers - Draft Final (USEPA 2001g), Section 3.1.1.

8.1.2 Analytical Assessment

Analytical assessment includes an evaluation of analytical and method performance and supporting documentation relative to the DQOs. Proper data review is necessary to minimize decision errors caused by out-of-control laboratory processes or by calculation or transcription errors. The level and depth of analytical assessment is determined during the planning process and is dependent on the types of analyses performed and the intended use of the data.

Analytical records needed to perform the assessment of laboratory activities may include the following:

• Contract Statement of Work requirements
• SOPs
• QAPP or WAP
• Equipment maintenance documentation
• Quality assurance information on precision, bias, method quantitation limits, spike recovery, surrogate and internal standard recovery, laboratory control standard recovery, checks on reagent purity, and checks on glassware cleanliness
• Calibration records
• Traceability of standards/reagents (which provide checks on equipment cleanliness and laboratory handling procedures)
• Sample management records
• Raw data
• Correspondence
• Logbooks and documentation of deviation from procedures.

If data gaps are identified, then the assessor should prepare a list of missing information for correspondence and discussion with the appropriate laboratory representative. At that time, the laboratory should be requested to supply the information or to attest that it does not exist in any form.

8.1.2.1 Analytical Data Verification

The term data verification refers to confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. Data verification is the process of evaluating the completeness, correctness, and conformance/compliance of a specific data set against the method, procedural, or contractual requirements. The goal of data verification is to ensure that the data are what they purport to be, that is, that the reported results reflect what was actually done, and to document that the data fulfill specific requirements. When deficiencies in the data are identified, those deficiencies should be documented for the data user's review and, where possible, resolved by corrective action (USEPA 2001c).

Data verification may be performed by personnel involved with the collection of samples or data, by those involved with the generation of analytical data, and/or by an external data verifier. The verification process normally starts with a list of requirements that apply to an analytical data package. It compares the laboratory data package to the requirements and produces a report that identifies those requirements that were and were not met. Requirements that were not met can be referred to as exceptions and may result in flagged data. Examples of the types of exceptions that are found and reported are listed below:

• Failure to analyze samples within the required holding times

• Required steps not carried out by the laboratory (e.g., failure to maintain sample custody, lack of proper signatures, etc.)

• Procedures not conducted at the required frequency (e.g., too few blanks, duplicates, etc.)

• Contamination found in storage, extraction, or analysis of blanks

• Procedures that did not meet pre-set acceptance criteria (poor laboratory control, poor sample matrix spike recovery, unacceptable duplicate precision, etc.).
The verification report should detail all exceptions found with the data packages. If the laboratory was able to provide the missing information or a suitable narrative explanation of the exceptions, these should be made part of the report and included in the data package for use by the people who determine the technical defensibility of the data.

8.1.2.2 Analytical Data Validation (Evaluation)

The term data validation (also known as "evaluation") refers to the confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use are fulfilled. Data validation is an analyte- and sample-specific process that extends the evaluation of data beyond method, procedural, or contractual compliance (i.e., data verification) to determine the analytical quality of a specific data set. Data validation criteria are based upon the measurement quality objectives developed in the QAPP or similar planning document, or presented in the sampling or analytical method. Data validation includes a determination, where possible, of the reasons for any failure to meet method, procedural, or contractual requirements, and an evaluation of the impact of such failure on the overall data set (USEPA 2001c).

Data validation includes inspection of the verified data and both field and analytical laboratory data verification documentation; a review of the verified data to determine the analytical quality of the data set; and the production of a data validation report and, where applicable, qualified data. A focused data validation may also be required as a later step. The goals of data validation are to evaluate the quality of the data, to ensure that all project requirements are met, to determine the impact on data quality of those requirements that were not met, and to document the results of the data validation and, if performed, the focused data validation. The main focus of data validation is determining data quality in terms of accomplishment of measurement quality objectives.

As in the data verification process, all planning documents and procedures not only must exist, but they should also be readily available to the data validators. A data validator's job cannot be completed properly without knowledge of the specific project requirements. In many cases, the field and analytical laboratory documents and records are validated by different personnel. Because the data validation process requires knowledge of the type of information to be validated, a person familiar with field activities usually is assigned to the validation of the field documents and records. Similarly, a person with knowledge of analytical laboratory analysis, such as a chemist (depending on the nature of the project), usually is assigned to the validation of the analytical laboratory documents and records. The project requirements should assist in defining the appropriate personnel to perform the data validation (USEPA 2001c).

The personnel performing data validation should also be familiar with the project-specific data quality indicators (DQIs) and associated measurement quality objectives. One of the goals of the data validation process is to evaluate the quality of the data. In order to do so, certain data quality attributes are defined and measured. DQIs (such as precision, bias, comparability, sensitivity, representativeness, and completeness) are typically used as expressions of the quality of the data (USEPA 2001c).

The outputs that may result from data validation include validated data, a data validation report, and a focused validation report. For detailed guidance on data validation, see Chapter One of SW-846 and Guidance on Environmental Data Verification and Data Validation, EPA QA/G-8 (USEPA 2001c).

Figure 36. The DQA Process (modified from USEPA 2000d)

8.2 Data Quality Assessment

Data quality assessment (DQA) is the scientific and statistical evaluation of data to determine if the data are of the right type, quality, and quantity to support their intended purpose (USEPA 2000d). The focus of the DQA process is on the use of statistical methods for environmental decision making, though not every environmental decision necessarily must be made based on the outcome of a statistical test (see also Section 3). If the sampling design established in the planning process requires estimation of a parameter or testing of a hypothesis, then the DQA process can be used to evaluate the sample analysis results.

The DQA process described in this section includes five steps: (1) reviewing the DQOs and study design, (2) preparing the data for statistical analysis, (3) conducting a preliminary review of the data and checking statistical assumptions, (4) selecting and performing the statistical test, and (5) drawing conclusions from the data (Figure 36).

Detailed guidance on the statistical analysis of data can be found in Appendix F. Additional guidance can be found in Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d). A list of software tools to help you implement the DQA is provided in Appendix H.

8.2.1 Review the DQOs and the Sampling Design

Review the DQO outputs to ensure that they are still applicable. Refer back to Sections 4 and 5 of this document for more information on the DQO Process, or see USEPA 2000a or 2000b. A clear understanding of the original project objectives, as determined during the systematic planning process, is critical to selecting the appropriate statistical tests (if needed) and interpreting the results relative to the applicable RCRA regulatory requirements.

8.2.2 Prepare Data for Statistical Analysis

After data validation and verification and before the data are available in a form for further analysis, several intermediate steps usually are required. For most situations, EPA recommends you prepare the data in computer-readable format. Steps in preparing data for statistical analysis are outlined below (modified from Ott 1988):

1. Receive the verified and validated data source from the QA reports. Data are supplied to the user in a variety of formats and degrees of readiness for use, depending on the size and complexity of the study and the types of analyses requested. Most laboratories supply a QA evaluation package that includes the verification/validation review, a narrative, tabulated summary forms (including the results of analyses of field samples, laboratory standards, and QC samples), copies of logbook pages, and copies of chain-of-custody records. From this information, you can create a data base for statistical analysis.

2. Create a data base from the verified and validated data source. For most studies in which statistical analyses are scheduled, a computer-readable data base is the most efficient method for managing the data. The steps required to create the data base and the format used will depend on the software systems used to perform the analysis. For example, the data base may be as simple as a string of concentration values for a single constituent input into a spreadsheet or word processor (such as required for use of EPA's DataQUEST software (USEPA 1997b)), or it may be more complex, requiring multiple and related data inputs, such as sample number, location coordinates, depth, date and time of collection, constituent name and concentration, units of measurement, test method, quantitation limit achieved, QC information, etc.

If the data base is created via manual data entry, the verified and validated data should be checked for legibility. Any questions pertaining to illegible information should be resolved before the data are entered. Any special coding considerations, such as indicating values reported as "nondetect," should be specified in a coding guide or in the QAPP. For very large projects, it may be appropriate to prepare a separate detailed data management plan in advance.

3. Check and edit the data base. After creation of the data set, the data base should be checked against the data source to verify accurate data entry and to correct any errors discovered. Even if the data base is received from the laboratory in electronic format, it should be checked for obvious errors, such as unit errors, decimal errors, missing values, and missing quantitation limits.

4. Create data files from the data base. From the original data files, work files are created for use within the statistical software package. This step could entail separating data by constituent and by DQO decision unit and separating any QA/QC data from the record data. When creating the final data files for use in the statistical software, be sure to use a file naming and storage convention that facilitates easy retrieval for future use, reference, or reporting.
Steps in Preparing Data for Statistical Analysis

1. Receive the verified and validated data source.
2. Create a data base from the verified and validated data source.
3. Check and edit the data base.
4. Create data files from the data base.
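As a minimal sketch of steps 2 through 4 above, the snippet below builds a simple data base of records and splits it into per-constituent work files. The record fields, constituent names, and file-naming convention are illustrative assumptions, not a format prescribed by this guidance.

```python
import csv
from collections import defaultdict

# Hypothetical verified and validated records; the field names are
# illustrative only (step 2: create the data base).
records = [
    {"sample_id": "S-01", "constituent": "lead",    "conc_mg_L": 0.82},
    {"sample_id": "S-02", "constituent": "lead",    "conc_mg_L": 1.10},
    {"sample_id": "S-01", "constituent": "cadmium", "conc_mg_L": 0.05},
]

# Step 4: separate the data by constituent into work files for the
# statistical software.
by_constituent = defaultdict(list)
for rec in records:
    by_constituent[rec["constituent"]].append(rec)

for constituent, rows in by_constituent.items():
    # The file-naming convention is an assumption; choose one that
    # facilitates easy retrieval for future use, reference, or reporting.
    fname = f"workfile_{constituent}.csv"
    with open(fname, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

In practice the step-3 checks (units, decimals, missing values) would be applied to `records` before the work files are written.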
8.2.3 Conduct Preliminary Review of the Data and Check Statistical Assumptions

Many statistical tests and procedures require that certain assumptions be met for their use. Failure to satisfy these assumptions can result in biased estimates of the parameter of interest; therefore, it is important to conduct preliminary analyses of the data to learn about their characteristics. EPA recommends that you compute statistical quantities, determine the proportion of the data reported as "nondetect" for each constituent of concern, check whether the data exhibit a normal distribution, and then determine if there are any "outliers" that deserve a closer look. The outputs of these activities are used to help select and perform the appropriate statistical tests.

8.2.3.1 Statistical Quantities

To help "visualize" and summarize the data, calculate basic statistical quantities such as the:

° Mean
° Maximum
° Percentiles
° Variance
° Standard deviation
° Coefficient of variation.

Calculate the quantities for each constituent of concern. Example calculations of the mean, variance, standard deviation, and standard error of the mean are given in Section 3. Detailed guidance on the calculation of statistical quantities is provided in Chapter Two of EPA's QA/G-9 guidance document (USEPA 2000d). These quantities can be computed easily using EPA's DataQUEST software (USEPA 1997b, see also Appendix H) or any similar statistical software package.
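The quantities listed above can be computed with any statistical package; the following sketch uses only the Python standard library on an illustrative data set (the values are not from this guidance).

```python
import statistics

# Illustrative concentration results (mg/L) for one constituent of concern.
x = [1.2, 0.8, 1.5, 2.1, 0.9, 1.1, 1.8, 1.3]

n = len(x)
mean = statistics.mean(x)
maximum = max(x)
variance = statistics.variance(x)        # sample variance (n - 1 divisor)
std_dev = statistics.stdev(x)            # sample standard deviation
cv = std_dev / mean                      # coefficient of variation
std_err = std_dev / n ** 0.5             # standard error of the mean
p90 = statistics.quantiles(x, n=10)[-1]  # 90th percentile (one convention)
```

The same calculations would be repeated for each constituent of concern.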

When calculating statistical quantities, determine which data points were reported as below a limit of detection or quantitation, known as "nondetects" (NDs). See also Section 8.2.4.2 ("Treatment of Nondetects").

8.2.3.2 Checking Data for Normality

Check the data sets for normality by using graphical methods, such as histograms, box and whisker plots, and normal probability plots (see also Section 3.1.3), or by using numerical tests, such as the Shapiro-Wilk test for normality (see Appendix F). Table 11 provides a summary of recommended methods. Detailed guidance on the use of graphical and statistical methods can be found in USEPA 1989b, 1992b, 1997b, and 2000d.
Table 11. Recommended Graphical and Statistical Methods for Checking Distributional Assumptions

Graphical Methods:

° Histograms and frequency plots - Provide a visual display of the probability or frequency distribution. See USEPA 2000d; construct via EPA's DataQUEST software (USEPA 1997b) or use a commercial software package.

° Normal probability plot - Provides a visual display of deviation from expected normality. See USEPA 2000d; construct via EPA's DataQUEST software (USEPA 1997b) or use a commercial software package.

° Box and whisker plot - Provides a visual display of potential "outliers" or extreme values. See USEPA 2000d; construct via EPA's DataQUEST software (USEPA 1997b) or use a commercial software package.

Numerical Tests for Normality:

° Shapiro-Wilk Test - Use for sample sizes of 50 or fewer. See the procedure in Appendix F, Section F.1.2. This test also can be performed using EPA's DataQUEST software (USEPA 1997b).

° Filliben's Statistic - Use for sample sizes greater than 50. See USEPA 2000d. This test can be performed using EPA's DataQUEST software (USEPA 1997b).

Graphical methods allow you to visualize the central tendency of the data, the variability in the data, the location of extreme data values, and any obvious trends in the data. For example, a symmetrical "mound" shape of a histogram is an indicator of an approximately normal distribution. If a normal probability plot is constructed on the data (see Figure 5 in Section 3.1.3), a straight-line plot usually is an indicator of normality. (Note that interpretation of a probability plot depends on the method used to construct it. For example, in EPA's DataQUEST software, normally distributed data will form an "S"-shaped curve rather than a straight line on a normal probability plot.)

The Shapiro-Wilk test is recommended as a superior method for testing normality of the data. The specific method for implementing the Shapiro-Wilk test is provided in Appendix F. The method also is described in Gilbert (1987) and in EPA's guidance on the statistical analysis of groundwater monitoring data (USEPA 1992b), and it can be performed with EPA's DataQUEST software or other commercially available statistical software.
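As an illustration, the Shapiro-Wilk test is implemented in common statistical libraries; the sketch below uses SciPy's implementation on illustrative data (the data set and the 0.05 significance level are assumptions, not values from this guidance).

```python
from scipy import stats

# Illustrative data set (n <= 50, so the Shapiro-Wilk test is appropriate
# per Table 11).
x = [1.2, 0.8, 1.5, 2.1, 0.9, 1.1, 1.8, 1.3, 1.0, 1.6]

w_stat, p_value = stats.shapiro(x)

# A small p-value (here, below an assumed 0.05 level) is evidence against
# normality; otherwise the normal assumption is not rejected.
normal_ok = p_value >= 0.05
```

If normality is rejected on both the original and the log scale, Section 8.2.4.1 recommends switching to a nonparametric procedure.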

8.2.3.3 How To Assess "Outliers"

A measurement that is very different from other values in the data set is sometimes referred to as an "outlier." EPA cautions that the term "outlier" be used advisedly, since a common reaction to the presence of "outlying" values has been to "cleanse the data," thereby removing any "outliers" prior to further analysis. In fact, such discrepant values can occur for many reasons, including (1) a catastrophic event such as a spill or process upset that impacts measurements at the sampling point, (2) inconsistent sampling or analytical chemistry methodology that may result in laboratory contamination or other anomalies, (3) errors in the transcription of data values or decimal points, and (4) true but extreme hazardous constituent measurements.

While any one of these events can cause an apparent "outlier," it should be clear that the appropriate response to an outlier will be very different depending on its origin. Because high values due to contaminated media or waste are precisely what one may be trying to identify, it would not be appropriate to eliminate such data in the guise of "screening for outliers." Furthermore, depending on the form of the underlying population, unusually high concentrations may be real but infrequent, such as might be found in lognormally distributed data. Again, it would not be appropriate to remove such data without adequate justification.

A statistical outlier is defined as a value originating from a different underlying population than the rest of the data set. If the value is not consistent with the distributional behavior of the remaining data and is "too far out in one of the tails" of the assumed underlying population, it may test out as a statistical outlier. Defined as it is strictly in statistical terms, however, an outlier test may identify values as discrepant when no physical reason can be given for the aberrant behavior. For this reason, one should be especially cautious about indiscriminate testing for statistical outliers.

If an outlier is suspected, an initial and helpful step is to construct a probability plot of the data set (see also Section 3.1.3 and USEPA 2000d). A probability plot is designed to judge whether the sample data are consistent with an underlying normal population model. If the rest of the data follow normality, but the outlier comes from a distinctly different population with higher (or lower) concentrations, this behavior will tend to show up on a probability plot as a lone value "out of line" with the remaining observations. If the data are lognormal instead, but the outlier is again from a distinct population, a probability plot on the logged observations should be constructed. Neither of these plots is a formal test; still, they provide invaluable visual evidence as to whether the suspected outlier should really be considered as such.

Methods for conducting outlier tests are described in Chapter 4 of EPA's QA/G-9 guidance document (USEPA 2000d), and statistical tests are available in the DataQUEST software (for example, Rosner's Test and Walsh's Test) (USEPA 1997b).
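As an informal numerical illustration of the probability-plot idea (not one of the formal tests cited above), the sketch below converts plotting positions to normal quantiles, fits the bulk of the data, and measures how far a suspect maximum falls "out of line." The data set and the Blom-type plotting positions are illustrative assumptions.

```python
from statistics import NormalDist, mean, stdev

# Illustrative data with one suspiciously high value.
x = sorted([3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 9.7])
n = len(x)

# Blom-type plotting positions converted to theoretical normal quantiles,
# as would be done to construct a normal probability plot.
quantiles = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
             for i in range(1, n + 1)]

# Rough line through the bulk of the points (excluding the suspect maximum):
# intercept ~ mean of the bulk, slope ~ standard deviation of the bulk.
slope = stdev(x[:-1])
intercept = mean(x[:-1])
predicted_max = intercept + slope * quantiles[-1]

# A large positive departure suggests the maximum is "out of line" with the
# remaining observations; this is visual/heuristic evidence only, not a test.
departure = x[-1] - predicted_max
```

A formal decision would still rely on a documented test (e.g., Rosner's or Walsh's) and, per the caution above, on a physical explanation for the value.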

8.2.4 Select and Perform Statistical Tests

This section provides guidance on how you can select the appropriate statistical test to make a decision about the waste or media that is the subject of the study. It is important to select the appropriate statistical test because decisions and conclusions derived from incorrectly used statistics can be expensive (Singh et al. 1997).

Prior to selecting the statistical test, consider the following factors:

° The objectives of the study (identified in DQO Step 2)
° Whether assumptions of the test are fulfilled
° The nature of the underlying distribution
° The decision rule and null hypothesis (identified in DQO Step 5)
° The relative performance of the candidate tests (for example, parametric tests generally are more efficient than their nonparametric counterparts)
° The proportion of the data that are reported as nondetects (NDs).

The decision tree presented in Figure 37 provides a starting point for selecting the appropriate statistical test. The statistical methods are offered as guidance and should not be used as a "cook book" approach to data analysis. The methods presented here usually will be adequate for the tests conducted under the specified conditions (see also Appendix F). An experienced statistician should be consulted whenever there are questions.

Based on the study objective (DQO Step 2), determine which category of statistical tests to use. Note that the statistical methods recommended in the flow charts in Figure 38 and Figure 39 are for use when the objective is to compare the parameter of interest to a fixed standard. Other methods will be required if the objective is different (e.g., when comparing two populations, detecting trends, or evaluating spatial patterns or relationships of sampling points).

8.2.4.1 Data Transformations in Statistical Tests

Users of this guidance may encounter data sets that show significant evidence of non-normality. Due to the assumption of underlying normality in most parametric tests, a common statistical strategy when encountering this predicament is to search for a mathematical transformation that will lead to normally distributed data on the transformed scale. Unfortunately, because of the complexities associated with interpreting statistical results from data that have been transformed to another scale and the common occurrence of lognormal patterns in environmental data, EPA generally recommends that the choice of scale be limited to either the original measurements (for normal data) or a log-transformed scale (for lognormal data). If neither of these scales results in approximate normality, it is typically easiest and wisest to switch to a nonparametric (or "distribution-free") version of the same test.

If a transformation to the log scale is needed and a confidence limit on the mean is desired, special techniques are required. If a data set exhibits a normal distribution on the log-transformed scale, it is a common mistake to assume that a standard normal-based confidence interval formula can be applied to the transformed data, with the confidence interval endpoints retransformed to the original scale, to obtain the confidence interval on the mean. Invariably, such an interval will be biased to the low side. In fact, the procedure just described actually produces a confidence interval around the median of a lognormal population, rather than the higher mean. To correctly account for this "transformation bias," special procedures are required (Land 1971 and 1975, Gilbert 1987). See Section F.2.3 in Appendix F for detailed guidance on calculating confidence limits for the mean of a lognormal population.
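The bias described above can be demonstrated by simulation: exponentiating the mean of logged lognormal data recovers the geometric mean, which estimates the population median and falls below the arithmetic mean. The distribution parameters below are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Simulate a lognormal population with log-scale mean 1.0 and
# log-scale standard deviation 0.8.
data = [random.lognormvariate(1.0, 0.8) for _ in range(5000)]

arithmetic_mean = sum(data) / len(data)

# Naive back-transform: average the logged data, then exponentiate.
log_mean = sum(math.log(v) for v in data) / len(data)
naive_back_transform = math.exp(log_mean)

# Theory: the lognormal median is e^1.0 ~ 2.72, while the lognormal mean is
# e^(1.0 + 0.8**2 / 2) ~ 3.74, so the naive back-transform is biased low
# relative to the mean -- it targets the median instead.
biased_low = naive_back_transform < arithmetic_mean
```

Land's H-statistic procedure (Appendix F, Section F.2.3) exists precisely to produce an unbiased confidence limit on the lognormal mean.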
[Figure 37. Flow chart for selecting a statistical method. The chart begins by identifying the decision (DQO Step 2) and the parameter of interest (DQO Step 5). If the decision is to test compliance with a fixed standard (e.g., TC or UTS) and the parameter is the mean, go to the flow chart in Figure 38; if the parameter is a percentile or a "not-to-exceed" standard, go to the flow chart in Figure 39. If the decision is to compare two populations, perform a "two-sample" test (see Section 3.4.3). If the decision is to evaluate spatial patterns, conduct a spatial analysis, such as a geostatistical study (see Section 3.4.4). For objectives not discussed in this document, seek other guidance, such as EPA QA/G-9 (USEPA 2000d).]
[Figure 38. Flowchart of statistical methods for comparing the mean to a fixed standard (null hypothesis is "concentration exceeds the standard"). Starting from Figure 37: if more than 50 percent of the results are nondetects, use regression on order statistics, Helsel's robust method, or a test for proportions (see Appendix F, Section F.4.1). If more than 15 percent (but no more than 50 percent) are nondetects, check whether Cohen's model is acceptable (see Appendix F, Section F.4.2); if so, calculate Cohen's adjusted mean and standard deviation and then Cohen's adjusted UCL on the mean. If no more than 15 percent are nondetects, set nondetects equal to one-half the detection limit. Then, if the data are normally distributed, calculate the parametric UCL on the mean (see Appendix F, Section F.2.1). If instead the logged data are normally distributed, transform the data using a natural log and calculate the UCL on the mean using Land's H statistic or other appropriate method (see Appendix F, Section F.2.3, and the cautionary note in that section). Otherwise, calculate the UCL on the mean using the bootstrap or jackknife method (see Appendix F, Section F.2.4).]
[Figure 39. Flowchart of statistical methods for comparing an upper proportion or percentile to a fixed standard (null hypothesis is "concentration exceeds the standard"). Starting from Figure 37: if results are expressed as pass/fail, or if more than 50 percent of the results are nondetects, apply an "exceedance rule" (see Appendix F, Section F.3.2) or a one-sample proportion test (see Appendix F, Section F.3). If more than 15 percent (but no more than 50 percent) are nondetects, check whether Cohen's model is acceptable (see Appendix F, Section F.4.2); if so, calculate Cohen's adjusted mean and standard deviation and then Cohen's adjusted UCL on the upper percentile. If no more than 15 percent are nondetects, set nondetects equal to one-half the detection limit. Then, if the data are normally distributed, calculate the parametric UCL on the upper percentile (see Appendix F, Section F.3.1). If instead the logged data are normally distributed, transform the data using a natural log, calculate the UCL on the logged data, and exponentiate the limit. Otherwise, use a nonparametric test.]
If the number of samples is small, it may not be possible to tell whether the distribution is normal, lognormal, or any other specific function. You are urged not to read too much into small data sets and not to attempt overly sophisticated evaluations of data distributions based on limited information. If the distribution of data appears to be highly skewed, it is best to take operational measures (such as more samples or samples of a larger physical size) to better characterize the waste.

8.2.4.2 Treatment of Nondetects

If no more than approximately 15 percent of the samples for a given constituent are nondetect (i.e., reported as below a detection or quantitation limit), the results of parametric statistical tests will not be substantially affected if nondetects are replaced by half their detection limits (known as a substitution method) (USEPA 1992b). When a larger percentage of the sample analysis results are nondetect, however, the treatment of nondetects is more crucial to the outcome of statistical procedures. Indeed, simple substitution methods (such as replacing each nondetect with one-half the detection limit) tend to perform poorly in statistical tests when the nondetect percentage is substantial (Gilliom and Helsel 1986, Helsel 1990).
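A minimal sketch of the substitution method follows, assuming an illustrative record layout and detection limit; the 15 percent cutoff follows the guidance above.

```python
# Results for one constituent; "nd" marks a nondetect reported only as
# below its detection limit "dl". (All values are illustrative.)
results = [
    {"conc": 2.4,  "nd": False, "dl": 0.5},
    {"conc": 1.8,  "nd": False, "dl": 0.5},
    {"conc": None, "nd": True,  "dl": 0.5},   # nondetect
    {"conc": 3.1,  "nd": False, "dl": 0.5},
    {"conc": 2.0,  "nd": False, "dl": 0.5},
    {"conc": 2.7,  "nd": False, "dl": 0.5},
    {"conc": 1.5,  "nd": False, "dl": 0.5},
]

nd_fraction = sum(r["nd"] for r in results) / len(results)

# Simple substitution is only reasonable when the nondetect share is small
# (roughly 15 percent or less, per the guidance above).
if nd_fraction <= 0.15:
    values = [r["dl"] / 2 if r["nd"] else r["conc"] for r in results]
else:
    raise ValueError("Too many nondetects for simple substitution; "
                     "see Appendix F, Section F.4 for other methods.")

mean_conc = sum(values) / len(values)
```

With a substantial nondetect percentage, the branch above would instead point to the methods in Figures 38 and 39 (Cohen's adjustment, regression on order statistics, Helsel's robust method, or a test for proportions).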

Guidance on selecting an approach for handling nondetects in statistical intervals is given in Appendix F, Section F.4. Guidance also is given in Section 4.7 of EPA's Guidance for Data Quality Assessment: Practical Methods for Data Analysis, EPA QA/G-9 (USEPA 2000d).

8.2.5 Draw Conclusions and Report Results

The final step in the DQA Process is to draw conclusions from the data, determine if further sampling is required, and report the results. This step brings the planning, implementation, and assessment process "full circle" in that you attempt to resolve the problem and make the decision identified in Steps 1 and 2 of the DQO Process.

In the DQO Process, you establish a "null hypothesis" and attempt to gather evidence via sampling that will allow you to reject that hypothesis; otherwise, the null hypothesis must be accepted. If the decision-making process involves use of a statistical method (such as the calculation of a statistical confidence limit or use of a statistical hypothesis test), then the outcome of the statistical test should be reported along with the uncertainty associated with the result. If other decision-making criteria are used (such as a simple exceedance rule or a "weight of evidence" approach), then the outcome of that decision-making process should be reported.

Detailed guidance on the use and interpretation of statistical methods for decision making can be found in Appendix F. Additional guidance can be found in EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d).
[Figure 40. Using confidence limits on the mean to compare waste concentrations to a fixed standard. Null hypothesis: "Mean concentration exceeds the standard." Three cases (A, B, and C) show a confidence interval (LCL to UCL around the sample mean) plotted against the standard on a concentration axis. Case A: the interval falls below the standard; conclusion: the mean is less than the standard. Case B: the interval straddles the standard; conclusion: take more samples, otherwise conclude the mean exceeds the standard. Case C: the interval lies above the standard; conclusion: the mean exceeds the standard.]
Most of the statistical methods suggested in this document involve the construction of one-sided confidence limits (or bounds). The upper confidence limit, whether calculated on a mean, median, or percentile, provides a value below which one can claim, with specified confidence, that the true value of the parameter lies.
Figure 40 demonstrates how you can use a confidence limit to test a hypothesis: In the situation depicted at "A," the upper confidence limit calculated from the sample data is less than the applicable standard and provides the evidence needed to reject the null hypothesis. The decision can be made that the waste concentration is below the standard with sufficient confidence and without further analysis.

In situation "B," we cannot reject the null hypothesis; however, because the interval "straddles" the standard, it is possible that the true mean lies below the standard and a Type II (false acceptance) error has been made (i.e., to conclude the concentration is above the standard when in fact it is not). One possible remedy to this situation is to obtain more data to "tighten" the confidence interval.

In situation "C," the Type II (false acceptance) decision error rate is satisfied and we must conclude that the mean concentration exceeds the standard.

One simple method for checking the performance of the statistical test is to use the information obtained from the samples to retrospectively estimate the number of samples required. For example, the sample variance can be input into the sample size equation used (see Sections 5.4 and 5.5, DQO Process Step 7). (An example of this approach is presented in Appendix I.) If this theoretical sample size is less than or equal to the number of samples actually taken, then the test is sufficiently powerful. If the required number of samples is greater than the number actually collected, then additional samples would be required to satisfy the data user's performance criteria for the statistical test. See EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d) for additional guidance on this topic.
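The retrospective check can be sketched using one common form of the sample-size equation for comparing a mean to a fixed standard (the exact equations this guidance uses are in Sections 5.4 and 5.5; the inputs below are illustrative assumptions).

```python
from math import ceil

# One common sample-size formula for a one-sample comparison of a mean to a
# fixed standard (assumed here for illustration):
#   n >= (z_(1-alpha) + z_(1-beta))**2 * s**2 / delta**2
z_alpha = 1.645    # standard normal quantile for alpha = 0.05
z_beta = 1.282     # standard normal quantile for beta = 0.10 (90% power)

s2 = 0.36          # sample variance from the data actually collected
delta = 0.5        # width of the gray region (illustrative)
n_collected = 10   # number of samples actually taken

n_required = ceil((z_alpha + z_beta) ** 2 * s2 / delta ** 2)

# If n_required <= n_collected, the test was sufficiently powerful;
# otherwise additional samples are needed to meet the performance criteria.
sufficient = n_required <= n_collected
```

Here the retrospective estimate exceeds the number collected, so the data user's decision error rates were not met and additional sampling would be indicated.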

Finally, if a simple exceedance rule is used to measure compliance with a standard, then interpretation of the results is more straightforward. For example, if zero exceedances are allowed and one or more samples exceeds the standard, then there is evidence of noncompliance with that standard (see Appendix F, Section F.3.2).
APPENDIX A

GLOSSARY OF TERMS*

Accuracy - A measure of the closeness of an individual measurement or the average of a number of measurements to the true value. Accuracy includes a combination of random error (precision) and systematic error (bias) components that are due to sampling and analytical operations. EPA recommends using the terms "precision" and "bias," rather than the term "accuracy," to convey the information usually associated with accuracy. Pitard (1993) indicates that a sample is accurate when the absolute value of the bias is smaller than an acceptable standard of accuracy.

Action Level - The numerical value that causes the decision maker to choose one of the alternative actions (for example, compliance or noncompliance). It may be a regulatory threshold standard, such as the maximum contaminant level for drinking water, a risk-based concentration level, a technological limitation, or a reference-based standard (ASTM D 5792-95).

Alternative Hypothesis - See Hypothesis.

Assessment - The evaluation process used to measure the performance or effectiveness of a system and its elements. As used here, assessment is an all-inclusive term used to denote any of the following: audit, performance evaluation (PE), management systems review (MSR), peer review, inspection, or surveillance.

Audit (quality) - A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Audit of Data Quality - A qualitative and quantitative evaluation of the documentation and procedures associated with environmental measurements to verify that the resulting data are of acceptable quality.

Baseline Condition - A tentative assumption to be proven either true or false. When hypothesis testing is applied to a site assessment decision, the data are used to choose between a presumed baseline condition of the environment and an alternative condition. The baseline condition is retained until overwhelming evidence indicates that the baseline condition is false. This is often called the null hypothesis in statistical tests.

Bias - The systematic or persistent distortion of a measured value from its true value (this can occur during sampling design, the sampling process, or laboratory analysis).

* The definitions in this appendix are from USEPA 1998a, 2000b, 2000e, and 2001b, unless otherwise noted. Some definitions were modified based on comments received from technical reviewers during development of this document. These definitions do not constitute the Agency's official use of the terms for regulatory purposes and should not be construed to alter or supplant other terms in use.

Note: Terms in italics also are defined in this glossary.
Blank - A sample that is intended to contain none of the analytes of interest and is subjected to the usual analytical or measurement process to establish a zero baseline or background value. Sometimes used to adjust or correct routine analytical results. A blank is used to detect contamination during sample handling, preparation, and/or analysis (see also Rinsate, Method Blank, Trip Blank, and Field Blank).

Boundaries - The spatial and temporal limits and practical constraints under which environmental data are collected. Boundaries specify the area or volume (spatial boundary) and the time period (temporal boundary) to which the decision will apply. Samples are then collected within these boundaries.

Calibration - Comparison of a measurement standard, instrument, or item with a standard or instrument of higher accuracy to detect and quantify inaccuracies and to report or eliminate those inaccuracies by adjustments. Calibration also is used to quantify instrument measurements of a given concentration in a given sample.

Calibration Drift - The deviation in instrument response from a reference value over a period of time before recalibration.

Chain of Custody - An unbroken trail of accountability that ensures the physical security of samples, data, and records.

Characteristic - Any property or attribute of a datum, item, process, or service that is distinct, describable, and/or measurable.

Coefficient of Variation (CV) - A dimensionless quantity used to measure the spread of data relative to the size of the numbers. For a normal distribution, the coefficient of variation is given by CV = s/x̄ (the sample standard deviation divided by the sample mean). Also known as the relative standard deviation (RSD).
Colocated Samples - Two or more portions collected as close as possible at the same point in time and space so as to be considered identical. If obtained in the field, these samples also are known as "field replicates."

Comparability - A measure of the confidence with which one data set or method can be compared to another.

Completeness - A measure of the amount of valid data obtained from a measurement system compared to the amount that was expected to be obtained under correct, normal conditions.

Component - An easily identified item such as a large crystal, an agglomerate, rod, container, block, glove, piece of wood, or concrete (ASTM D 5956-96). An elementary part or a constituent that can be separated and quantified by analysis (Pitard 1993).

Composite Sample - A physical combination of two or more samples (ASTM D 6233-98). A sample collected across a temporal or spatial range that typically consists of a set of discrete samples (or "individual" samples) that are combined or "composited." Area-wide or long-term compositing should not be confused with localized compositing, in which a sample of the desired support is created from many small increments taken at a single location. Four types of composite samples are listed below:
1. Time Composite - a sample comprising a varying number of discrete samples collected at equal time intervals during the compositing period. The time composite sample is typically used to sample waste water or streams.

2. Flow Proportioned Composite (FPC) - a sample collected proportional to the flow during the compositing period by either a time-varying/constant volume (TVCV) or a time-constant/varying volume (TCVV) method. The TVCV method typically is used with automatic samplers that are paced by a flow meter. The TCVV method is a manual method that individually proportions a series of discretely collected samples. The FPC is typically used when sampling waste water.

3. Areal Composite - a sample composited from individual equal-size samples collected on an areal or horizontal cross-sectional basis. Each discrete sample is collected in an identical manner. Examples include sediment composites from quarter-point sampling of streams and soil samples from within grids.

4. Vertical Composite - a sample composited from individual equal samples collected from a vertical cross section. Each discrete sample is collected in an identical manner. Examples include vertical profiles of soil/sediment columns, lakes, and estuaries (USEPA 1996c).

Confidence Level - The probability, usually expressed as a percent, that a confidence interval will contain the parameter of interest (ASTM D 5792-95). Also known as the confidence coefficient.

Confidence Limits - Upper and/or lower limit(s) within which the true value of a parameter is likely to be contained with a stated probability or confidence (ASTM D 6233-98).

Conformance - An affirmative indication or judgment that a product or service has met the requirements of the relevant specifications, contract, or regulation; also the state of meeting the requirements.

Consensus Standard - A standard established by a group representing a cross section of a particular industry or trade, or a part thereof.

Control Sample - A quality control sample introduced into a process to monitor the performance of the system (from Chapter One, SW-846).

Data Collection Design - A design that specifies the configuration of the environmental monitoring effort to satisfy the data quality objectives. It includes: the types of samples or monitoring information to be collected; where, when, and under what conditions they should be collected; what variables are to be measured; and the quality assurance/quality control (QA/QC) components that ensure acceptable sampling design error and measurement error to meet the decision error rates specified in the DQOs. The data collection design is the principal part of the quality assurance project plan (QAPP).
Data of Known Quality - Data that have the qualitative and quantitative components associated with their derivation documented appropriately for their intended use, and when such documentation is verifiable and defensible.

Data Quality Assessment (DQA) Process - A statistical and scientific evaluation of the data set to assess the validity and performance of the data collection design and statistical test and to establish whether a data set is adequate for its intended use.

Data Quality Indicators (DQIs) - The quantitative statistics and qualitative descriptors that are used to interpret the degree of acceptability or utility of data to the user. The principal data quality indicators are bias, precision, accuracy (precision and bias are preferred terms), comparability, completeness, and representativeness.

Data Quality Objectives (DQOs) - Qualitative and quantitative statements derived from the DQO Process that clarify study technical and quality objectives, define the appropriate type of data, and specify tolerable levels of potential decision errors that will be used as the basis for establishing the quality and quantity of data needed to support decisions.

Data Quality Objectives (DQO) Process - A systematic strategic planning tool based on the scientific method that identifies and defines the type, quality, and quantity of data needed to satisfy a specified use. The key elements of the process include:

° concisely defining the problem
° identifying the decision to be made
° identifying the key inputs to that decision
° defining the boundaries of the study
° developing the decision rule
° specifying tolerable limits on potential decision errors
° selecting the most resource-efficient data collection design.

Data Reduction - The process of transforming the number of data items by arithmetic or statistical calculations, standard curves, and concentration factors, and collating them into a more useful and understandable form. Data reduction generally results in a reduced data set and an associated loss of detail.

Data Usability - The process of ensuring or determining whether the quality of the data produced meets the intended use of the data.

Data Validation - See Validation.

Debris - Under 40 CFR 268.2(g) (Land Disposal Restrictions regulations), debris includes "solid material exceeding a 60 mm particle size that is intended for disposal and that is a manufactured object; or plant or animal matter; or natural geologic material." Section 268.2(g) also identifies materials that are not debris. In general, debris includes materials of either a large particle size or variation in the items present. When the constituent items are more than 2 or 3 inches in size or are of different compositions, representative sampling becomes more difficult.

Decision Error - An error made when drawing an inference from data in the context of hypothesis testing, such that variability or bias in the data misleads the decision maker to draw a conclusion that is inconsistent with the true or actual state of the population under study. See also False Negative Decision Error and False Positive Decision Error.

Decision
Performance
Curve
­
A
graphical
representation
of
the
quality
of
a
decision
process.
In
statistical
terms
it
is
known
as
a
power
curve
or
function
(or
a
reverse
power
curve
depending
on
the
hypotheses
being
tested).

Decision
Performance
Goal
Diagram
(DPGD)
­
A
graphical
representation
of
the
tolerable
risks
of
decision
errors.
It
is
used
in
conjunction
with
the
decision
performance
curve.

Decision
Unit
­
A
volume
or
mass
of
material
(such
as
waste
or
soil)
about
which
a
decision
will
be
made.

Defensible
­
The
ability
to
withstand
any
reasonable
challenge
related
to
the
veracity,
integrity,
or
quality
of
the
logical,
technical,
or
scientific
approach
taken
in
a
decision­
making
process.

Design
­
Specifications,
drawings,
design
criteria,
and
performance
requirements.
Also,
the
result
of
deliberate
planning,
analysis,
mathematical
manipulations,
and
design
processes
(such
as
experimental
design
and
sampling
design).

Detection
Limit
­
A
measure
of
the
capability
of
an
analytical
method
to
distinguish
samples
that
do
not
contain
a
specific
analyte
from
samples
that
contain
low
concentrations
of
the
analyte.
The
lowest
concentration
or
amount
of
the
target
analyte
that
can
be
determined
to
be
different
from
zero
by
a
single
measurement
at
a
stated
level
of
probability.
Detection
limits
are
analyte­
and
matrix­
specific
and
may
be
laboratory­
dependent.
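
As an illustration of how such a limit can be estimated (a hedged sketch, not part of this glossary): one widely used formulation, the method detection limit (MDL) procedure of 40 CFR Part 136, Appendix B, multiplies the standard deviation of at least seven replicate low-level spiked-sample results by the one-sided Student's t value at the 99% confidence level. The replicate values below are hypothetical:

```python
import statistics

# Hypothetical replicate results (ug/L) for a low-level spiked sample.
replicates = [0.52, 0.44, 0.49, 0.55, 0.47, 0.51, 0.48]

# One-sided Student's t value for n - 1 = 6 degrees of freedom at the
# 99% confidence level, as tabulated in 40 CFR Part 136, Appendix B.
t_99 = 3.143

# MDL estimate: t value times the sample standard deviation of the replicates.
mdl = t_99 * statistics.stdev(replicates)
```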

Discrete Sample - A sample that represents a single location or short time interval. A discrete sample can be composed of more than one increment. The term has the same meaning as "individual sample."

Distribution - A probability function (density function, mass function, or distribution function) used to describe a set of observations (statistical sample) or a population from which the observations are generated.

Duplicate Samples - Two samples taken from and representative of the same population and carried through all steps of the sampling and analytical procedures in an identical manner. Duplicate samples are used to assess the variance of the total method, including sampling and analysis. See also Colocated Sample and Field Duplicate Samples.

Dynamic Work Plan - A work plan that allows the project team to make decisions in the field about how subsequent site activities will progress (for example, by using field analytical methods that provide near real-time sample analysis results). Dynamic work plans provide the strategy for how dynamic field activities will take place. As such, they document a flexible, adaptive sampling and analytical strategy. (Adopted from the EPA Superfund web site at http://www.epa.gov/superfund/programs/dfa/dynwork.htm).

Environmental Conditions - The description of a physical medium (e.g., air, water, soil, sediment) or a biological system expressed in terms of its physical, chemical, radiological, or biological characteristics.

Environmental Data - Any measurements or information that describe environmental processes, location, or conditions; ecological or health effects and consequences; or the performance of environmental technology. For EPA, environmental data include information collected directly from measurements, produced from models, and compiled from other sources, such as databases or the scientific literature.

Environmental Monitoring - The process of measuring or collecting environmental data for evaluating a change in the environment (e.g., ground-water monitoring).

Environmental Processes - Manufactured or natural processes that produce discharges to or that impact the ambient environment.

Equipment Blank - See Rinsate.

Estimate - A characteristic from the sample from which inferences about population parameters can be made.

Evaluation - See Validation.

Evidentiary Records - Records identified as part of litigation and subject to restricted access, custody, use, and disposal.

False Negative (False Acceptance) Decision Error (β) - A false negative decision error occurs when the decision maker does not reject the null hypothesis when the null hypothesis actually is false. In statistical terminology, a false negative decision error also is called a Type II error. The measure of the size of the error is expressed as a probability, usually referred to as "beta" (β). This probability also is called the complement of power (where "power" is expressed as 1 − β).

False Positive (False Rejection) Decision Error (α) - A false positive decision error occurs when a decision maker rejects the null hypothesis when the null hypothesis is true. In statistical terminology, a false positive decision error also is called a Type I error. The measure of the size of the error is expressed as a probability, usually referred to as "alpha" (α), the "level of significance," or "size of the critical region."

Field Blank - A blank used to provide information about contaminants that may be introduced during sample collection, storage, and transport. The clean sample is carried to the sampling site, exposed to sampling conditions, returned to the laboratory, and treated as an environmental sample.

Field Duplicates - Independent samples that are collected as close as possible to the same point in space and time. Two separate samples are taken from the same source, stored in separate containers, and analyzed independently. These duplicates are useful in documenting the precision of the sampling process (from Chapter One, SW-846, July 1992).

Field (matrix) Spike - A sample prepared at the sampling point (i.e., in the field) by adding a known mass of the target analyte to a specified amount of the sample. Field matrix spikes are used, for example, to determine the effect of the sample preservation, shipment, storage, matrix, and preparation on analyte recovery efficiency (the analytical bias).

Field Split Samples - Two or more representative portions taken from the same sample and usually submitted for analysis to different laboratories to estimate interlaboratory precision.

Fundamental Error - The fundamental error results when discrete units of the material to be sampled have different compositions with respect to the property of interest. The error is referred to as "fundamental" because it is an incompressible minimum sampling error that depends on the mass, composition, shape, fragment size distribution, and liberation factor of the material and is not affected by homogenization or mixing. The fundamental error is the only error that remains when the sampling operation is "perfect," i.e., when all parts of the sample are obtained in a probabilistic manner and each part is independent. The fundamental error is never zero (unless the population is completely homogeneous or the entire population is submitted for exhaustive analysis) and it never "cancels out." It can be reduced by taking larger physical samples and by using particle-size reduction steps in preparing the analytical sample.

Geostatistics - A branch of statistics, originating in the mining industry and greatly developed in the 1950s, that assesses the spatial correlation among samples and incorporates this information into the estimates of population parameters.

Goodness-of-Fit Test - In general, the level of agreement between an observed set of values and a set wholly or partly derived from a model of the data.
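
One common goodness-of-fit measure is Pearson's chi-square statistic, which compares observed counts with the counts a model predicts. A minimal sketch with hypothetical counts (the function and data are illustrative, not from this guidance):

```python
def chi_square_statistic(observed, expected):
    """Pearson goodness-of-fit statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 21, 19, 20, 20]   # hypothetical counts, e.g., 120 die rolls
expected = [20] * 6                   # counts expected under a uniform model

stat = chi_square_statistic(observed, expected)
# Compare stat against a chi-square critical value with k - 1 = 5 degrees
# of freedom (about 11.07 at the 0.05 level) to judge the model's fit.
```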

Grab Sample - A one-time sample taken from any part of the waste (62 FR 91, page 26047, May 12, 1997).

Graded Approach - The process of basing the level of application of managerial controls applied to an item or work according to the intended use of the results and the degree of confidence needed in the quality of the results. (See also Data Quality Objectives Process.)

Gray Region - A range of values of the population parameter of interest (such as mean contaminant concentration) within which the consequences of making a decision error are relatively minor. The gray region is bounded on one side by the action level. The width of the gray region is denoted by Δ in this guidance.

Guidance - A suggested practice that is not mandatory, but rather intended as an aid or example in complying with a standard or requirement.

Guideline - A suggested practice that is nonmandatory in programs intended to comply with a standard.

Hazardous Waste - Any waste material that satisfies the definition of "hazardous waste" as given in 40 CFR Part 261, "Identification and Listing of Hazardous Waste."

Heterogeneity - The condition of the population under which items of the population are not identical with respect to the parameter of interest (ASTM D 6233-98). (See Section 6.2.1).

Holding Time - The period of time a sample may be stored prior to its required analysis. While exceeding the holding time does not necessarily negate the veracity of analytical results, it causes the qualifying or "flagging" of any data not meeting all of the specified acceptance criteria.

Homogeneity - The condition of the population under which all items of the population are identical with respect to the parameter of interest (ASTM D 6233-98). The condition of a population or lot in which the elements of that population or lot are identical; it is an inaccessible limit and depends on the "scale" of the elements.

Hot Spots - Strata that contain high concentrations of the characteristic of interest and are relatively small in size when compared with the total size of the materials being sampled (ASTM D 6009-96).

Hypothesis - A tentative assumption made to draw out and test its logical or empirical consequences. In hypothesis testing, the hypothesis is labeled "null" (for the baseline condition) or "alternative," depending on the decision maker's concerns for making a decision error. The baseline condition is retained until overwhelming evidence indicates that the baseline condition is false. See also Baseline Condition.

Identification Error - The misidentification of an analyte. In this error type, the contaminant of concern is unidentified and the measured concentration is incorrectly assigned to another contaminant.

Increment - A group of particles extracted from a batch of material in a single operation of the sampling device. It is important to make a distinction between an increment and a sample that is obtained by the reunion of several increments (from Pitard 1989).

Individual Sample - See Discrete Sample.

Inspection - The examination or measurement of an item or activity to verify conformance to specific requirements.

Internal Standard - A standard added to a test portion of a sample in a known amount and carried through the entire determination procedure as a reference for calibrating and assessing the precision and bias of the applied analytical method.

Item - An all-inclusive term used in place of the following: appurtenance, facility, sample, assembly, component, equipment, material, module, part, product, structure, subassembly, subsystem, system, unit, documented concepts, or data.

Laboratory Split Samples - Two or more representative portions taken from the same sample for laboratory analysis. Often analyzed by different laboratories to estimate the interlaboratory precision or variability and the data comparability.

Limit of Quantitation - The minimum concentration of an analyte or category of analytes in a specific matrix that can be identified and quantified above the method detection limit and within specified limits of precision and bias during routine analytical operating conditions.

Limits on Decision Errors - The tolerable maximum decision error probabilities established by the decision maker. Potential economic, health, ecological, political, and social consequences of decision errors should be considered when setting the limits.

Matrix Spike - A sample prepared by adding a known mass of a target analyte to a specified amount of sample matrix for which an independent estimate of the target analyte concentration is available. Spiked samples are used, for example, to determine the effect of the matrix on a method's recovery efficiency.

Mean (arithmetic) (x̄) - The sum of all the values of a set of measurements divided by the number of values in the set; a measure of central tendency.

Mean Square Error (MSE) - A statistical term equivalent to the variance added to the square of the bias. An overall measure of the representativeness of a sample.
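
The relationship named above (MSE = variance + bias²) can be checked numerically. A minimal sketch with hypothetical measurements of a known true value:

```python
import statistics

true_value = 10.0
measurements = [10.4, 10.6, 10.5, 10.3, 10.7]   # hypothetical biased results

bias = statistics.mean(measurements) - true_value
variance = statistics.pvariance(measurements)    # variance of the measurements
mse = variance + bias ** 2                       # MSE = variance + bias^2

# Equivalently, MSE is the mean squared deviation from the true value.
check = statistics.mean([(x - true_value) ** 2 for x in measurements])
```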

Measurement Error - The difference between the true or actual state and that which is reported from measurements.

Median - The middle value for an ordered set of values. Represented by the central value when n is odd or by the average of the two most central values when n is even. The median is the 50th percentile.
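
Both measures of central tendency above are available in Python's standard library; a small example with hypothetical results:

```python
import statistics

measurements = [3.1, 2.7, 3.4, 2.9, 3.0, 3.3]   # hypothetical results, n = 6 (even)

mean = statistics.mean(measurements)      # sum of the values divided by n
median = statistics.median(measurements)  # n is even: average of the two central values
```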

Medium - A substance (e.g., air, water, soil) that serves as a carrier of the analytes of interest.

Method - A body of procedures and techniques for performing an activity (e.g., sampling, chemical analysis, quantification) systematically presented in the order in which they are to be executed.

Method Blank - A blank prepared to represent the sample matrix as closely as possible and analyzed exactly like the calibration standards, samples, and QC samples. Results of method blanks provide an estimate of the within-batch variability of the blank response and an indication of bias introduced by the analytical procedure.

Natural Variability - The variability that is inherent or natural to the media, objects, or subjects being studied.

Nonparametric - A term describing statistical methods that do not assume a particular population probability distribution, and are therefore valid for data from any population with any probability distribution, which can remain unknown (Conover 1999).

Null Hypothesis - See Hypothesis.

Observation - (1) An assessment conclusion that identifies a condition (either positive or negative) that does not represent a significant impact on an item or activity. An observation may identify a condition that has not yet caused a degradation of quality. (2) A datum.

Outlier - An observation that is shown to have a low probability of belonging to a specified data population.

Parameter - A quantity, usually unknown, such as a mean or a standard deviation characterizing a population. Commonly misused for "variable," "characteristic," or "property."

Participant - When used in the context of environmental programs, an organization, group, or individual that takes part in the planning and design process and provides special knowledge or skills to enable the planning and design process to meet its objective.

Percent Relative Standard Deviation (%RSD) - The quantity 100(RSD)%.

Percentile - The specific value of a distribution that divides the distribution such that p percent of the distribution is equal to or below that value. For example, if we say "the 95th percentile is X," then it means that 95 percent of the values in the statistical sample are less than or equal to X.
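
A minimal sketch of the empirical percentile just described: the smallest sample value X such that at least p percent of the values are less than or equal to X (one of several common percentile conventions; the helper below is illustrative):

```python
import math

def percentile(values, p):
    """Smallest value X such that at least p percent of the values are <= X."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered))   # first rank reaching p percent
    return ordered[max(k - 1, 0)]

data = [2, 4, 4, 5, 7, 9, 10, 12, 15, 40]   # hypothetical results
p95 = percentile(data, 95)   # 95 percent of the values are <= this result
```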

Planning Team - The group of people that will carry out the DQO Process. Members include the decision maker (senior manager), representatives of other data users, senior program and technical staff, someone with statistical expertise, and a QA/QC advisor (such as a QA Manager).

Population - The total collection of objects, media, or people to be studied and from which a sample is to be drawn. The totality of items or units under consideration (ASTM D 5956-96).

Precision - A measure of mutual agreement among individual measurements of the same property, usually under prescribed similar conditions, expressed generally in terms of the sample standard deviation. See also the definition for precision in Chapter One, SW-846.

Probabilistic Sample - See Statistical Sample.

Process - A set of interrelated resources and activities that transforms inputs into outputs. Examples of processes include analysis, design, data collection, operation, fabrication, and calculation.

Qualified Data - Any data that have been modified or adjusted as part of statistical or mathematical evaluation, data validation, or data verification operations.

Quality - The totality of features and characteristics of a product (including data) or service that bears on its ability to meet the stated or implied needs and expectations of the user (i.e., fitness for use).

Quality Assurance (QA) - An integrated system of management activities involving planning, implementation, assessment, reporting, and quality improvement to ensure that a process, item, or service is of the type and quality needed and expected by the client.

Quality Assurance Manager - The individual designated as the principal manager within the organization having management oversight and responsibilities for planning, coordinating, and assessing the effectiveness of the quality system for the organization.

Quality Assurance Project Plan (QAPP) - A formal document describing, in comprehensive detail, the necessary QA, QC, and other technical activities that must be implemented to ensure that the results of the work performed will satisfy the stated performance criteria.

Quality Control (QC) - The overall system of technical activities that measures the attributes and performance (quality characteristics) of a process, item, or service against defined standards to verify that they meet the stated requirements established by the customer. Operational techniques and activities that are used to fulfill requirements for quality. The system of activities and checks used to ensure that measurement systems are maintained within prescribed limits, providing protection against "out-of-control" conditions and ensuring the results are of acceptable quality.

Quality Control (QC) Sample - An uncontaminated sample matrix spiked with known amounts of analytes from a source independent of the calibration standards. Generally used to establish intralaboratory or analyst-specific precision and bias or to assess the performance of all or a portion of the measurement system.

Quality Management - That aspect of the overall management system of the organization that determines and implements the quality policy. Quality management includes strategic planning, allocation of resources, and other systematic activities (e.g., planning, implementation, and assessment) pertaining to the quality system.

Quality Management Plan - A formal document that describes the quality system in terms of the organization's structure, the functional responsibilities of management and staff, the lines of authority, and the required interfaces for those planning, implementing, and assessing all activities conducted.

Quality System - A structured and documented management system describing the policies, objectives, principles, organizational authority, responsibilities, accountability, and implementation plan of an organization for ensuring quality in its work processes, products (items), and services. The quality system provides the framework for planning, implementing, and assessing work performed by the organization and for carrying out required QA and QC.

Random Error - The chance variation encountered in all measurement work, characterized by the random occurrence of deviations from the mean value.

Range - The numerical difference between the minimum and maximum of a set of values.

Relative Standard Deviation - See Coefficient of Variation.

Remediation - The process of reducing the concentration of a contaminant (or contaminants) in air, water, or soil media to a level that poses an acceptable risk to human health.

Repeatability - The degree of agreement between independent test results produced by the same analyst using the same test method and equipment on random aliquots of the same sample within a short time period; that is, the within-run precision of a method or set of measurements.

Reporting Limit - The lowest concentration or amount of the target analyte required to be reported from a data collection project. Reporting limits are generally greater than detection limits and usually are not associated with a probability level.

Representative Sample - RCRA regulations define a representative sample as "a sample of a universe or whole (e.g., waste pile, lagoon, ground water) which can be expected to exhibit the average properties of the universe or whole" (40 CFR § 260.10).

Representativeness - A measure of the degree to which data accurately and precisely represent a characteristic of a population, parameter variations at a sampling point, a process condition, or an environmental condition.

Reproducible - The condition under which there is no statistically significant difference in the results of measurements of the same sample made at different laboratories.

Reproducibility - The degree of agreement between independent test results produced by the same method or set of measurements for very similar, but not identical, conditions (e.g., at different times, by different technicians, using different glassware, laboratories, or samples); that is, the between-run precision of a method or set of measurements.

Requirement - A formal statement of a need and the expected manner in which it is to be met.

Rinsate (Equipment Rinsate) - A sample of analyte-free medium (such as HPLC-grade water for organics or reagent-grade deionized or distilled water for inorganics) which has been used to rinse the sampling equipment. It is collected after completion of decontamination and prior to sampling. This blank is useful in documenting the adequate decontamination of sampling equipment (modified from Chapter One, SW-846).

Sample - A portion of material that is taken from a larger quantity for the purpose of estimating the properties or the composition of the larger quantity (ASTM D 6233-98).

Sample Support - See Support.

Sampling - The process of obtaining representative samples and/or measurements of a population or subset of a population.

Sampling Design Error - The error due to observing only a limited number of the total possible values that make up the population being studied. It should be distinguished from: errors due to imperfect selection; bias in response; and errors of observation, measurement, or recording, etc.

Scientific Method - The principles and processes regarded as necessary for scientific investigation, including rules for concept or hypothesis formulation, conduct of experiments, and validation of hypotheses by analysis of observations.

Sensitivity - The capability of a method or instrument to discriminate between measurement responses representing different levels of a variable of interest (i.e., the slope of the calibration).

Set of Samples - More than one individual sample.

Split Samples - Two or more representative portions taken from one sample and often analyzed by different analysts or laboratories as a type of QC sample used to assess analytical variability and comparability.

Standard Deviation - A measure of the dispersion or imprecision of a sample or population distribution expressed as the positive square root of the variance and that has the same unit of measurement as the mean. See Variance.
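
The sample standard deviation, and the percent relative standard deviation (%RSD) defined earlier in this glossary, can be computed directly; a small example with hypothetical replicate results:

```python
import statistics

results = [48.0, 52.0, 50.0, 49.0, 51.0]   # hypothetical replicate results

mean = statistics.mean(results)
sd = statistics.stdev(results)     # positive square root of the sample variance
percent_rsd = 100 * sd / mean      # %RSD = 100 (sd / mean)
```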

Standard Operating Procedure (SOP) - A written document that details the method for an operation, analysis, or action with thoroughly prescribed techniques and steps and that is officially approved (usually by the quality assurance officer) as the method for performing certain routine or repetitive tasks.

Statistic - A function of the sample measurements; e.g., the sample mean or standard deviation. A statistic usually, but not necessarily, serves as an estimate of a population parameter. A summary value calculated from a sample of observations.

Statistical Sample - A set of samples or measurements selected by probabilistic means (i.e., by using some form of randomness). Also known as a probabilistic sample.

Statistical Test - Any statistical method that is used to determine the acceptance or rejection of a hypothesis.

Stratum - A subgroup of a population separated in space or time, or both, from the remainder of the population and being internally consistent with respect to a target constituent or property of interest and different from adjacent portions of the population (ASTM D 5956-96).

Subsample - A portion of material taken from a larger quantity for the purpose of estimating properties or the composition of the whole sample (ASTM D 4547-98).

Support - The physical volume or mass, orientation, and shape of a sample, subsample, or decision unit.

Surrogate Spike or Analyte - A pure substance with properties that mimic the analyte of interest. It is unlikely to be found in environmental samples and is added to them to establish that the analytical method has been performed properly.

Technical Review - A documented critical review of work that has been performed within the state of the art. The review is accomplished by one or more qualified reviewers who are independent of those who performed the work, but are collectively equivalent in technical expertise to those who performed the original work. The review is an in-depth analysis and evaluation of documents, activities, material, data, or items that require technical verification or validation for applicability, correctness, adequacy, completeness, and assurance that established requirements are satisfied.

Total Study Error - The combination of sampling design error and measurement error.

Traceability - The ability to trace the history, application, or location of an entity by means of recorded identifications. In a calibration sense, traceability relates measuring equipment to national or international standards, primary standards, basic physical constants or properties, or reference materials. In a data collection sense, it relates calculations and data generated throughout the project back to the requirements for the project's quality.

Trip Blank - A clean sample of a matrix that is taken to the sampling site and transported to the laboratory for analysis without having been exposed to sampling procedures. A trip blank is used to document contamination attributable to shipping and field handling procedures. This type of blank is useful in documenting contamination of volatile organics samples.

True - Being in accord with the actual state of affairs.

Type I Error (α) - A Type I error occurs when a decision maker rejects the null hypothesis when it is actually true. See also False Positive Decision Error.

Type II Error (β) - A Type II error occurs when the decision maker fails to reject the null hypothesis when it is actually false. See also False Negative Decision Error.
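
The probability α can be demonstrated by simulation. A hedged sketch (the test and numbers are hypothetical, not from this guidance): when the null hypothesis is true, here a fair coin, rejecting whenever |z| exceeds 1.96 should produce Type I errors at a rate near α = 0.05:

```python
import random

random.seed(1)                  # fixed seed so the run is repeatable
n, trials = 400, 2000
rejections = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    # Normal approximation to the binomial under the true null (p = 0.5).
    z = (heads - n * 0.5) / (n * 0.25) ** 0.5
    if abs(z) > 1.96:
        rejections += 1         # each rejection here is a Type I error
type_i_rate = rejections / trials   # expected to land close to 0.05
```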

User - When used in the context of environmental programs, an organization, group, or individual that utilizes the results or products from environmental programs. A user also may be the client for whom the results or products were collected or created.

Vadose Zone - In soil, the unsaturated zone, limited above by the ground surface and below by the saturated zone.

Validation - Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use are fulfilled. In design and development, validation concerns the process of examining a product or result to determine conformance to user needs.

Variable - The attribute of the environment that is indeterminant. A quantity which may take any one of a specified set of values.

Variance - A measure of the variability or dispersion in (1) a population (population variance, σ²), or (2) a sample or set of subsamples (sample variance, s²). The variance is the second moment of a frequency distribution taken about the arithmetic mean as the origin. For a normal distribution, it is the sum of the squared deviations of the (population or sample) member observations about the (population or sample) mean divided by the degrees of freedom (N for σ², or n − 1 for s²).
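
The two divisors in the definition correspond to the two variance functions in Python's standard library; a short sketch with hypothetical data:

```python
import statistics

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # hypothetical data, mean = 5

pop_var = statistics.pvariance(values)   # divides by N:     population variance
samp_var = statistics.variance(values)   # divides by n - 1: sample variance
```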
Verification - Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. In design and development, verification concerns the process of examining a result of a given activity to determine conformance to the stated requirements for that activity.
171
APPENDIX
B
SUMMARY
OF
RCRA
REGULATORY
DRIVERS
FOR
CONDUCTING
WASTE
SAMPLING
AND
ANALYSIS
Through
RCRA,
Congress
provided
EPA
with
the
framework
to
develop
regulatory
programs
for
the
management
of
solid
and
hazardous
waste.
The
provisions
of
RCRA
Subtitle
C
establish
the
criteria
for
identifying
hazardous
waste
and
managing
it
from
its
point
of
generation
to
ultimate
disposal.
EPA's
regulations
set
out
in
40
CFR
Parts
260
to
279
are
the
primary
reference
for
information
on
the
hazardous
waste
program.
These
regulations
include
provisions
for
waste
sampling
and
testing
and
environmental
monitoring.
Some
of
these
RCRA
regulations
require
sampling
and
analysis,
while
others
do
not
specify
requirements
and
allow
sampling
and
analysis
to
be
performed
at
the
discretion
of
the
waste
handler
or
as
specified
in
individual
facility
permits.

Table
B­
1
provides
a
comprehensive
listing
of
the
regulatory
citations,
the
applicable
RCRA
standards,
requirements
for
demonstrating
attainment
or
compliance
with
the
standards,
and
relevant
USEPA
guidance
documents.
The
table
is
divided
into
three
major
sections
addressing
regulations
for
(1)
hazardous
waste
identification,
(2)
land
disposal
restrictions,
and
(3)
other
programs.
The
table
is
meant
to
be
used
as
a
general
reference
guide.
Consult
the
latest
40
CFR,
related
Federal
Register
notices,
and
EPA's
World
Wide
Web
site
(www.
epa.
gov)
for
new
or
revised
regulations
and
further
clarification
and
definitive
articulation
of
requirements.
In
addition,
because
some
states
have
requirements
that
differ
from
EPA
regulations
and
guidance,
we
recommend
that
you
consult
with
a
representative
from
your
State
if
your
State
is
authorized
to
implement
the
regulation.
Appendix
B
172
Table
B­
1.
Summary
of
Waste
Analysis
Drivers
for
Major
RCRA
Regulatory
Program
Areas
40
CFR
Citation
and
Description
Applicable
Standards
Requirements
for
Demonstrating
Attainment
of
or
Compliance
With
the
Standards
Relevant
USEPA
Guidance
Waste
Analysis
Drivers
for
the
Hazardous
Waste
Identification
Program
§261.3(
a)(
2)(
v)
­
Used
oil
rebuttable
presumption
(see
also
Part
279,

Subpart
B
and
the
Part
279
standards
for
generators,
transporters,
processors,
re

refiners,
and
burners.)
Used
oil
that
contains
more
than
1,000
parts
per
million
(ppm)
of
total
halogens
is
presumed
to
have
been
mixed
with
a
regulated
halogenated
hazardous
waste
(e.
g.,
spent
halogenated
solvents),
and
is
therefore
subject
to
applicable
hazardous
waste
regulations.
The
rebuttable
presumption
does
not
apply
to
metalworking
oils
and
oils
from
refrigeration
units,
under
some
circumstances.
A
person
may
rebut
this
presumption
by
demonstrating,

through
analysis
or
other
documentation,
that
the
used
oil
has
not
been
mixed
with
halogenated
hazardous
waste.
One
way
of
doing
this
is
to
show
that
the
used
oil
does
not
contain
significant
concentrations
of
halogenated
hazardous
constituents
(50
FR
49176;
November
29,
1985).
If
the
presumption
is
successfully
rebutted,
then
the
used
oil
will
be
subject
to
the
used
oil
management
standards
instead
of
the
hazardous
waste
regulations.
Hazardous
Waste
Management
System;
Identification
and
Listing
of
Hazardous
Waste;
Recycled
Used
Oil
Management
Standards,
57
FR
41566;
September
10,
1992
Part
279
Requirements:
Used
Oil
Management
Standards,

EPA530­
H­
98­
001
§261.3(c)(2)(ii)(C) - Generic exclusion levels for K061, K062, and F006 nonwastewater HTMR residues

Applicable Standards: To be excluded from the definition of hazardous waste, residues must meet the generic exclusion levels specified at §261.3(c)(2)(ii)(C)(1) and exhibit no characteristics of hazardous waste.

Demonstrating Compliance: Testing requirements must be incorporated in a facility's waste analysis plan or a generator's self-implementing waste analysis plan. At a minimum, composite samples of residues must be collected and analyzed quarterly and/or when the process or operation generating the waste changes. The claimant has the burden of proving by clear and convincing evidence that the material meets all of the exclusion requirements.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
§261.21 - Characteristic of Ignitability

Applicable Standards: A solid waste exhibits the characteristic of ignitability if a representative sample of the waste is: (1) A liquid having a flashpoint of less than 140 degrees Fahrenheit (60 degrees Centigrade); (2) A non-liquid which causes fire through friction, absorption of moisture, or spontaneous chemical changes and, when ignited, burns so vigorously and persistently it creates a hazard; (3) An ignitable compressed gas; or (4) An oxidizer. (Aqueous solutions with alcohol content less than 24% are not regulated.)

Demonstrating Compliance: If a representative sample of the waste exhibits the characteristic, then the waste exhibits the characteristic. Appendix I of 40 CFR Part 261 contains references to representative sampling methods; however, a person may employ an alternative method without formally demonstrating equivalency. Also, for those methods specifically prescribed by regulation, the generator can petition the Agency for the use of an alternative method (see 40 CFR 260.21).

Relevant USEPA Guidance: See Chapters Seven and Eight in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
§261.22 - Characteristic of Corrosivity

Applicable Standards: A solid waste exhibits the characteristic of corrosivity if a representative sample of the waste is: (1) Aqueous, with a pH less than or equal to 2, or greater than or equal to 12.5; or (2) Liquid and corrodes steel at a rate greater than 6.35 mm per year when applying a National Association of Corrosion Engineers Standard Test Method.

Demonstrating Compliance: If a representative sample of the waste exhibits the characteristic, then the waste exhibits the characteristic. Appendix I of 40 CFR Part 261 contains references to representative sampling methods; however, a person may employ an alternative method without formally demonstrating equivalency. Also, for those methods specifically prescribed by regulation, the generator can petition the Agency for the use of an alternative method (see 40 CFR 260.21).

Relevant USEPA Guidance: See Chapters Seven and Eight in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
§261.23 - Characteristic of Reactivity

Applicable Standards: A solid waste exhibits the characteristic of reactivity if a representative sample of the waste: (1) Is normally unstable and readily undergoes violent change; (2) Reacts violently with water; (3) Forms potentially explosive mixtures with water; (4) Generates toxic gases, vapors, or fumes when mixed with water; (5) Is a cyanide- or sulfide-bearing waste which, when exposed to pH conditions between 2 and 12.5, can generate toxic gases, vapors, or fumes; (6) Is capable of detonation or explosion if subjected to a strong initiating source or if heated under confinement; (7) Is readily capable of detonation or explosive decomposition or reaction at standard temperature and pressure; or (8) Is a forbidden explosive as defined by DOT.

Demonstrating Compliance: EPA relies on these narrative criteria to define reactive wastes. Waste handlers should use their knowledge to determine if a waste is sufficiently reactive to be regulated. Also, for those methods specifically prescribed by regulation, the generator can petition the Agency for the use of an alternative method (see 40 CFR 260.21).

Relevant USEPA Guidance: EPA currently relies on narrative standards to define reactive wastes, and withdrew interim guidance related to sulfide and cyanide levels (see the Memorandum entitled "Withdrawal of Cyanide and Sulfide Reactivity Guidance" from David Bussard and Barnes Johnson to Diana Love, dated April 21, 1998).
§261.24 - Toxicity Characteristic

Applicable Standards: A solid waste exhibits the characteristic of toxicity if the extract of a representative sample of the waste contains any of the contaminants listed in Table 1 in §261.24 at or above the specified regulatory levels. The extract should be obtained through use of the Toxicity Characteristic Leaching Procedure (TCLP). If the waste contains less than 0.5 percent filterable solids, the waste itself, after filtering, is considered to be the extract.

Demonstrating Compliance: Appendix I of 40 CFR Part 261 contains references to representative sampling methods; however, a person may employ an alternative method without formally demonstrating equivalency.

Relevant USEPA Guidance: See Chapters Seven and Eight in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
§261.38(c)(8)(iii)(A) - Exclusion of Comparable Fuels from the Definition of Solid and Hazardous Waste

Applicable Standards: For each waste for which an exclusion is claimed, the generator of the hazardous waste must test for all of the constituents on Appendix VIII to Part 261, except those that the generator determines, based on testing or knowledge, should not be present in the waste. The generator is required to document the basis for each determination that a constituent should not be present.

Demonstrating Compliance: For waste to be eligible for exclusion, a generator must demonstrate that "each constituent of concern is not present in the waste above the specification level at the 95% upper confidence limit around the mean."

Relevant USEPA Guidance: See the final rule from June 19, 1998 (63 FR 33781). For further information on the comparable fuels exclusion, see the following web site: http://www.epa.gov/combustion/fastrack/
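The comparable fuels demonstration turns on a one-sided 95% upper confidence limit (UCL) around the mean concentration. As a rough illustration only, the sketch below computes a 95% UCL under a normality assumption with a hard-coded Student's t critical value; the concentration values are hypothetical, and the appropriate UCL method in practice depends on the distribution of the data:

```python
import math
from statistics import mean, stdev

def ucl95(samples, t_crit):
    """One-sided 95% upper confidence limit on the mean:
    UCL = xbar + t * s / sqrt(n), with t the critical value for n-1 df."""
    n = len(samples)
    return mean(samples) + t_crit * stdev(samples) / math.sqrt(n)

# Hypothetical concentrations (mg/kg) from 4 samples; t(0.95, df=3) = 2.353
conc = [12.0, 15.0, 11.0, 14.0]
print(round(ucl95(conc, 2.353), 2))  # -> 15.15
```

If the 95% UCL for each constituent of concern falls below its specification level, the demonstration quoted above is satisfied; otherwise it is not.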

Part 261, Appendix I - Representative Sampling Methods

Applicable Standards: Provides sampling protocols for obtaining a representative sample.

Demonstrating Compliance: For the purposes of Subpart C, a sample obtained using Appendix I sampling methods will be considered representative. The Appendix I methods, however, are not formally adopted (see comment at §261.20(c)).

Relevant USEPA Guidance: Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a); ASTM Standards
Waste Analysis Drivers for the Land Disposal Restriction Program
§268.6(b)(1) - Petitions to Allow Land Disposal of a Waste Prohibited Under Subpart C of Part 268 (No-Migration Petition)

Applicable Standards: The demonstration must meet the following criteria: (1) All waste and environmental sampling, test, and analysis data must be accurate and reproducible to the extent that state-of-the-art techniques allow; (2) All sampling, testing, and estimation techniques for chemical and physical properties of the waste and all environmental parameters must have been approved by the EPA Administrator.

Demonstrating Compliance:
- Waste analysis requirements will be specific to the petition.
- Sampling methods are specified in the facility's Waste Analysis Plan.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a); Land Disposal Restrictions No Migration Variances; Proposed Rule. Federal Register, August 11, 1992 (USEPA 1992)
§268.40 - Land Disposal Restriction (LDR) concentration-level standards

Applicable Standards: For total waste standards, all hazardous constituents in the waste or in the treatment residue must be at or below the values in the table at §268.40. For waste extract standards, the hazardous constituents in the extract of the waste or in the extract of the treatment residue must be at or below the values in the table at §268.40.

Demonstrating Compliance:
- Sampling methods are specified in the facility's Waste Analysis Plan.
- Compliance with the standards for nonwastewater is measured by an analysis of grab samples. Compliance with wastewater standards is based on composite samples. No single sample may exceed the applicable standard.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
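The nonwastewater compliance test above reduces to an every-sample check: each grab sample result must be at or below the table value, with no averaging. A minimal sketch (the constituent concentrations and the standard are hypothetical):

```python
def meets_ldr_standard(grab_samples, standard):
    """§268.40 nonwastewater check: compliance is measured on grab samples,
    and no single sample may exceed the applicable standard."""
    return all(conc <= standard for conc in grab_samples)

# Hypothetical grab-sample results (mg/L TCLP) against a standard of 1.2 mg/L
print(meets_ldr_standard([0.8, 1.1, 0.9], 1.2))  # -> True
print(meets_ldr_standard([0.8, 1.3, 0.9], 1.2))  # -> False (one exceedance fails)
```

Wastewater standards, by contrast, are judged on composite samples, so the per-sample check applies to each composite result rather than to individual grabs.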
§268.44 - Land Disposal Restriction Treatability Variance

Applicable Standards: If you are a generator or treatment facility whose wastes cannot be treated to achieve the established treatment standards, or for which treatment standards are not appropriate, you may petition EPA for a variance from the treatment standard. A treatment variance does not exempt your wastes from treatment, but rather establishes an alternative LDR treatment standard.

Demonstrating Compliance: The application must demonstrate that the treatment standard for the waste in question is either "unachievable" or "inappropriate."

Relevant USEPA Guidance: Memorandum entitled "Use of Site-Specific Land Disposal Restriction Treatability Variances Under 40 CFR 268.44(h) During Cleanups" (available from the RCRA Call Center or on EPA's web site at http://www.epa.gov/epaoswer/hazwaste/ldr/tv-rule/guidmem.txt); Variance Assistance Document: Land Disposal Restrictions Treatability Variances & Determinations of Equivalent Treatment (available from the RCRA Call Center or on EPA's web site at http://www.epa.gov/epaoswer/hazwaste/ldr/guidance2.pdf)
§268.49(c)(1) - Alternative LDR Treatment Standards for Contaminated Soil

Applicable Standards: All constituents subject to treatment must be treated as follows: (A) For non-metals, treatment must achieve a 90 percent reduction in total constituent concentrations, except where treatment results in concentrations less than 10 times the Universal Treatment Standard (UTS) at §268.48. (B) For metals, treatment must achieve a 90 percent reduction in constituent concentrations as measured in TCLP leachate from the treated media, or a 90 percent reduction in total concentrations when a metal removal technology is used, except where treatment results in concentrations less than 10 times the UTS at §268.48.

Demonstrating Compliance: Sampling methods are specified in the facility's Waste Analysis Plan.

Relevant USEPA Guidance: Guidance on Demonstrating Compliance With the Land Disposal Restrictions (LDR) Alternative Soil Treatment Standards (USEPA 2002); Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
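The "90 percent reduction, capped at 10 times UTS" rule above lends itself to a simple calculation of the level a treated soil must reach. The sketch below is illustrative only: the concentrations and the UTS value are hypothetical, and actual UTS values come from the table at §268.48:

```python
def alternative_soil_standard(initial_conc, uts):
    """Alternative LDR soil treatment level under the 90%/10xUTS rule:
    treat to a 90 percent reduction, except that treatment need not
    drive concentrations below 10 times the UTS."""
    ninety_percent_target = 0.10 * initial_conc  # 90% reduction target
    floor = 10.0 * uts                           # 10x UTS cap on required treatment
    # Required level is the higher of the two (no obligation to treat below
    # 10x UTS), but never higher than the starting concentration.
    return min(initial_conc, max(ninety_percent_target, floor))

# Hypothetical constituent at 500 mg/kg with a (hypothetical) UTS of 10 mg/kg
print(alternative_soil_standard(500.0, 10.0))            # -> 100.0 (10x UTS governs)
print(round(alternative_soil_standard(5000.0, 10.0), 6)) # -> 500.0 (90% reduction governs)
```

At low starting concentrations the 10-times-UTS cap governs; at high starting concentrations the 90 percent reduction governs.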
Waste Analysis Drivers in Other RCRA Regulations
§260.10 - Definitions

Applicable Standards: "Representative sample" means a sample of a universe or whole (e.g., waste pile, lagoon, ground water) which can be expected to exhibit the average properties of the universe or whole. Representative samples may be required to measure compliance with various provisions within the RCRA regulations.

Demonstrating Compliance: See requirements specified in the applicable regulation or implementation guidance.

Relevant USEPA Guidance: Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
Part 260 - Subpart C - Rulemaking Petitions

Applicable Standards: In the section for petitions to amend Part 261 to "delist" a hazardous waste, the petitioner must demonstrate that the waste does not meet any of the criteria under which the waste was listed as a hazardous waste (§260.22).

Demonstrating Compliance: Demonstration samples must consist of enough representative samples, but in no case less than four samples, taken over a period of time sufficient to represent the variability or the uniformity of the waste.

Relevant USEPA Guidance: Petitions to Delist Hazardous Waste - A Guidance Manual, 2nd ed. (USEPA 1993d); Region 6 RCRA Delisting Program Guidance Manual for the Petitioner (USEPA 1996d)
Part 262 - Subpart A - Purpose, Scope, and Applicability (including §262.11 - Hazardous Waste Determination)

Applicable Standards: Generators must make the following determinations if a secondary material is a solid waste: 1) whether the solid waste is excluded from regulation; 2) whether the waste is a listed waste; and 3) whether the waste is a characteristic waste (§262.11).

Demonstrating Compliance: Generators must document their waste determination and land disposal restriction determination.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
Part 262 - Subpart C - Pre-Transport Requirements

Applicable Standards: Under §262.34(a)(4), if generators are performing treatment within their accumulation units, they must comply with the waste analysis plan requirements of §268.7(a)(5).

Demonstrating Compliance: Generators must develop a waste analysis plan (kept on-site for three years) which details the treatment they are performing to meet LDR treatment standards and the type of analysis they are performing to show completion of treatment.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
Part 264 - Subpart A - Purpose, Scope, and Applicability

Applicable Standards: §264.1(j)(2) - In an exemption established by the HWIR-media rulemaking, remediation waste can be exempt under circumstances that require chemical and physical analysis of a representative sample of the hazardous remediation waste to be managed at the site.

Demonstrating Compliance: The analysis, at a minimum, must contain all the information needed to treat, store, or dispose of the waste according to Part 264 and Part 268. The waste analysis must be accurate and up-to-date.

Relevant USEPA Guidance: See the final Federal Register notice from November 30, 1998 (63 FR 65873). For further documentation, see the following web site: http://www.epa.gov/epaoswer/hazwaste/id/hwirmdia.htm
Parts 264/265 - Subpart B - General Facility Standards

Applicable Standards: §264/265.13 - General waste analysis requirements specify: (a) Detailed chemical and physical analysis of a representative sample is required before an owner treats, stores, or disposes of any hazardous waste. Sampling methods may be those under Part 261; and (b) Owner/operator must develop and follow a written waste analysis plan.

Demonstrating Compliance: All requirements are case-by-case and are determined in the facility permit.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
Part 264 - Subpart F - Groundwater Monitoring

Applicable Standards: Groundwater monitoring wells must be properly installed so that samples will yield representative results. All monitoring wells must be lined, or cased, in a manner that maintains the integrity of the monitoring well bore hole (§264.97(c)). Poorly installed wells may give false results. There are specific monitoring standards for all three sub-programs:
- Detection Monitoring (§264.98);
- Compliance Monitoring (§264.99); and
- Corrective Action Program (§264.100).
The Corrective Action Program is specific to the Groundwater Monitoring Program.

Demonstrating Compliance: At a minimum, there must be procedures and techniques for sample collection, sample preservation and shipment, analytical procedures, and chain-of-custody control (§264.97(d)). Sampling and analytical methods must be appropriate for groundwater sampling and accurately measure the hazardous constituents being analyzed. The owner and operator must develop an appropriate sampling procedure and interval for each hazardous constituent identified in the facility's permit. The owner and operator may use an alternate procedure if approved by the RA. Requirements and procedures for obtaining and analyzing samples are detailed in the facility permit, usually in a Sampling and Analysis Plan.

Relevant USEPA Guidance: Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities (Interim Final Guidance). Office of Solid Waste (USEPA 1989b); RCRA Ground-Water Monitoring: Draft Technical Guidance (USEPA 1992c); Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities, Addendum to Interim Final Guidance (USEPA 1992b); Methods for Evaluating the Attainment of Cleanup Standards. Volume 2: Ground Water (USEPA 1992i)
Part 265 - Subpart F - Ground-water Monitoring

Applicable Standards: To comply with Part 265, Subpart F, the owner/operator must install, operate, and maintain a ground-water monitoring system capable of representing the background groundwater quality and detecting any hazardous constituents that have migrated from the waste management area to the uppermost aquifer. Under Part 265, Subpart F, there are two types of groundwater monitoring programs: an indicator evaluation program designed to detect the presence of a release, and a ground-water quality assessment program that evaluates the nature and extent of contamination.

Demonstrating Compliance: To determine existing ground-water conditions at an interim status facility, the owner and operator must install at least one well hydraulically upgradient from the waste management area. The well(s) must be able to accurately represent the background quality of ground water in the uppermost aquifer. The owner and operator must install at least three wells hydraulically downgradient at the limit of the waste management area, which are able to immediately detect any statistically significant evidence of a release. A separate monitoring system for each management unit is not required as long as the criteria in §265.91(a) are met and the system is able to detect any release at the edge of the waste management area.

Relevant USEPA Guidance: Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities (Interim Final Guidance). Office of Solid Waste (USEPA 1989b); RCRA Ground-Water Monitoring: Draft Technical Guidance (USEPA 1992c); Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities, Addendum to Interim Final Guidance (USEPA 1992b)
Part 264/265 - Subpart G - Closure and Post-Closure

Applicable Standards: The closure plan must include a detailed description of the steps for sampling and testing surrounding soils and criteria for determining the extent of decontamination required to satisfy the closure performance standards (§264/265.112(b)(4)).

Demonstrating Compliance: All requirements are facility-specific and are set forth in the facility permit.

Relevant USEPA Guidance: Closure/Postclosure Interim Status Standards (40 CFR 265, Subpart G): Standards Applicable to Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities Under RCRA, Subtitle C, Section 3004; RCRA Guidance Manual for Subpart G Closure and Postclosure Care Standards and Subpart H Cost Estimating Requirements (USEPA 1987)
Part 264 - Subpart I - Use and Management of Containers

Applicable Standards: Spilled or leaked waste and accumulated precipitation must be removed from the sump or collection area in as timely a manner as is necessary to prevent overflow of the collection system (§264.175). If the collected material is a hazardous waste under Part 261 of this chapter, it must be managed as a hazardous waste in accordance with all applicable requirements of Parts 262 through 266 of the chapter. If the collected material is discharged through a point source to waters of the United States, it is subject to the requirements of section 402 of the Clean Water Act, as amended.

Demonstrating Compliance: Testing scope and requirements are site-specific and are set forth in the facility waste analysis plan.

Relevant USEPA Guidance: Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a); Guidance for Permit Writers: Facilities Storing Hazardous Waste in Containers, 11/2/82, PB88-105 689; Model RCRA Permit for Hazardous Waste Management Facilities, 9/15/88, EPA530-SW-90-049
Parts 264/265 - Subpart J - Tank Systems

Applicable Standards: Demonstrate the absence or presence of free liquids in the stored/treated waste using EPA Method 9095 (Paint Filter Liquids Test) of SW-846 (§§264/265.196).

Demonstrating Compliance: The Paint Filter Liquids Test is a positive or negative test.

Relevant USEPA Guidance: Method 9095 of Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
Part 264/265 - Subpart M - Land Treatment

Applicable Standards: To demonstrate adequate treatment (treatment demonstration), the permittee must meet testing, analytical, design, and operating requirements (§264.272). Demonstration that food-chain crops can be grown on a treatment unit can include sample collection with criteria for sample selection, sample size, analytical methods, and statistical procedures (§264/265.276). The owner/operator must collect pore-water samples and determine if there has been a statistically significant change over background using procedures specified in the permit (§264/265.278). During the post-closure period, the owner may conduct pore-water and soil sampling to determine if there has been a statistically significant change in the concentration of hazardous constituents (§264/265.280).

Demonstrating Compliance: All requirements are facility-specific and are set forth in the facility permit.

Relevant USEPA Guidance: See Chapter Twelve in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a); Guidance Manual on Hazardous Waste Land Treatment Closure/Postclosure (40 CFR Part 265), 4/14/87, PB87-183 695; Hazardous Waste Land Treatment, 4/15/83, SW-874; Permit Applicants' Guidance Manual for Hazardous Waste Land Treatment, Storage, and Disposal Facilities; Final Draft, 5/15/84, EPA530-SW-84-004; Permit Guidance Manual on Hazardous Waste Land Treatment Demonstrations, 7/15/86, EPA530-SW-86-032; Permit Guidance Manual on Unsaturated Zone Monitoring for Hazardous Waste Land Treatment Units, 10/15/86, EPA530-SW-86-040
Part 264 - Subpart O - Incinerators

Applicable Standards: There are waste analysis requirements to verify that waste fed to the incinerator is within the physical and chemical composition limits specified in the permit (§§264/265.341). The owner/operator must conduct sampling and analysis of the waste and exhaust emissions to verify that the operating requirements established in the permit achieve the performance standards of §264.343 (§§264/265.347).

Demonstrating Compliance: All requirements are facility-specific and are set forth in the facility permit.

Relevant USEPA Guidance: See Chapter Thirteen in Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a)
Corrective Action for Solid Waste Management Units

Applicable Standards: EPA includes corrective action in permits through the following statutory citations:
- Section 3008(h) - provides authority to require corrective action at interim status facilities
- Section 3004(u) - requires corrective action be addressed as a condition of a facility's Part B permit
- Section 3004(v) - provides authority to require corrective action for releases migrating beyond the facility boundary
- Section 3005(c)(3) - provides authority to include additional requirements in a facility's permit, including corrective action requirements
- Section 7003 - gives EPA authority to take action when contamination presents an imminent hazard to human health or the environment

Demonstrating Compliance: Often the first activity in the corrective action process is the RCRA Facility Assessment (RFA), which identifies potential and actual releases from solid waste management units (SWMUs) and makes preliminary determinations about releases, the need for corrective action, and interim measures. Another activity in the corrective action process is the RCRA Facility Investigation (RFI), which takes place when a release has been identified and further investigation is necessary. The purpose of the RFI is to gather enough data to fully characterize the nature, extent, and rate of migration of contaminants to determine the appropriate response action. Once the implementing agency has selected a remedy, the facility enters the Corrective Measures Implementation (CMI) phase, in which the owner and operator of the facility implement the chosen remedy. Corrective action may include various sampling and monitoring requirements.

Relevant USEPA Guidance: There is a substantial body of guidance and publications related to RCRA corrective action. See the following link for further information: http://www.epa.gov/epaoswer/hazwaste/ca/resource.htm
§264.552 - Corrective Action Management Units

Applicable Standards: There are ground-water monitoring, closure, and post-closure requirements for CAMUs.

Demonstrating Compliance: All requirements are case-by-case and are determined in the facility permit.

Relevant USEPA Guidance: There are numerous guidance documents available. See the following link for further information: http://www.epa.gov/epaoswer/hazwaste/ca/resource.htm
Parts 264/265 - Subpart AA - Air Emission Standards

Applicable Standards: The following types of units are subject to the Subpart AA process vent standards:
- Units subject to the permitting standards of Part 270 (i.e., permitted or interim status)
- Recycling units located at hazardous waste management facilities otherwise subject to the permitting standards of Part 270 (i.e., independent of the recycling unit, the facility has a RCRA permit or is in interim status)
- Less-than-90-day large quantity generator units

Demonstrating Compliance: Testing and statistical methods are specified in the regulations at §264.1034(b).

Relevant USEPA Guidance: The primary source of guidance is the regulations. See also the final rulemakings that promulgated the regulations: June 21, 1990 (55 FR 25494); November 25, 1996 (62 FR 52641); June 13, 1997 (62 FR 32462)
Parts 264/265 - Subpart BB - Air Emission Standards

Applicable Standards: The following types of units are subject to the Subpart BB equipment leak standards:
- Units subject to the permitting standards of Part 270 (i.e., permitted or interim status)
- Recycling units located at hazardous waste management facilities otherwise subject to the permitting standards of Part 270 (i.e., independent of the recycling unit, the facility already has a RCRA permit or is in interim status)
- Less-than-90-day large quantity generator units
The standards specify the type and frequency of all inspection and monitoring activities required. These requirements vary depending on the piece of equipment at the facility.

Demonstrating Compliance: Testing and statistical methods are specified in the regulations at §264.1063(c).

Relevant USEPA Guidance: The primary source of guidance is the regulations. See also the final rulemakings that promulgated the regulations: June 21, 1990 (55 FR 25494); June 13, 1997 (62 FR 32462)
§266.112 - Regulation of Residues

Applicable Standards: A residue from the burning or processing of hazardous waste may be exempt from a hazardous waste determination if the waste-derived residue is either: substantially similar to normal residue, or below specific health-based levels for both metal and nonmetal constituents.

Demonstrating Compliance: Concentrations must be determined based on analysis of one or more samples obtained over a 24-hour period. Multiple samples may be analyzed, and composite samples may be used, provided the sampling period does not exceed 24 hours. If more than one sample is analyzed to represent the 24-hour period, the concentration shall be the arithmetic mean of the concentrations in the samples. The regulations under §266.112 have specific sampling and analysis requirements.

Relevant USEPA Guidance: Part 266, Appendix IX
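When more than one sample represents the 24-hour period under §266.112, the concentration of record is simply the arithmetic mean of the individual results. A minimal sketch (the sample values are hypothetical):

```python
def residue_concentration(sample_concs):
    """Concentration for a 24-hour period under §266.112: the arithmetic
    mean of the concentrations measured in the individual samples."""
    return sum(sample_concs) / len(sample_concs)

# Hypothetical: three samples of a constituent (mg/kg) taken over 24 hours
print(residue_concentration([8.0, 10.0, 12.0]))  # -> 10.0
```

The resulting mean is what gets compared against the health-based exemption levels for the residue.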
Part 270 - Subpart B - Permit Application, Hazardous Waste Permitting

Applicable Standards: Provides the corresponding permit requirement to the general requirements (including the requirement for a waste analysis plan) under §270.14. There are also unit-specific waste analysis, monitoring, and sampling requirements for incinerators (§270.19) and boilers and industrial furnaces (§270.22). There are also specific requirements for dioxin listings handled in waste piles (§270.18) and landfills (§270.21).

Demonstrating Compliance: The permittee must conduct appropriate sampling procedures, and retain results of all monitoring. All requirements are facility-specific and are set forth in the permit and waste analysis plan.

Relevant USEPA Guidance: Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. (USEPA 1986a); Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual, EPA530-R-94-024 (USEPA 1994a)
Part
270
­
Subpart
C
­
Conditions
Applicable
to
All
Permits
Under
§270.30,
there
are
specific
requirements
for
monitoring
and
recordkeeping.
Section 270.31
requires
monitoring
to
be
detailed
in
the
permit.
The
permittee
must
conduct
appropriate
sampling
procedures,

and
retain
results
of
all
monitoring.

All
requirements
are
facility
specific
and
are
set
forth
in
the
permit
and
waste
analysis
plan.
Test
Methods
for
Evaluating
Solid
Waste,
Physical/
Chemical
Methods,

Updates
I,
II,
IIA,
IIB,
III,
and
IIIA.

SW­
846.
(USEPA
1986a)

Waste
Analysis
at
Facilities
That
Generate,
Treat,
Store,
and
Dispose
of
Hazardous
Wastes,
a
Guidance
Manual,
EPA530­
R­
94­

024
(USEPA
1994a)

Part
270
­
Subpart
F
­
Special
Forms
of
Permits
Specifies
sampling
and
monitoring
requirements
based
on
trial
burns
for
incinerators
(§
270.62)
and
Boiler
and
Industrial
Furnaces
(§
270.66).
Waste
analysis
and
sampling
requirements
are
site
specific
and
set
forth
in
each
facility's
waste
analysis
plan
required
under
264.13.
Test
Methods
for
Evaluating
Solid
Waste,
Physical/
Chemical
Methods,

Updates
I,
II,
IIA,
IIB,
III,
and
IIIA.

SW­
846.
(USEPA
1986a)

Waste
Analysis
at
Facilities
That
Generate,
Treat,
Store,
and
Dispose
of
Hazardous
Wastes,
a
Guidance
Manual,
EPA530­
R­
94­

024
(USEPA
1994a)

Part
273
­
Universal
Wastes
Handlers
and
transporters
of
universal
wastes
must
determine
if
any
material
resulting
from
a
release
is
a
hazardous
waste.

(§
273.17(
b)
for
small
quantity
handlers,
§273.37(
b)
for
large
quantity
handlers,
and
§273.54
for
transporters
of
universal
wastes)

Also,
if
certain
universal
wastes
are
dismantled,
such
as
batteries
or
thermostats,
in
certain
cases
the
resulting
materials
must
be
characterized
for
hazardous
waste
purposes.
(§§
273.13(
a)(
3)
and
(c)(
3)(
i))
Sampling
and
analysis
requirements
are
identical
to
hazardous
waste
identification
requirements.
Test
Methods
for
Evaluating
Solid
Waste,
Physical/
Chemical
Methods,

Updates
I,
II,
IIA,
IIB,
III,
and
IIIA.

SW­
846.
(USEPA
1986a)

Universal
Waste
Final
Rule,
60
FR
25492;
May
11,
1995
Final
rule
adding
Fluorescent
Lamps,
64
FR
36465;
July
6,
1999

Part
279
­
Standards
for
the
Management
of
Used
Oil
Specifies
sampling
and
analysis
procedures
for
owners
or
operators
of
used­
oil
processing
and
re-refining
facilities.
Under
§279.55,
owners
or
operators
of
used
oil
processing
and
re­
refining
facilities
must
develop
and
follow
a
written
analysis
plan
describing
the
procedures
that
will
be
used
to
comply
with
the
analysis
requirements
of
§279.53
and/
or
§279.72.
The
plan
must
be
kept
at
the
facility.
Sampling:
Part
261,
Appendix
I
Hazardous
Waste
Management
System;
Identification
and
Listing
of
Hazardous
Waste;
Recycled
Used
Oil
Management
Standards,
57
FR
41566,
September
10,
1992
Part
279
Requirements:
Used
Oil
Management
Standards,

EPA530­
H­
98­
001
APPENDIX
C
STRATEGIES
FOR
SAMPLING
HETEROGENEOUS
WASTES
C.
1
Introduction
"Heterogeneous
wastes"
include
structures,
demolition
debris,
waste­
construction
materials,
containers
(e.
g.,
drums,
tanks,
and
paint
cans),
solid
waste
from
laboratories
and
manufacturing
processes,
and
post­
consumer
wastes
(e.
g.,
electronics
components,
battery
casings,
and
shredded
automobiles)
(USEPA
and
USDOE
1992).
Heterogeneous
wastes
can
pose
challenges
in
the
development
and
implementation
of
a
sampling
program
due
to
the
physical
variety
in
size,
shape,
and
composition
of
the
material
and
the
lack
of
tools
and
approaches
for
sampling
heterogeneous
waste.
The
application
of
conventional
sampling
approaches
to
heterogeneous
waste
is
difficult
and
may
not
provide
a
representative
sample.

To
develop
a
sampling
strategy
for
heterogeneous
waste,
it
is
first
important
to
understand
the
scale,
type,
and
magnitude
of
the
heterogeneity.
This
appendix
provides
an
overview
of
large-scale
heterogeneity
and
provides
some
strategies
that
can
be
used
to
obtain
samples
of
heterogeneous
wastes.
See
also
Section
6.2.1
for
a
description
of
other
types
of
heterogeneity
including
short-range
(small­
scale)
heterogeneity
(which
includes
distribution
and
constitution
heterogeneity).

Additional
guidance
on
sampling
heterogeneous
waste
can
be
found
in
the
following
documents:

°
Characterizing
Heterogeneous
Wastes:
Methods
and
Recommendations
(USEPA
and
USDOE
1992)

°
Standard
Guide
for
Sampling
Strategies
for
Heterogeneous
Waste
(ASTM
D
5956­
96)

°
Pierre
Gy's
Sampling
Theory
and
Sampling
Practice:
Heterogeneity,
Sampling
Correctness,
and
Statistical
Process
Control.
2nd ed.
(Chapter
21)
(Pitard
1993),
and
°
Geostatistical
Error
Management:
Quantifying
Uncertainty
for
Environmental
Sampling
and
Mapping
(Myers
1997).

C.
2
Types
of
Large­
Scale
Heterogeneity
The
notion
of
heterogeneity
is
related
to
the
scale
of
observation.
An
example
given
by
Pitard
(1993)
and
Myers
(1997)
is
that
of
a
pile
of
sand.
From
a
distance
of
a
few
feet,
a
pile
of
sand
appears
to
be
uniform
and
homogeneous;
however,
at
close
range
under
magnification
a
pile
of
sand
is
heterogeneous.
Substantial
differences
are
found
between
the
individual
grains
in
their
sizes,
shapes,
colors,
densities,
hardness,
mineral
composition,
etc.
For
some
materials,
the
differences
between
individual
grains
or
items
are
not
measurable
or
are
not
significant
relative
to
the
project
objectives.
In
such
a
case,
the
degree
of
heterogeneity
is
so
minor
that
for
practical
purposes
the
material
can
be
considered
homogeneous.
The
Standard
Guide
for
Sampling
Strategies
for
Heterogeneous
Waste
(ASTM
D
5956­
96)
refers
to
this
condition
as
"practical
homogeneity,"
but
recognizes
that
true
homogeneity
does
not
exist.

At
a
larger
scale,
such
as
an
entire
waste
site,
long­
range
(or
large­
scale)
nonrandom
heterogeneity
is
of
interest.
Large­
scale
heterogeneity
reflects
local
trends
and
plays
an
important
role
in
deciding
whether
to
use
a
geostatistical
appraisal
to
identify
spatial
patterns
at
the
site,
to
use
stratified
sampling
design
to
estimate
a
parameter
(such
as
the
overall
mean),
or
to
define
the
boundaries
of
the
sampling
problem
so
that
it
comprises
two
or
more
decision
units
that
are
each
internally
relatively
homogeneous.

Items,
particles,
or
phases
within
a
waste
or
site
can
be
distributed
in
various
ways
to
create
distinctly
different
types
of
heterogeneity.
These
types
of
heterogeneity
include:

°
Random
heterogeneity
–
occurs
when
dissimilar
items
are
randomly
distributed
throughout
the
population.

°
Non­
random
heterogeneity
–
occurs
when
dissimilar
items
are
nonrandomly
distributed,
resulting
in
the
generation
of
strata.
The
term
strata
refers
to
subgroups
of
a
population
separated
in
space,
in
time,
or
by
component
from
the
remainder
of
the
population.
Strata
are
internally
consistent
with
respect
to
a
target
constituent
or
a
property
of
interest
and
are
different
from
adjacent
portions
of
the
population.

The
differences
between
items
or
particles
that
result
in
heterogeneity
are
due
to
differences
in
their
composition
or
properties.
One
of
these
properties
–
particle
size
–
deserves
special
consideration
because
significant
differences
in
particle
size
are
common
and
can
complicate
sampling
due
to
the
fundamental
error.
Fundamental
error
can
be
reduced
only
through
particle­
size
reduction
or
the
collection
of
sufficiently
large
samples.
(Section
6
describes
the
impacts
that
fundamental
error
and
particle
size
can
have
on
sampling
error.)

Figure
C­
1
depicts
populations
exhibiting
the
three
types
of
heterogeneity
described
in
ASTM
D
5956­
96
Standard
Guide
for
Sampling
Strategies
for
Heterogeneous
Waste:
(1)
homogeneous,
(2)
randomly
heterogeneous,
and (3) nonrandomly
heterogeneous
populations.
The
drum­
like
populations
portray
different
types
of
spatial
distributions
while
the
populations
being
discharged
through
the
pipes
represent
different
types
of
temporal
distributions.

In
the
first
scenario,
very
little
spatial
or
temporal
variation
is
found
between
the
identical
particles
of
the
"homogeneous"
population;
however,
in
the
second
scenario,
spatial
and
temporal
variations
are
present
due
to
the
difference
between
the
composition
of
the
particles
or
items
that
make
up
the
waste.
ASTM
D
5956­
96
refers
to
this
as
a
"randomly
heterogeneous"
population.
In
the
third
scenario,
the
overall
composition
of
the
particles
or
items
remain
the
same
as
in
the
second
scenario,
but
the
two
different
components
have
segregated
into
distinct
strata
(e.
g.,
due
to
gravity),
with
each
stratum
being
internally
homogeneous.
ASTM
D
5956­
96
refers
to
waste
with
this
characteristic
as
"non­
randomly
heterogeneous."

C.
3
Magnitude
of
Heterogeneity
The
magnitude
of
heterogeneity
is
the
degree
to
which
there
are
differences
in
the
characteristic
of
interest
between
fragments,
particles,
or
volumes
within
the
population.
The
magnitude
of
heterogeneity
can
range
from
that
of
a
population
whose
items
are
so
similar
that
it
is
practically
Figure
C­
1.
Different
types
of
spatial
and
temporal
heterogeneity.

homogeneous
to
a
population
whose
items
are
all
dissimilar.
Statistical
measures
of
dispersion,
the
variance
and
standard
deviation,
are
useful
indicators
of
the
degree
of
heterogeneity
within
a
waste
or
waste
site
(assuming
sampling
error
is
not
a
significant
contributor
to
the
variance, an optimistic assumption).

If
the
waste
exhibits
nonrandom
heterogeneity
and
a
high
magnitude
of
heterogeneity,
then
consider
segregating
(e.
g.,
at
the
point
of
generation)
and
managing
the
waste
as
two
or
more
separate
decision
units
(if
physically
possible
and
allowed
by
regulations).
This
approach
will
require
prior
knowledge
(for
example,
from
a
pilot
study)
of
the
portions
of
the
waste
that
fall
into
each
specified
category
(such
as
hazardous
debris
and
nonhazardous
debris).

C.
4
Sampling
Designs
for
Heterogeneous
Wastes
The
choice
of
a
sampling
design
to
characterize
heterogeneous
waste
will
depend
upon
the
regulatory
objective
of
the
study
(e.
g.,
waste
identification
or
classification,
site
characterization,
etc.),
the
data
quality
objectives,
the
type
and
magnitude
of
the
heterogeneity,
and
practical
considerations
such
as
access
to
all
portions
of
the
waste,
safety,
and
the
availability
of
equipment
suitable
for
obtaining
and
preparing
samples.

As
described
in
Section
5
of
this
document,
there
are
two
general
categories
of
sampling
designs:
probability
sampling
design
and
authoritative
(nonprobability)
sampling
designs.
Probability
sampling
refers
to
sampling
designs
in
which
all
parts
of
the
waste
or
media
under
study
have
a
known
probability
of
being
included
in
the
sample.
This
assumption
may
be
difficult
to
support
when
sampling
highly
heterogeneous
materials
such
as
construction
debris.
All
parts
of
a
highly
heterogeneous
waste
may
not
be
accessible
by
conventional
sampling
tools,
limiting
the
ability
to
introduce
some
form
of
randomness
into
the
sampling
design.

Random
Heterogeneous
Waste:
For
random
heterogeneous
waste,
a
probability
sampling
design
such
as
simple
random
or
systematic
sampling
can
be
used.
At
least
one
of
two
sample
collection
strategies,
however,
also
should
be
used
to
improve
the
precision
(reproducibility)
of
the
sampling
design:
(1)
take
very
large
individual
samples
(to
increase
the
sample
support),
or
(2)
take
many
increments
to
form
each
individual
sample
(i.
e.,
use
composite
sampling).
The
concept
of
sample
support
is
described
in
Section
6.2.3.
Composite
sampling
is
discussed
in
Section
5.3.
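The benefit of the second strategy (forming each sample from many increments) can be seen in a small simulation. This sketch is illustrative only; the two-component mixture and its concentrations are hypothetical, not drawn from the guidance:

```python
# Illustration (hypothetical values): averaging k random increments shrinks
# the spread of a composite sample roughly by a factor of sqrt(k).
import random

random.seed(1)

def composite(k):
    """Mean of k increments drawn from a two-component random mixture:
    10% 'hot' particles at 500 mg/kg, 90% at 5 mg/kg (invented numbers)."""
    return sum(500.0 if random.random() < 0.1 else 5.0 for _ in range(k)) / k

def spread(k, n=2000):
    """Standard deviation among n repeated k-increment composites."""
    results = [composite(k) for _ in range(n)]
    mean = sum(results) / n
    return (sum((r - mean) ** 2 for r in results) / n) ** 0.5

# Single-increment samples scatter far more than 50-increment composites.
print(spread(1) > 5 * spread(50))  # True
```

The same reduction in variance is why very large individual samples (larger sample support) also improve reproducibility.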

Non­
Random
Heterogeneous
Waste:
For
non­
random
heterogeneous
wastes,
one
of
two
strategies
can
be
used
to
improve
sampling:
(1)
If
the
objective
is
to
estimate
an
overall
population
parameter
(such
as
the
mean),
then
stratified
random
sampling
could
be
used.
Stratified
random
sampling
is
discussed
in
detail
in
Section
5.2.2.
(2)
If
the
objective
is
to
characterize
each
stratum
separately
(e.
g.,
to
classify
the
stratum
as
either
a
hazardous
waste
or
a
nonhazardous
waste),
then
an
appropriate
approach
is
to
separate
or
divert
each
stratum
at
its
point
of
generation
into
discrete,
nonoverlapping
decision
units
and
characterize
and
manage
each
decision
unit
separately
(i.
e.,
to
avoid
mixing
or
managing
hazardous
waste
with
nonhazardous
waste).

If
some
form
of
stratified
sampling
is
used,
then
one
of
three
types
of
stratification
must
be
considered.
There
are
three
types
of
stratification
that
can
be
used
in
sampling:

°
stratification
by
space
°
stratification
by
time
°
stratification
by
component.

The
choice
of
the
type
of
stratification
will
depend
on
the
type
and
magnitude
of
heterogeneity
present
in
the
population
under
consideration.

Figure
C­
2
depicts
these
different
types
of
strata, which
are
often
generated
by
different
processes
or
a
significant
variant
of
the
same
process.
The
different
origins
of
the
strata
usually
result
in
a
different
concentration
or
property
distribution
and
different
mean
concentrations
or
average
properties.
While
stratification
over
time
or
space
is
widely
understood,
stratification
by
component
is
less
commonly
employed.
Some
populations
lack
obvious
spatial
or
temporal
stratification
yet
display
high
levels
of
heterogeneity.
If
these
populations
contain
easily
identifiable
components,
such
as
bricks,
gloves,
pieces
of
wood
or
concrete,
then
it
may
be
advantageous
to
consider
the
population
as
consisting
of
a
number
of
component
strata.
An
advantage
of
component
stratification
is
that
it
can
simplify
the
sampling
and
analytical
process
and
allow
a
mechanism
for
making
inferences
to
a
highly
stratified
population.
Component
stratification
shares
many
similarities
with
the
gender
or
age
stratification
applied
to
demographic
data
by
pollsters
(i.
e.,
the
members
of
a
given
age
bracket
belonging
to
the
same
stratum
regardless
of
where
they
reside).
Component
stratification
is
used
by
the
mining
industry
to
assay
gold
ore
and
other
commodities
where
the
analyte
of
interest
is
found
in
Figure
C­
2.
Three
different
types
of
strata
(from
ASTM
5956­
96)

discrete
particles
relative
to
a
much
greater
mass
of
other
materials.

Component
stratification,
although
not
commonly
employed,
is
applicable
to
many
waste
streams,
including
the
more
difficult­
to­
characterize
waste
streams
such
as
building
debris.
Additional
guidance
on
stratification
by
component
can
be
found
in
ASTM
D
5956­
96.

Table
C­
1
offers
practical
examples
when
wastes
considered
"non­
randomly
heterogeneous"
might
be
good
candidates
for
stratification
across
space,
time,
or
by
component.

The
stratification
approach
can
result
in
a
more
precise
estimate
of
the
mean
compared
to
simple
random
sampling.
However,
keep
in
mind
that
greater
precision
is
likely
to
be
realized
only
if
a
waste
exhibits
substantial
nonrandom
chemical
heterogeneity
and
stratification
efficiently
"divides"
the
waste
into
strata
that
exhibit
maximum
between­
strata
variability
and
minimum
within­
strata
variability.
If
that
does
not
occur,
stratified
random
sampling
can
produce
results
that
are
less
precise
than
in
the
case
of
simple
random
sampling;
therefore,
it
is
reasonable
to
employ
stratified
sampling
only
if
the
distribution
of
chemical
contaminants
in
a
waste
is
sufficiently
known
to
allow
an
intelligent
identification
of
the
strata
and
at
least
two
or
three
samples
can
be
collected
in
each
stratum.
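A stratified estimate of the overall mean weights each stratum's sample mean by the fraction of the population that the stratum represents. The sketch below is a hypothetical illustration (the strata weights and concentrations are invented for the example, not taken from the guidance):

```python
# Hypothetical two-stratum waste: each entry is (weight = fraction of the
# population in the stratum, list of sample concentrations in mg/kg).
strata = [
    (0.25, [210.0, 190.0, 205.0]),  # e.g., a more contaminated layer
    (0.75, [12.0, 18.0, 15.0]),     # e.g., a cleaner layer
]

def stratified_mean(strata):
    """Weighted combination of the stratum sample means."""
    total = 0.0
    for weight, samples in strata:
        total += weight * (sum(samples) / len(samples))
    return total

print(round(stratified_mean(strata), 1))  # 61.7
```

When the strata are well chosen (large between-strata differences, small within-strata differences), this estimator is more precise than a simple random sample of the same total size.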

Note
that
failure
to
recognize
separate
strata
might
lead
one
to
conclude
incorrectly,
via
a
statistical
test,
that
the
underlying
population
is
lognormal
or
some
other
right­
skewed
distribution.
Table
C­
1.
Examples
of
Three
Types
of
Stratification
Type
of
Stratification
Example
Scenario
Stratification
Across
Space
A
risk­
based
cleanup
action
requires
a
site
owner
to
remove
the
top
two
feet
of
soil
from
a
site.
Prior
to
excavation,
the
waste
hauler
wants
to
know
the
average
concentration
of
the
constituent
of
concern
in
the
soil
to
be
removed.
The
top
six
inches
of
soil
are
known
to
be
more
highly
contaminated
than
the
remaining
18­
inches
of
soil.
Sampling
of
the
soil
might
be
carried
out
more
efficiently
by
stratifying
the
soil
into
two
subpopulations
­
the
upper
six­
inch
portion
and
the
lower
18­
inch
portion.

Stratification
Across
Time
A
waste
discharge
from
a
pipe
varies
across
time.
If
the
objective
is
to
estimate
the
overall
mean,
then
an
appropriate
sampling
design
might
include
stratification
across
time.

Stratification
by
Component
Construction
debris
covered
with
lead­
based
paint
in
the
same
structure
with
materials
such
as
glass
and
unpainted
wood
could
be
sampled
by
components
(lead­
based
paint
debris,
glass,
and
unpainted
wood).
This
strategy
is
known
as
"stratification
by
component"
(from
ASTM
D
5956­
96).

C.
5
Sampling
Techniques
for
Heterogeneous
Waste
Due
to
practical
constraints,
conventional
sampling
approaches
may
not
be
suitable
for
use
in
sampling
of
heterogeneous
wastes.
For
example,
sampling
of
contaminated
debris
can
pose
significant
challenges
due
to
the
high
degree
of
heterogeneity
encountered.
Methods
used
to
sample
contaminated
structures
and
debris
have
included
the
following:

°
Coring
and
cutting
pieces
of
debris
followed
by
crushing
and
grinding
of
the
matrix
(either
in
the
field
or
within
the
laboratory)
so
the
laboratory
can
handle
the
sample
in
a
manner
similar
to
a
soil
sample
(Koski et al. 1991)

°
Drilling
of
the
matrix
(e.
g.,
with
a
hand
held
drill)
followed
by
collection
of
the
cuttings
for
analysis.
This
technique
may
require
20
to
50
drill
sites
in
a
local
area
to
obtain
a
sufficient
volume
for
an
individual
field
sample
(Koski et al. 1991)

°
Grinding
an
entire
structure
via
a
tub
grinder
followed
by
conventional
sampling
approaches
(AFCEE
1995).

ASTM
has
published
a
guide
for
sampling
debris
for
lead­
based
paint
(LBP)
in
ASTM
E1908­
97
Standard
Guide
for
Sample
Selection
of
Debris
Waste
from
a
Building
Renovation
or
Lead
Abatement
Project
for
Toxicity
Characteristic
Leaching
Procedure
(TCLP)
testing
for
Leachable
Lead
(Pb).

Additional
methods
are
described
in
Chapter
Five,
"Sample
Acquisition,"
of
Characterizing
Heterogeneous
Wastes:
Methods
and
Recommendations
(USEPA
and
USDOE
1992)
and
in
Rupp
(1990).
1 It is important to note that discussion of the "variance of the fundamental error" refers to the relative variance, which is the ratio of the sample variance to the square of the sample mean (s²/x̄²). The relative variance is useful for comparing results from different experiments.

APPENDIX
D
A
QUANTITATIVE
APPROACH
FOR
CONTROLLING
FUNDAMENTAL
ERROR
This
appendix
provides
a
basic
approach
for
determining
the
particle­
size
sample­
weight
relationship
sufficient
to
achieve
the
fundamental
error
level
specified
in
the
DQOs.
The
procedure
is
based
on
that
described
by
Pitard
(1989,
1993),
Gy
(1998),
and
others;
however,
a
number
of
simplifying
assumptions
have
been
made
for
ease
of
use.
The
procedure
described
in
this
appendix
is
applicable
to
sampling
of
granular
solid
media
(including
soil)
to
be
analyzed
for
nonvolatile
constituents.
It
is
not
applicable
to
liquids,
oily
wastes,
or
debris.

The
mathematical
derivation
of
the
equation
for
the
fundamental
error
is
complex
and
beyond
the
scope
of
this
guidance.
Readers
interested
in
the
full
documentation
of
the
theory
and
underlying
mathematics
are
encouraged
to
review
Gy
(1982)
and
Pitard
(1993).
Several
authors
have
developed
example
calculations
for
the
variance
of
the
fundamental
sampling
error
for
a
"typical"
contaminated
soil
to
demonstrate
its
practical
application.
1
Examples
found
in
Mason
(1992),
and
Myers
(1997)
may
be
particularly
useful.

The
equation
for
the
variance
of
the
fundamental
error
is
extremely
practical
for
optimization
of
sampling
protocols
(Pitard
1993).
In
a
relatively
simple
"rule
of
thumb"
form,
the
equation
for
the
variance
of
the
fundamental
error
(s_FE²) is estimated by

    s_FE² = [(1 - a_LC)² / a_LC] × (f ρ d³ / M_s)        Equation D.1
where

    f    = a dimensionless "shape" factor for the shape of particles in the material to be sampled, where cubic = 1.0, sphere = 0.523, flakes = 0.1, and needles = 1 to 10
    ρ    = average density (g/cm³) of the material
    M_s  = the sample weight (or mass of sample) in grams
    a_LC = proportion of the sample with a particle size less than or equal to the particle size of interest
    d    = diameter of the largest fragment (or particle) in the waste, in centimeters.
Pitard's
methodology
suggests
the
particle
size
of
interest
should
be
set
at
95
percent
of
the
largest
particle
in
the
population
(or
"lot"),
such that a_LC = 0.05. Equation D.1 then reduces to

    s_FE² = 18 f ρ d³ / M_s        Equation D.2
The
equation
demonstrates
that
the
variance
of
the
fundamental
error
is
directly
proportional
to
the
size
of
the
largest
particle
and
inversely
proportional
to
the
mass
of
the
sample.
To
calculate
the
appropriate
mass
of
the
sample,
Equation
D.
2
can easily be rearranged as

    M_s = 18 f ρ d³ / s_FE²        Equation D.3
Pitard
(1989,
1993)
proposed
a
"Quick
Safety
Rule"
for
use
in
environmental
sampling
using
the
following
input
assumptions
for
Equation
D.
3:

f = 0.5 (dimensionless shape factor for a sphere)
ρ = 2.7 (density of a waste in g/cm³)
s_FE = ±5% (standard deviation of the fundamental error)

By
putting
these
assumed
factors
into
Equation
D.
3,
we
get:

    M_s = [18 × 0.5 × 2.7 / (0.05)²] d³ ≈ 9,720 d³        Equation D.4
Pitard
(1993)
rounds
up,
to
yield
the
relationship
    M_s ≥ 10,000 d³        Equation D.5
Alternatively,
if
we
are
willing
to
accept
s_FE = ±16%, Equation D.4 yields

    M_s ≥ 1,000 d³        Equation D.6
Equation
D.
4
was
used
to
develop
Table
D­
1
showing
the
maximum
particle
size
that
is
allowed
for
a
given
sample
mass
with
the
standard
deviation
of
the
fundamental
error
(s_FE)
prespecified
at
various
levels
(e.
g.,
5%,
10%,
16%,
20%,
and
50%).
A
table
such
as
this
one
can
be
used
to
estimate
the
optimal
weight
of
field
samples
and
the
optimal
weight
of
subsamples
prepared
within
the
laboratory.
An
alternative
graphical
method
is
presented
by
Pitard
(1993)
and
Myers
(1997).
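The particle-size/sample-mass relationship behind Table D-1 can also be sketched in a few lines of code. This is a minimal illustration, not part of the guidance; it assumes the "Quick Safety Rule" values f = 0.5 and ρ = 2.7 g/cm³, and the function names are ours:

```python
# Equation D.3 (minimum sample mass) and its inverse (maximum particle size),
# using the "Quick Safety Rule" defaults f = 0.5 and rho = 2.7 g/cm^3.

def min_sample_mass_g(d_cm, s_fe, f=0.5, rho=2.7):
    """Minimum sample mass (g) for largest particle diameter d_cm (cm)
    and target relative standard deviation s_fe (0.05 means 5%)."""
    return 18.0 * f * rho * d_cm ** 3 / s_fe ** 2

def max_particle_size_cm(mass_g, s_fe, f=0.5, rho=2.7):
    """Equation D.3 solved for d: the largest particle size (cm) a sample
    of mass_g grams can tolerate at the target s_fe."""
    return (mass_g * s_fe ** 2 / (18.0 * f * rho)) ** (1.0 / 3.0)

# A 1,000-g sample at s_FE = 5% tolerates particles up to about 0.47 cm,
# matching the corresponding entry of Table D-1.
print(round(max_particle_size_cm(1000, 0.05), 2))  # 0.47
```

At s_FE = 5% and d = 1 cm, the first function returns about 9,720 g, the value that Pitard rounds up to the M_s ≥ 10,000 d³ rule of Equation D.5.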

An
important
feature
of
the
fundamental
error
is
that
it
does
not
"cancel
out."
On
the
contrary,
the
variance
of
the
fundamental
error
adds
together
at
each
stage
of
subsampling.
As
pointed
out
by
Myers
(1997),
the
fundamental
error
can
quickly
accumulate
and
exceed
50%,
100%,
200%,
or
greater
unless
it
is
controlled
through
particle­
size
reduction.
The
variance
of
the
fundamental
error, s_FE², calculated at each stage of subsampling and particle-size reduction must be added together at the end to derive the total s_FE².
Table D-1. Maximum Allowable Particle Size (cm) for a Given Sample Mass for Selected Standard Deviations of the Fundamental Error

Sample      Maximum Allowable Particle Size d (cm)
Mass (g)    s_FE = 5%   s_FE = 10%   s_FE = 16%*   s_FE = 20%   s_FE = 50%
0.1         0.02        0.03         0.05          0.05         0.10
1           0.05        0.07         0.10          0.12         0.22
2           0.06        0.09         0.13          0.15         0.27
3           0.07        0.11         0.15          0.17         0.31
4           0.07        0.12         0.16          0.19         0.35
5           0.08        0.13         0.17          0.20         0.37
10          0.10        0.16         0.22          0.25         0.47
20          0.13        0.20         0.28          0.32         0.59
30          0.15        0.23         0.32          0.37         0.68
40          0.16        0.25         0.35          0.40         0.74
50          0.17        0.27         0.37          0.43         0.80
75          0.20        0.31         0.43          0.50         0.92
100         0.22        0.35         0.47          0.55         1.01
500         0.37        0.59         0.81          0.94         1.73
1000        0.47        0.74         1.02          1.18         2.17
5000        0.80        1.27         1.74          2.02         3.72

*A maximum standard deviation of the fundamental error of 16% has been recommended by Pitard (1993) and is included in this table as a point of reference only. Project-specific fundamental error rates should be set in the DQO Process.

Two
important
assumptions
underlie
the
use
of
Table
D­
1:

1.
The
table
is
valid
only
if
each
and
all
steps
of
the
sampling
and
subsampling
minimize
other
sampling
error
through
use
of
careful
and
correct
sampling
procedures
2.
The
table
is
valid
only
for
wastes
or
soils
with
a
shape
factor
(f)
and
density
(
)
 
similar
to
that
used
to
derive
the
table;
otherwise,
waste­
specific
tables
or
graphical
methods
(see
Pitard
1993,
Mason
1992,
or
Myers
1997)
should
be
used.

Hypothetical
Example
Suppose
we
have
a
waste
that
is
a
particulate
solid
to
be
analyzed
for
total
metals.
The
laboratory
requires
an
analytical
sample
of
only
1
gram.
The
DQO
planning
team
wants
to
maintain
the
total
standard
deviation
of
the
fundamental
error
(s_FE) within ±16%. The sample
masses
are
determined
at
each
stage
of
sampling
and
subsampling
as
follows:

Primary
Stage:
Based
on
prior
inspection
of
the
waste,
it
is
known
that
95
percent
of
the
particles
are
0.47
cm
in
diameter
or
less.
Using
Table
D­
1,
we
determine
that
a
field
sample
of
1,000
grams
(or
1
kg)
will
generate
a
fundamental
error
not greater than s_FE = ±5%.
Secondary
Stage:
After
shipment
of
the
1,000­
gram
sample
to
the
laboratory,
particle­
size
reduction
to
about
0.23
cm
(2.36
mm
or
a
No.
8
sieve)
is
performed,
and
a
30­
gram
subsample
is
taken.
This
step
generates
a
fundamental
error
of s_FE = ±10%.

Final
Stage:
A
1­
gram
subsample
is
required
for
the
analysis.
Particle­
size
reduction
to
0.07
cm
or
less
(e.
g.,
a
No.
30
sieve)
is
performed,
and
a
1­
g
subsample
is
taken.
This
step
generates
a
fundamental
error
of s_FE = ±10%.

The
variance (s_FE²) from each stage is then summed to determine the total variance of the fundamental
error.
As
shown
in
Table
D­
2,
the
total
standard
deviation
of
the
fundamental
error
is
less
than
±16
percent
and
the
DQO
is
achieved.

Table D-2. Example Calculation of the Total Variance of the Fundamental Error

Sampling and Subsampling Stage   Mass (grams)   d (cm)   s_FE   s_FE²
Primary Stage                    1000           0.47     0.05   0.0025
Secondary Stage                  30             0.23     0.10   0.01
Final Stage                      1              0.07     0.10   0.01
Sum of the variances of the fundamental errors (s_FE²) = 0.0225
Total standard deviation of the fundamental error (s_FE) = 0.15, or 15% (DQO = 16%)
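The stage-by-stage accumulation shown in Table D-2 amounts to summing variances and taking a square root. The sketch below (ours, not part of the guidance) reproduces the example's numbers:

```python
# Variances of the fundamental error (s_FE^2) are additive across
# subsampling stages; the standard deviations themselves are not.
# These are the three stage values from the hypothetical example.
stage_s_fe = [0.05, 0.10, 0.10]  # primary, secondary, final

total_variance = sum(s ** 2 for s in stage_s_fe)
total_s_fe = total_variance ** 0.5

print(round(total_variance, 4))  # 0.0225
print(round(total_s_fe, 2))      # 0.15 (within the 16% DQO)
```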
One
final
word
of
caution
is
provided
regarding
the
use
of
the
particle­
size
reduction
and
subsampling
routine
outlined
above.
The
approach
can
reduce
bias
and
improve
precision
of
analyses
for
total
constituent
analyses,
but
the
particle­
size
reduction
steps
may
actually
introduce
bias
when
used
in
conjunction
with
some
leaching
tests.
For
example,
the
TCLP
specifies
a
minimum
sample
mass
of
100
grams
(for
nonvolatile
extractions)
and
maximum
particle
size
of
9.5
mm.
While this combination would generate an s_FE of almost ±50 percent, excessive particle-size reduction below 9.5 mm to lower s_FE would increase the leachability of the
material
during
the
test
due
to
the
increased
surface
area­
to­
volume
ratio
of
smaller
particles.
Therefore,
it
is
important
to
remember
that
particle­
size
reduction
to
control
fundamental
error
is
beneficial
when
total
constituent
analyses
are
performed,
but
may
introduce
bias
for
some
leaching
tests.
Furthermore,
particle­
size
reduction
below
9.5
mm
is
not
required
by
Method
1311
(TCLP)
(except
during
Step
7.1.4,
"Determination
of
Appropriate
Extraction
Fluid").
APPENDIX
E
SAMPLING
DEVICES
The
key
features
of
recommended
sampling
devices
are
summarized
in
this
appendix.
For
each
sampling
device,
information
is
provided
in
a
uniform
format
that
includes
a
brief
description
of
the
device
and
its
use,
advantages
and
limitations
of
the
device,
and
a
figure
to
indicate
the
general
design
of
the
device.
Each
summary
also
identifies
sources
of
other
guidance
on
each
device,
particularly
any
relevant
ASTM
standards.

Much
of
the
information
in
this
appendix
was
drawn
from
ASTM
standards
(see
also
Appendix
J
for
summaries
of
ASTM
standards).
In
particular,
much
of
the
information
came
from
ASTM
D
6232,
Standard
Guide
for
Selection
of
Sampling
Equipment
for
Waste
and
Contaminated
Media
Data
Collection
Activities.

Devices
not
listed
in
this
appendix
or
described
elsewhere
in
this
chapter
also
may
be
appropriate
for
use
in
RCRA-related
sampling.
For
example,
other
more
innovative
or
less
common
technologies
may
allow
you
to
meet
your
performance
goals
and
may
be
appropriate
for
your
sampling
effort.
Therefore,
we
encourage
and
recommend
the
selection
and
use
of
sampling
equipment
based
on
a
performance­
based
approach.
In
future
revisions
to
this
chapter,
we
will
include
new
technologies,
as
appropriate.

This
appendix
is
divided
into
subsections
based
on
various
categories
of
sampling
technologies.
The
categories
are
based
on
those
listed
in
ASTM
D
6232.
The
equipment
categories
covered
within
this
appendix
are
as
follows:

E.
1
Pumps
and
Siphons
E.
2
Dredges
E.
3
Discrete
Depth
Samplers
E.
4
Push
Coring
Devices
E.
5
Rotating
Coring
Devices
E.
6
Liquid
Profile
Devices
E.
7
Surface
Sampling
Devices
E.
1
Pumps
and
Siphons
Pumps
and
siphons
can
be
used
to
obtain
samples
of
liquid
wastes
and
ground
water.
For
detailed
guidance
on
the
selection
and
use
of
pumps
for
sampling
of
ground
water,
see
RCRA
Ground­
Water
Monitoring:
Draft
Technical
Guidance
(USEPA
1992c).

In
this
section,
you
will
find
summaries
for
the
following
pumps
or
siphons:
Internet
Resource
Information
on
sampling
devices
can
be
found
on
the
Internet
at
the
Federal
Remediation
Technologies
Roundtable
site
at
http://www.frtr.gov/.
The
Field
Sampling
and
Analysis
Technologies
Matrix
and
accompanying
Reference
Guide
are
intended
as
an
initial
screening
tool
to
provide
users
with
an
introduction
to
innovative
site
characterization
technologies
and
to
promote
the
use
of
potentially
cost­
effective
methods
for
onsite
monitoring
and
measurement.
Figure
E­
1.
Automatic
sampler
E.
1.1
Automatic
Sampler
E.
1.2
Bladder
Pump
E.
1.
3
Peristaltic
Pump
E.
1.4
Centrifugal
Submersible
Pump
E.
1.5
Displacement
Pumps
E.
1.
1
Automatic
Sampler
An
automatic
sampler
(see
Figure
E­
1)
is
a
type
of
pumping
device
used
to
periodically
collect
samples.
It
is
recommended
for
sampling
surface
water
and
point
discharges.
It
can
be
used
in
waste­
water
collection
systems
and
treatment
plants
and
in
stream
sampling
investigations.
An
automatic
sampler
designed
for
collection
of
samples
for
volatile
organic
analyses
is
available.

An
automatic
sampler
typically
uses
peristaltic
pumps
as
the
sampling
mechanism.
It
can
be
programmed
to
obtain
samples
at
specified
intervals
or
to
obtain
a
continuous
sample.
It
also
can
be
programmed
to
collect
time
composite
or
flow
proportional
samples.

Advantages
°
Can
provide
either
grab
sample
or
composite
samples
over
time.

°
Operates
unattended,
and
it
can
be
programmed
to
sample
variable
volumes
at
variable
times.

Limitations
°
Requires
power
to
operate
(either
AC
or
battery
power).

°
May
be
difficult
to
decontaminate.

°
May
not
operate
correctly
when
sampling
liquid
streams
containing
a
high
percentage
of
solids.

°
Highly
contaminated
water
or
waste
can
degrade
sampler
components.

Other
Guidance
°
Standard
Guide
for
Selection
of
Sampling
Equipment
for
Waste
and
Contaminated
Media
Data
Collection
Activities,
ASTM
D
6232.
Figure
E­
2.
Bladder
pump
E.
1.2
Bladder
Pump
The bladder pump is recommended for the sampling of surface water, ground water, and point discharges. It also can be used to sample other liquids in surface impoundments.

A bladder pump consists of a flexible membrane (bladder) enclosed by a rigid sample container and can be constructed of a variety of materials, such as neoprene, rubber, stainless steel, and nitrile. There are two types of bladder pumps: the squeeze type and the expanding type (see Figure E-2). The squeeze type has the bladder connected to the sample discharge line; the chamber between the bladder and the sampler body is connected to the gas line. The expanding type has the bladder connected to the gas line; in this type of bladder pump, the chamber between the bladder and the sampler body is connected to the sample discharge line.

During sampling, water enters the sampler through a check valve at the bottom of the device. Compressed air or gas is then injected into the sampler, causing the bladder to expand or compress, depending on the type of bladder pump. The inlet valve closes, and the contents of the sampler are forced through the top check valve into the discharge line. The top check valve prevents water from re-entering the sampler. By removing the pressure, the process is repeated to collect more sample. Automated sampling systems have been developed to control the time between pressurization cycles.

Advantages
° Is suitable for sampling liquids containing volatile compounds.
° Can collect samples up to a depth of 60 m (200 ft) (ASTM D 6232).

Limitations
° Operation requires large volumes of compressed air or gas and a controller.
° Requires a power source.
° Can be heavy and difficult to operate.
° Decontamination can be difficult.

Figure E-3. Peristaltic pump

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Guide for Sampling Groundwater Monitoring Wells, ASTM D 4448

E.1.3 Peristaltic Pump

A peristaltic pump (Figure E-3) is a suction-lift pump used at the surface to collect liquid from ground-water monitoring wells or surface impoundments. It can be used for sampling surface water, ground water, point discharges, impounded liquids, and multi-layer liquid wastes.

A peristaltic pump consists of a rotor with ball-bearing rollers; a piece of flexible tubing is threaded around the pump rotor and connected to two pieces of polytetrafluoroethylene (PTFE) or other suitable tubing. One end of the tubing is placed in the sample; the other end is connected to a sample container. Silicone tubing is commonly used within the pumphead; however, for organic sampling purposes, medical-grade silicone is recommended to avoid contamination of the sample (ASTM D 4448). Fluorocarbon resin tubing is also sometimes used for high-hazard materials and for samples to be analyzed for organics (ASTM D 6063). The device can be modified to avoid contact of the sample with the flexible tubing. In such a case, the pump is connected to a clean glass container using a PTFE insert, and a second PTFE tube is used to connect the glass container to the sample source.

During operation, the rotor squeezes the flexible tubing, causing a vacuum to be applied to the inlet tubing. The sample material is drawn up the inlet tubing and discharged through the outlet end of the flexible tubing. In the modified peristaltic pump, the sample is emptied into the glass container without coming in contact with the flexible tubing. To sample liquids from drums, the peristaltic pump is first used to mix the sample: both ends of the tubing are placed in the sample and the pump is turned on. Once the drum contents are mixed, the sample is collected as described above. To collect samples for volatile organic analyses, the PTFE tubing attached to the intake end of the pump is filled with the sample and then disconnected from the pump. The tube is then drained into the sample vials.

Advantages
° Can collect samples from multiple depths and small-diameter wells.
° Easy to use and readily available.
° The pump itself does not need to be decontaminated; the tubing can be either decontaminated or replaced.

Figure E-4. Centrifugal submersible pump

Limitations
° The drawing of a vacuum to lift the sample may cause the loss of volatile contaminants.
° Sampling depth cannot exceed about 7.6 m (25 ft) (ASTM D 6232).
° Requires a power source.
° Flexible tubing may be incompatible with certain matrices.
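The 7.6 m (25 ft) depth limit follows from the physics of suction lift: atmospheric pressure can support only about a 10.3 m column of water, and tubing friction and an imperfect vacuum reduce the practical lift well below that ceiling. A quick arithmetic check, using standard constants:

```python
# Why a suction-lift pump tops out near 7.6 m (25 ft): the theoretical
# ceiling is the height of water column that one atmosphere can support.
P_ATM = 101_325.0    # Pa, standard atmosphere
RHO_WATER = 1_000.0  # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration

ideal_lift_m = P_ATM / (RHO_WATER * G)   # theoretical maximum lift
print(round(ideal_lift_m, 1))            # ~10.3 m
print(round(7.6 / ideal_lift_m * 100))   # practical limit is ~74% of ideal
```

The ~26% margin between the ideal and the ASTM D 6232 practical limit is consumed by friction losses and the vacuum the pump can actually maintain.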

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Guide for Sampling of Drums and Similar Containers by Field Personnel, ASTM D 6063
° Standard Guide for Sampling Groundwater Monitoring Wells, ASTM D 4448

E.1.4 Centrifugal Submersible Pump

The centrifugal submersible pump (Figure E-4) is a type of pump used for purging and sampling monitoring wells, sampling of waste water from impoundments, and sampling point discharges.

A centrifugal submersible pump uses a set of impellers, powered by an electric motor, to draw water up and through a discharge hose. Parts in contact with liquid may be made of PTFE and stainless steel. The pump discharge hose can be made of PTFE or other suitable material. The motor cavity is filled with either air or deionized or distilled water that may be replaced when necessary. Flow rates for centrifugal submersible pumps range from 100 mL per minute to 9 gallons per minute (ASTM D 6232).
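For comparison, the two ends of that quoted range can be put in a single unit. This is only an arithmetic check using the standard US-gallon conversion factor:

```python
# Express the centrifugal submersible pump's quoted flow-rate range
# (100 mL/min to 9 gal/min, ASTM D 6232) in liters per minute.
ML_PER_US_GALLON = 3785.41  # mL in one US gallon

low_l_min = 100 / 1000.0                    # 0.1 L/min
high_l_min = 9 * ML_PER_US_GALLON / 1000.0  # ~34.1 L/min
print(low_l_min, round(high_l_min, 1))      # 0.1 34.1
```

The upper end of the range is thus roughly 340 times the lower end, which is why these pumps serve both low-flow sampling and well purging.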

During operation, water is drawn into the pump by a slight suction created by the rotation of the impellers. The impellers work against fixed stator plates and pressurize the water, which is driven to the surface through the discharge hose. The speed at which the impellers are driven controls the pressure and, thus, the flow rate.

Figure E-5. Displacement pump

Advantages
° Can be constructed of materials (PTFE and stainless steel) that are chemically resistant.
° Can be used to pump liquids up to a 76 m (250 ft) head (ASTM D 6232).
° Flow rate is adjustable.

Limitations
° May be incompatible with liquids containing a high percentage of solids.
° May not be appropriate for collection of samples for volatile organics analysis; loss of volatiles can occur as a result of motor heating and sample pressurization.
° Requires an electric power source, e.g., either a 12 V (DC) or a 110/220 V (AC) converter (ASTM D 6232).
° May require a winch or reel system for portable use.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

E.1.5 Displacement Pumps

The displacement pump (Figure E-5) is a type of pump used for the sampling of surface water, ground water, point discharges, and other liquids (e.g., in impoundments).

A displacement pump forces a discrete column of water to the surface via a mechanical lift. During sampling, water enters the sampler through the check valve at the bottom of the device. It is commonly constructed of PVC, stainless steel, or both. It also can be made of PTFE to reduce the risk of contamination when collecting samples with trace levels of organic compounds. Two common types of displacement pumps are the air/gas and the piston displacement pumps.

The air/gas displacement pump uses compressed gas; it operates by applying positive pressure to the gas line. This causes the inlet check valve to close and the discharge-line check valve to open, forcing water up the discharge line to the surface. Removal of the gas pressure causes the top valve to close and the bottom valve to open; water enters the sampler and the process is repeated.

The piston displacement pump uses an actuating rod powered from the surface or from an air or electric actuator. The mechanically operated plunger delivers the sample to the surface at the same time the chamber fills. It has a flap valve on the piston and an inlet check valve at the bottom of the sampler.

Advantages
° Can be constructed of PTFE to reduce the risk of contamination caused by materials of construction when collecting samples for trace levels of organics.

Limitations
° May be difficult to decontaminate.
° Displacement pumps require large volumes of air or gas and a power source.
° Loss of dissolved gases or sample contamination from the driving gas may occur during sampling.
° Displacement pumps may be heavy.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Guide for Sampling Groundwater Monitoring Wells, ASTM D 4448

E.2 Dredges

Dredges include equipment that is often used to collect bottom material (e.g., sediments) from beneath a layer of stationary or moving liquid. A variety of dredges are available, including the Ekman bottom grab sampler and the Ponar dredge. The Ponar dredge is described below.

E.2.1 Ponar Dredge

The Ponar dredge is recommended for sampling sediment. It has paired jaws that penetrate the substrate and close to retain the sample. The sample volume range is 0.5 to 3.0 liters (ASTM D 6232).

Figure E-6. Ponar dredge

The Ponar dredge is lowered slowly, with controlled speed, so that the dredge will properly land and avoid blowout of the surface layer to be sampled. The weight of the dredge causes it to penetrate the substrate surface. The slack in tension unlocks the open jaws and allows the dredge to close as it is raised. The dredge is raised slowly to minimize disturbance and sample washout as it is retrieved through the liquid column. The collected sample is emptied into a suitable container. Auxiliary weight may be added to the dredge to increase penetration.

Advantages
° Reusable.
° Can obtain samples of most types of stationary sediments, ranging from silt to granular material.
° Available in a range of sizes and weights.
° Some models may be available in either stainless steel or brass.

Limitations
° Not capable of collecting undisturbed samples.
° May be difficult to decontaminate (depending upon the dredge's design and characteristics of the sampled material).
° Cannot collect a representative lift or repeatedly sample to the same depth and position.
° Can be heavy and require a winch or portable crane to lift; however, a smaller version, the petite Ponar, is available and can be operated by a hand-line (ASTM D 4342).

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Practice for Collecting of Benthic Macroinvertebrates with Ponar Grab Sampler, ASTM D 4342
° Standard Guide for Selecting Grab Sampling Devices for Collecting Benthic Macroinvertebrates, ASTM D 4387

° "Sediment Sampling" (USEPA 1994e)

Figure E-7. Bacon bomb

E.3 Discrete Depth Samplers

Discrete depth samplers include equipment that can collect samples at a specific depth. Such samplers are sometimes used to collect samples from layered liquids in tanks or surface impoundments. You will find summaries for the following discrete depth samplers in this section:

E.3.1 Bacon Bomb
E.3.2 Kemmerer Sampler
E.3.3 Syringe Sampler
E.3.4 Lidded Sludge/Water Sampler
E.3.5 Discrete Level Sampler

Besides the samplers listed below, a self-purging, discrete depth sampler is available for sampling ground-water monitoring wells. It fills when stopped at the desired depth and eliminates the need for well purging. It samples directly into a 40-mL glass VOA sample vial contained within the sampler; therefore, the loss of volatile organic compounds is minimized.

E.3.1 Bacon Bomb

A bacon bomb (Figure E-7) is a type of discrete level sampler that provides a sample from a specific depth in a stationary body of water or waste. A bacon bomb is recommended for sampling surface water and is usually used to collect samples from a lake or pond. It can also be used to collect liquid waste samples from large tanks or lagoons. It originally was designed to collect oil samples. The sample volume range is from 0.1 to 0.5 liters (100 to 500 mL) (ASTM D 6232).

A bacon bomb has a cylindrical body, sometimes constructed of stainless steel and sometimes of chrome-plated brass and bronze. It is lowered into material by a primary support line and has an internal tapered plunger that acts as a valve to admit the sample. A secondary line attached to the top of the plunger opens and closes the plunger valve. The top cover has a locking mechanism to keep the plunger closed after sampling. The bacon bomb remains closed until triggered to collect the sample. Sample collection is triggered by raising the plunger line and allowing the sampler to fill. The device is then closed by releasing the plunger line. It is returned to the surface by raising the primary support line, and the sample is transferred directly to a container.

Figure E-8. Kemmerer sampler

Advantages
° Collects a discrete depth sample; it is not opened until the desired depth.
° Easy to use, without physical requirement limitations.

Limitations
° May be difficult to decontaminate due to design or construction materials.
° Maximum sample capacity is only 500 mL.
° Materials of construction may not be compatible with parameters of concern.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° "Tank Sampling" (USEPA 1994c)

E.3.2 Kemmerer Sampler

A Kemmerer sampler (Figure E-8) is a type of discrete level sampler that provides a sample from a specific depth. Recommended for sampling surface water, it is usually used to collect samples from a lake or pond. It can also be used to collect liquid waste samples from large tanks or lagoons. The sample volume range is from 1 to 2 liters (ASTM D 6232).

The sampler comprises a stainless steel or brass cylinder with rubber stoppers for the ends, but all-PTFE construction also is available. The ends are left open while the sampler is lowered in a vertical position, allowing free passage of water or liquid through the cylinder. When the device is at the designated depth, a messenger is sent down a rope to close the stoppers at each end. The cylinder is then raised, and the sample is removed through a valve to fill sample containers.

Advantages
° Can collect a discrete depth sample.
° Provides correct delimitation and extraction of sample (Pitard 1989).
° Easy to use.
° All-PTFE construction is available.

Figure E-9. Syringe sampler

Limitations
° May be difficult to decontaminate due to construction or materials.
° The sampler is exposed to the medium at other depths while being lowered to a sampling point at the desired depth.
° Materials of construction may not be compatible with parameters of concern.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

E.3.3 Syringe Sampler

A syringe sampler (Figure E-9) is a discrete depth sampler used to sample liquids. With the optional coring tip, it can be used as a coring device to sample highly viscous liquids, sludges, and tarlike substances. It is used to collect samples from drums, tanks, and surface impoundments, and it can also draw samples when only a small amount remains at the bottom of a tank or drum. The sample volume range is 0.2 to 0.5 liters (ASTM D 6232).

A syringe sampler generally is constructed of a piston assembly that comprises a T-handle, safety locking nut, control rod, piston body assembly, sampling tube assembly, and two tips for the lower end (a closeable valve and a coring tip). When used as a syringe, the sampler is slowly lowered to the sampling point and the T-handle is gradually raised to collect the sample. Once the desired sample is obtained, the lock nut is tightened to secure the piston rod, and the bottom valve is closed by pressing down on the sampler against the side or bottom of the container. When used as a coring device, the sampler is slowly pushed down into the material. Once the desired sample is obtained, the lock nut is tightened to secure the piston rod and the sampler is removed from the media. The sample material is extruded into the sample container by opening the bottom valve (if fitted), loosening the lock nut, and pushing the piston down.

Figure E-10. Lidded sludge/water sampler

Advantages
° The syringe sampler is easy to use and decontaminate.
° The syringe sampler can sample at discrete depths, including the bottom of a container.

Limitations
° The syringe sampler can be used only to depths of about 1.8 meters (ASTM D 6232).
° Material to be sampled must be viscous enough to remain in the device when the coring tip is used; the valve tip is not recommended for viscous materials (ASTM D 6063).

Other Guidance
° Standard Guide for Sampling Single or Multilayered Liquids, ASTM D 5743
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Guide for Sampling of Drums and Similar Containers by Field Personnel, ASTM D 6063

E.3.4 Lidded Sludge/Water Sampler

A lidded sludge/water sampler (Figure E-10) is a type of discrete depth device that provides a sample from a specific depth. It is used to collect sludges or waste fluids from tanks, tank trucks, and ponds. It can sample liquids, multi-layer liquid wastes, and mixed-phase solid/liquid wastes. The typical sample volume is 1.0 liter (ASTM D 6232).

A lidded sludge/water sampler comprises a removable glass jar mounted on a stainless steel device; the jar is sometimes fitted with a cutter for sampling materials containing more than 40 percent solids (ASTM D 6232).

The sampler is lowered into the material to be sampled and opened at the desired depth. The top handle is rotated to upright the jar and to open and close the lid. Then, the device is carefully retrieved from the material. The jar is removed from the sampler by lifting it from the holder, and the jar serves as a sample container, so there is no need to transfer the sample.

Figure E-11. Discrete level sampler

Advantages
° The jar in the sampling device also serves as a sample container, reducing the risk of cross-contamination.
° Bottles and lids are unique to each sample; therefore, decontamination of an intermediate transfer container is not required.

Limitations
° Heavy and limited to one bottle size.
° Thick sludge is difficult to sample (ASTM D 6232).

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

E.3.5 Discrete Level Sampler

A discrete level sampler (Figure E-11) is a dismountable cylindrical sampler fitted with a manually operated valve or valves. It is recommended for sampling surface water, ground water, point discharges, liquids, and multi-layer liquids and is used for sampling drums, tanks, containers, wells, and surface impoundments. The typical sample volume range is 0.2 to 0.5 liters (ASTM D 6232).

A discrete level sampler is made from PTFE and stainless steel and is designed to be reusable. It comprises a tube fitted with a manually operated valve or valves, which are operated by a control assembly attached to the upper end of the sampler. This assembly consists of a rigid tube and rod or a flexible tube and inner cable. The standard level sampler has a manually operated upper valve and a lower spring-retained bottom dump valve. The dual valve model may be emptied by opening the valves manually or with a metering device attached to the lower end of the sampler (not shown).

To collect a sample, the discrete level sampler is lowered into the sample material to the desired sampling depth. The valve or valves are opened manually to collect the sample and closed before retrieving the sampler. The standard model is emptied by pressing the dump valve against the side of the sample container. The dual valve sampler is emptied by opening the valves manually. Alternatively, the collected sample may be taken to the laboratory in the sampler body by replacing the valves with solid PTFE end caps.

Advantages
° Relatively easy to decontaminate and reuse.
° May be used to sample liquids in most environmental situations.
° Can be remotely operated in hazardous environments.
° Sample representativeness is not affected by liquids above the sampling point.
° The sampler body can be used for sample storage and transport.

Limitations
° Limited to sample chamber capacities of 240-475 mL (ASTM D 6232).
° May be incompatible with liquids containing a high percentage of solids.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

E.4 Push Coring Devices

Push coring devices include equipment that uses a pushing action to collect a vertical column of a solid sample. You will find summaries for the following push coring devices in this section:

E.4.1 Penetrating Probe Sampler
E.4.2 Split Barrel Sampler
E.4.3 Concentric Tube Thief
E.4.4 Trier
E.4.5 Thin-Walled Tube
E.4.6 Coring Type Sampler (with Valve)
E.4.7 Miniature Core Sampler
E.4.8 Modified Syringe Sampler

Figure E-12. Probe sampler

E.4.1 Penetrating Probe Sampler

The penetrating probe sampler (Figure E-12) is a push coring device and, therefore, provides a core sample. The probe sampler is recommended for sampling soil and other solids. The sample volume range is 0.2 to 2.0 liters (ASTM D 6232).

The probe sampler typically consists of single or multiple threaded steel tubes, a threaded top cap, and a detachable steel tip. The steel tubes are approximately 1 inch or less in diameter. Specialized attachments may be used for various matrices. Some probes are equipped with adjustable screens or retractable inner rods to sample soil vapor or ground water.

Advantages
° Easy to decontaminate and reusable.
° Can provide samples for onsite analysis (ASTM D 6232).
° Versatile and may sample 15 to 20 locations a day for any combination of matrices (ASTM D 6232).
° Can reduce the quantity of investigation-derived wastes.

Limitations
° May be heavy and bulky depending on the size used.
° Limited by composition of subsurface materials and accessibility to deeper depth materials.
° May be inappropriate for sampling materials that require mechanical strength to penetrate.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

Figure E-13. Split barrel sampler

E.4.2 Split Barrel Sampler

A split barrel sampler (Figure E-13) is a push coring device often used with a drill rig to collect deep subsurface samples. The device is recommended for soil sampling but can be used to sample other solids. The materials to be sampled should be moist enough to remain in the sampler. The sample volume range is 0.5 to 30.0 liters (ASTM D 6232).

The sampler consists of a length of steel tubing, split longitudinally and equipped with a steel drive shoe and a drive head. The drive shoe is detachable and should be replaced when dented or distorted. The samplers are available in a variety of diameters and lengths. The split barrel is typically 18 to 30 inches in length with an inside diameter of 1.5 to 2.5 inches (ASTM D 4700, ASTM D 1586). The split barrel sampler can be used to collect relatively undisturbed soil samples at considerable depths.

The split barrel sampler may be driven manually but is usually driven with a drill rig drive weight assembly or hydraulically pushed using rig hydraulics. The sampler is placed on the surface of the material to be sampled, then pushed downward while being twisted slightly. Because pushing by hand may be difficult, a drop hammer attached to a drill rig typically is used to finish inserting the sampler. When the desired depth is reached, the sampler is twisted again to break the core; then the sampler is pulled straight up and out of the material. The sample may be removed from the barrel, or the liner may be capped off for analysis. Barrels may be extended to 5 inches in diameter (ASTM D 6232). Liners often are used when sampling for volatile organic compounds or other trace constituents of interest. With a liner, the sample can be removed with a minimum amount of disturbance. Liners must be compatible with the matrix and compounds of interest; plastic liners may be inappropriate if analyzing for organics.

Advantages
° Reusable, easily decontaminated, and easy to use.
° Provides a relatively undisturbed sample and, therefore, can minimize the loss of volatile organic compounds.

Limitations
° Requires a drill or direct push rig for deep samples.
° Made of steel and may penetrate underground objects such as a pipe or drum.
° Only accommodates samples that contain particles smaller than the opening of the drive shoe (ASTM D 4700).

Figure E-14. Concentric tube thief

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Guide for Soil Sampling from the Vadose Zone, ASTM D 4700
° Standard Test Method for Penetration Test and Split-Barrel Sampling of Soils, ASTM D 1586

E.4.3 Concentric Tube Thief

The concentric tube thief (also known as a grain sampler) (Figure E-14) is a push coring device that the user pushes directly into the material to be sampled. It can be used to sample powdered or granular solids and wastes in piles or in bags, drums, or similar containers. Concentric tube thieves are generally 61 to 100 cm (24 to 40 inches) long by 1.27 to 2.54 cm (½ to 1 inch) in diameter (USEPA 1994i). The sample volume range is 0.5 to 1.0 liters (ASTM D 6232).

The concentric tube thief consists of two slotted telescoping tubes, which are constructed of stainless steel, brass, or other material. The outer tube has a conical pointed tip on one end, which allows the thief to penetrate the material being sampled. The thief is opened and closed by rotating the inner tube, and it is inserted into the material while in the closed position. Once inserted, the inner tube is rotated into the open position, and the device is wiggled to allow the material to enter the open slots. The thief then is closed and withdrawn.

Advantages
° Is a good direct push sampler for dry unconsolidated materials.
° Easy to use.

Figure E-15. Trier

Limitations
° May be difficult to decontaminate, depending on the matrix.
° Not recommended for sampling of moist or sticky materials.
° Does not collect samples containing all particle sizes if the diameter of the largest solid particle is greater than one-third of the slot width (ASTM D 6232). Most useful when the solids are no greater than 0.6 cm (1/4 inch) in diameter (USEPA 1994i).
° Depth of sample is limited by the length of the thief.
° Not recommended for use when volatiles are of interest; collects a somewhat disturbed sample, which may cause loss of some volatiles.
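The particle-size rules above are straightforward to apply as a screening check before fieldwork. In this sketch the slot width is an assumed example value, not a specification:

```python
# Screening checks for a concentric tube thief, after the rules cited
# from ASTM D 6232 and USEPA 1994i. Slot width is an assumed example.

def passes_slot_rule(max_particle_cm, slot_width_cm):
    """Largest particle must be no more than one-third of the slot width."""
    return max_particle_cm <= slot_width_cm / 3.0

def within_general_guideline(max_particle_cm):
    """Most useful when solids are no greater than 0.6 cm (1/4 inch)."""
    return max_particle_cm <= 0.6

print(passes_slot_rule(0.5, 1.8))     # True:  0.5 cm <= 1.8/3 cm
print(passes_slot_rule(0.8, 1.8))     # False: 0.8 cm >  1.8/3 cm
print(within_general_guideline(0.5))  # True
```

A waste that fails either check would call for a different device, such as a trier or a larger coring sampler.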

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° "Waste Pile Sampling" (USEPA 1994d)

E.4.4 Trier

A trier (Figure E-15) is a push coring device that resembles an elongated scoop and is used to sample moist or sticky solids with a particle diameter less than one-half the diameter of the tube portion. The trier can be used to sample soils and similar fine-grained cohesive materials. The typical sample volume range is 0.1 to 0.5 liters (ASTM D 6232).

A trier comprises a handle connected to a tube cut in half lengthwise, with a sharpened tip that allows it to cut into the material. Triers are made of stainless steel, PTFE-coated metal, or plastic. One should be selected whose materials of construction are compatible with the sampled material.

A trier, typically 61 to 100 cm long and 1.27 to 2.54 cm in diameter, is used as a vertical coring device when a relatively complete and cylindrical sample can be extracted.

The trier is pushed into the material to be sampled and turned one or two times to cut a core. The rotation is stopped with the open face pointing upward. The core is then carefully removed from the hole, preventing overburden material from becoming a part of the sample. The sample is inspected for irregularities (e.g., pebbles) or breakage. If breakage occurred and if the core does not satisfy minimum length requirements, discard it and extract another from an immediately adjacent location (ASTM D 5451). The sample is emptied into the appropriate container for analysis.

Advantages
° A good direct push sampler for moist or sticky materials.
° Lightweight, easy to use, and easy to decontaminate for reuse.

Limitations
° Limited to sample particle sizes within the diameter of the inserted tube and will not collect particles greater than the slot width.
° Not recommended for sampling of dry unconsolidated materials. (A concentric tube thief is good for such materials.)
° Only for surface sampling, and the depth of sample is limited by the length of the trier.

Other Guidance
° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232
° Standard Practice for Sampling Using a Trier Sampler, ASTM D 5451
° Standard Guide for Sampling of Drums and Similar Containers by Field Personnel, ASTM D 6063
° Standard Practice for Sampling Unconsolidated Solids in Drums or Similar Containers, ASTM D 5680

E.4.5 Thin-Walled Tube

A thin-walled tube (Figure E-16) is a type of push coring device recommended for sampling cohesive, unconsolidated solids, particularly soil. It is not recommended for gravel or rocky soil. The sample volume range is 0.5 to 5.0 liters (ASTM D 6232).

The tube generally is constructed of carbon steel or stainless steel but can be manufactured from other metals (ASTM D 4700). It is commonly 30 inches long and is readily available in 2-, 3-, and 5-inch outside diameters (ASTM D 4700). The tube is attached with set screws to a length of a solid or tubular rod, and the upper end of the rod, or sampler head, is threaded to accept a handle or extension rod. Typically, the length of the tube depends on the desired sampling depth. Its advancing end is beveled and has a cutting edge with a smaller diameter than the tube inside diameter. The tube can be used in conjunction with drills, from hand-held units to full-sized rigs.

Figure E-16. Thin-walled tube

The end of the sampler is pushed directly into the media using a downward force on the handle. It can be pushed downward by hand, with a jack-like system, or with a hydraulic piston. Once the desired depth is reached, the tube is twisted to break the continuity of the tip and is pulled from the media. The sample material is extruded into the sample container by forcing a rod through the tube. A paring device has been developed to remove the outer layer during extrusion (ASTM D 4700). Plastic and PTFE sealing caps for use after sampling are available for the 2-, 3-, and 5-inch tubes.

Advantages
° Readily available, inexpensive, and easy to use.
° Reusable and can be decontaminated.
° Obtains a relatively undisturbed sample.

Limitations

° Some thin-walled tubes are large and heavy.

° The material to be sampled must be of a physical consistency (cohesive solid material) that can be cored and retrieved within the tube. The tube cannot be used to sample gravel or rocky soils.

° Some volatile loss is possible when the sample is removed from the tube.

° The most disturbed portion, in contact with the tube, may be considered unrepresentative. Shorter tubes provide less-disturbed samples than longer tubes.

° Materials with particles larger than one-third of the inner diameter of the tube should not be sampled with a thin-walled tube.
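The one-third rule in the last limitation lends itself to a quick field check. A minimal sketch in Python (the function names and example dimensions are illustrative, not from this guidance):

```python
def max_particle_size_cm(tube_inner_diameter_cm: float) -> float:
    """Largest particle a thin-walled tube can core cleanly:
    one-third of the tube's inner diameter (the rule stated above)."""
    return tube_inner_diameter_cm / 3.0


def tube_suitable(tube_inner_diameter_cm: float, largest_particle_cm: float) -> bool:
    """True if the material's largest particles satisfy the one-third rule."""
    return largest_particle_cm <= max_particle_size_cm(tube_inner_diameter_cm)


# Example: a tube with a 5.0 cm inner diameter can core material whose
# largest particles are about 1.67 cm or smaller.
print(round(max_particle_size_cm(5.0), 2))  # 1.67
print(tube_suitable(5.0, 2.5))              # False
```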
Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Core Sampling of Submerged, Unconsolidated Sediments, ASTM D 4823

° Standard Practice for Thin-Walled Tube Geotechnical Sampling of Soils, ASTM D 1587

° Standard Guide for Soil Sampling from the Vadose Zone, ASTM D 4700

Figure E-17. Coring type sampler (with valve)
E.4.6 Coring Type Sampler (with Valve)

A coring type sampler with valve (Figure E-17) is a type of push coring device recommended for wet soil, and can also be used to sample unconsolidated solid waste, mixed-phase solid/liquid waste, and free-flowing powders. The coring device may be used in drums and small containers as well as tanks, lagoons, and waste impoundments. The sample volume range is 0.2 to 1.5 liters (ASTM D 6232).

The coring type sampler with valve is a stainless steel cylindrical sampler with a top cap, an extension with a cross handle, and a non-return valve at the lower end behind a coring or augering tip. The valve is a retaining device that holds the sample in place as the coring device is removed. Samples are normally collected in an optional liner. The sampler is operated by attaching a handle, or an extension with a handle, to the top of the coring device. The corer is lowered to the surface, pushed into the material being sampled, and removed. The top cap is removed and the contents are emptied into a sample container. Alternatively, the liner can be removed (with the sampled material retained inside) and capped on both ends for shipment to a laboratory.

Advantages

° Reusable and easily decontaminated.

° Provides a relatively undisturbed sample if not extruded.

° Can be hand operated and does not require significant physical strength.

Limitations

° Cannot be used in gravel, large-particle sediments, or sludges.

° When sampling for volatile organic compounds, it must be used with a liner and capped to minimize the loss of volatiles.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Core Sampling Submerged, Unconsolidated Sediments, ASTM D 4823

Figure E-18. Miniature core sampler (EnCore™ sampler)
E.4.7 Miniature Core Sampler

The miniature core sampler (Figure E-18) can be used to collect soil and waste samples for volatile organics analysis. Such devices include the Purge-and-Trap Soil Sampler™, the EnCore™ sampler, and a cut plastic syringe (see Section 6.0 of SW-846 Method 5035). A miniature core sampler is a single-use push coring sampling device that also can be used as an air-tight sample storage and shipping container. It collects a small, contained subsample and is particularly useful for the sampling and analysis of volatile organic compounds.

It is recommended for sampling soil, from the ground or the side of a trench, and may be used for sampling sediment and unconsolidated solid wastes. It cannot be used for sampling cemented material, consolidated material, or material having fragments coarse enough to interfere with proper coring. The EnCore™ sampler can be used to collect subsamples from soil cores and has a sample volume range of 0.01 to 0.05 liters (ASTM D 6232).

The device is available from the manufacturer in two sizes, for collection of 5- and 25-gram samples (assuming a soil density of 1.7 g/cm³). The size is chosen based on the sample size required by the analytical procedure.

SW-846 Method 5035, "Closed-System Purge-and-Trap and Extraction for Volatile Organics in Soil and Waste Samples," recommends that samples not be stored in the device longer than 48 hours prior to sample preparation for analysis. The manufacturer's instructions for sample extrusion should be followed carefully.
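The size choice above is simple mass-volume arithmetic (V = m/ρ). A minimal sketch in Python using the 1.7 g/cm³ soil density assumed above (the function name is illustrative, not from this guidance):

```python
SOIL_DENSITY_G_PER_CM3 = 1.7  # density assumed by the sizing note above


def sample_volume_cm3(mass_g: float,
                      density_g_per_cm3: float = SOIL_DENSITY_G_PER_CM3) -> float:
    """Volume occupied by a soil sample of the given mass: V = m / rho."""
    return mass_g / density_g_per_cm3


# The 5- and 25-gram sizes correspond to roughly 2.9 and 14.7 cm3 of soil.
print(round(sample_volume_cm3(5.0), 1))   # 2.9
print(round(sample_volume_cm3(25.0), 1))  # 14.7
```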
Advantages

° Maintains sample structure in a device that also can be used to store and transport the sample directly to the laboratory.

° Recommended for collecting samples for the analysis of volatile compounds. It collects a relatively undisturbed sample that remains contained prior to analysis, minimizing the loss of volatile compounds.

° Usually compatible with the chemical and physical characteristics of the sampled media.

° No significant physical limitations for its use.

° Cross-contamination should not be a concern if the miniature core sampler is certified clean by the manufacturer and employed as a single-use device.

Limitations

° Cannot be used to sample gravel or rocky soils.

° Instructions must be followed carefully for proper use, to avoid trapping air with the sample and to ensure that the sample does not compromise the seals.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Using the Disposable EnCore™ Sampler for Sampling and Storing Soil for Volatile Organic Analysis, ASTM D 6418

° Standard Guide for Sampling Waste and Soils for Volatile Organic Compounds, ASTM D 4547
Figure E-19. Modified syringe sampler

E.4.8 Modified Syringe Sampler

A modified syringe sampler (Figure E-19) is a push coring sampling device constructed by the user by modifying a plastic, single-use medical syringe. It can be used to provide a small subsample of soil, sediments, and unconsolidated solid wastes. It is sometimes used to subsample a larger core of soil. It is not recommended for sampling cemented material, consolidated material, or material having fragments coarse enough to interfere with proper coring. Unlike the EnCore™ sampler, it should not be used to store and ship a sample to the laboratory. Instead, the sample should be extruded into another container. Although the modified syringe sampler does not provide as contained a sample as the EnCore™ sampler, it can be used for sampling volatile compounds, as long as sample extrusion into another container is quickly and carefully executed. The modified syringe sampler has a volume range of 0.01 to 0.05 liters (ASTM D 6232).

A modified syringe sampler is constructed by cutting off the lower end of the syringe, where the needle attaches. The rubber cap is removed from the plunger, and the plunger is pushed in until it is flush with the cut end. For greater ease in pushing into the solid matrix, the front edge sometimes can be sharpened (ASTM D 4547). The syringe sampler is then pushed into the media to collect the sample, which then may be placed in a glass VOA vial for storage and transport to the laboratory. The sample is immediately extruded into the vial by gently pushing the plunger. The volume of material collected should not cause excessive stress on the device during intrusion into the material, or be so large that the sample falls apart easily during extrusion.

Advantages

° Obtains a relatively undisturbed profile sample.

° Can be used for the collection of samples for the analysis of volatile compounds as long as sample extrusion is quickly and carefully executed.

° No significant physical limitations for its use.

° Low-cost, single-use device.

Limitations

° Cannot be used to sample gravel or rocky soils.

° Material of construction may be incompatible with highly contaminated media.

° Care is required to ensure that the device is clean before use.

° The device cannot be used to store and transport a sample.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Sampling Waste and Soils for Volatile Organic Compounds, ASTM D 4547

Figure E-20. Bucket auger
E.5 Rotating Coring Devices

Rotating coring devices include equipment that obtains vertical columns of a solid sample through a rotating action. Some of these devices (such as augers) also can be used simply to bore a hole so that a sample can be collected at a certain depth using another piece of equipment. You will find summaries for the following rotating coring devices in this section:

E.5.1 Bucket Auger
E.5.2 Rotating Coring Device
E.5.1 Bucket Auger

The bucket auger (Figure E-20) is a hand-operated rotating coring device generally used to sample soil, sediment, or unconsolidated solid waste. It can be used to obtain samples from drums, storage containers, and waste piles. The sample volume range is 0.2 to 1.0 liters (ASTM D 6232).

The cutting head of the auger bucket is pushed and twisted by hand with a downward force into the ground and removed as the bucket is filled. The empty auger is returned to the hole and the procedure is repeated. The sequence is continued until the required depth is reached. The same bucket may be used to advance the hole if the vertical sample is a composite of all intervals; however, discrete grab samples should be collected in separate clean auger buckets. The top several inches of material should be removed from the bucket to minimize chances of cross-contamination of the sample from fall-in material from the upper portions of the hole.

Note that hand augering may be difficult in tight clays or cemented sands. At depths approaching 20 feet (6 m), the tension of hand auger extension rods may make operation of the auger too difficult. Powered methods are recommended if deeper samples are required (ASTM D 6232).

Advantages

° Reusable and easy to decontaminate.

° Easy to use and relatively quick for shallow subsurface samples.

° Allows the use of various auger heads to sample a wide variety of soil conditions (USEPA 1993c).

° Provides a large volume of sample in a short time.

Limitations

° Depth of sampling is limited to about 20 feet (6 m) below the surface.

° Not suitable for obtaining undisturbed samples.

° Requires considerable strength to operate and is labor intensive.

° Not ideal for sampling soils for volatile organic compounds.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Soil Investigation and Sampling by Auger Borings, ASTM D 1452

° Standard Guide for Soil Sampling from the Vadose Zone, ASTM D 4700

° Standard Practice for Sampling Unconsolidated Waste From Trucks, ASTM D 5658

° Standard Guide for Sampling of Drums and Similar Containers by Field Personnel, ASTM D 6063

° "Waste Pile Sampling" (USEPA 1994d)

° "Sediment Sampling" (USEPA 1994e)

Figure E-21. Rotating coring device

E.5.2 Rotating Coring Device

The rotating coring device (Figure E-21) collects vertical columns of a solid sample through a rotating action and can be used in sampling consolidated solid waste, soil, and sediment. The sample volume range is 0.5 to 1.0 liters (ASTM D 6232).

The rotating coring device consists of a diamond- or carbide-tipped open steel cylinder attached to an electric drill. The coring device may be operated with the drill hand-held or with the drill mounted on a stand. When the drill is on a portable stand, full-depth core samples can be obtained. The barrel is usually 1 to 1.5 feet long, and the barrel diameter ranges from 2 to 6 inches (ASTM D 6232 and ASTM D 5679). The rotating coring device may be used for surface or depth samples.

The rotating coring device is placed vertical to the surface of the media to be sampled, then turned on before contact with the surface. Uniform and continuous pressure is applied to the device until the specified depth is reached. The coring device is then withdrawn, and the sample is placed into a container for analysis, or the tube itself may be capped and sent to the laboratory. Capping the tube is preferred when sampling for volatile organic compounds. The rotating tube must be cooled and lubricated with water between samples.

Advantages

° Easy to decontaminate.

° Reusable.

° Can obtain a solid core sample.

Limitations

° Requires a battery or other source of power.

° Requires a supply of water, used for cooling the rotating tube.

° Not easy to operate.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Sampling Consolidated Solids in Drums or Similar Containers, ASTM D 5679

° "Drum Sampling" (USEPA 1994b)

° "Sediment Sampling" (USEPA 1994e)

Figure E-22. COLIWASA

E.6 Liquid Profile Devices

Liquid profile devices include equipment that can collect a vertical column of a liquid, sludge, or slurry sample. You will find summaries for the following liquid profile devices in this section:

E.6.1 Composite Liquid Waste Sampler (COLIWASA)
E.6.2 Drum Thief
E.6.3 Valved Drum Sampler
E.6.4 Plunger Type Sampler
E.6.5 Settleable Solids Profiler (Sludge Judge)

E.6.1 COLIWASA (Composite Liquid Waste Sampler)

The COLIWASA (Figure E-22) is a type of liquid profile sampling device used to obtain a vertical column of sampled material. A COLIWASA is recommended for sampling liquids, multi-layer liquid wastes, and mixed-phase solid/liquid wastes and is commonly used to sample containerized liquids, such as those in tanks and drums. It also may be used for sampling open bodies of stagnant liquids. The sample volume range is 0.5 to 3 liters (ASTM D 6232).

A COLIWASA can be constructed of polyvinyl chloride (PVC), glass, metal, PTFE, or any other material compatible with the sample being collected. In general, a COLIWASA comprises a tube with a tapered end and an inner rod that has some type of stopper on the end. The design can be modified or adapted to meet the needs of the sampler. One configuration comprises a piston valve attached by an inner rod to a locking mechanism at the other end. Designs are available for specific sampling situations (i.e., drums, tanks). COLIWASAs specifically designed for sampling liquids, viscous materials, and heavy sludges are also available. COLIWASAs come in a variety of diameters (0.5 to 2 inches) and lengths (4 to 20 feet) (ASTM D 6232).

COLIWASAs are available commercially with different types of stoppers and locking mechanisms, but all have the same operating principle. To draw a sample, the COLIWASA is slowly lowered into the sample at a right angle to the surface of the material. (If the COLIWASA sampler is lowered too fast, the level of material inside and outside the sampler may not be the same, causing incorrect proportions in the sample. In addition, the layers of multi-layered materials may be disturbed.) The sampler is opened at both ends as it is lowered to allow the material to flow through it. When the device reaches the desired sampling depth, the sampler is closed by the stopper mechanism and removed from the material. The sampled material is then transferred to a sample container by opening the COLIWASA. A COLIWASA can be reused following proper decontamination (reusable point sampler) or disposed of after use (single-use COLIWASA). The reusable point sampler is used in the same way as the single-use COLIWASA; however, it can also sample at a specific point in the liquid column.

Advantages

° Provides correct delimitation and extraction of waste (Pitard 1989).

° Easy to use.

° Inexpensive.

° Reusable.

° Single-use models are available.

Limitations

° May break if made of glass and used in consolidated matrices.

° Decontamination may be difficult.

° The stopper may not allow collection of material in the bottom of a drum.

° High-viscosity fluids are difficult to sample.

Other Guidance

° Standard Practice for Sampling with a Composite Liquid Waste Sampler (COLIWASA), ASTM D 5495

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Sampling Drums and Similar Containers by Field Personnel, ASTM D 6063

° Standard Practice for Sampling Single or Multilayered Liquids, With or Without Solids, in Drums or Similar Containers, ASTM D 5743

° "Drum Sampling" (USEPA 1994b)

° "Tank Sampling" (USEPA 1994c)

Figure E-23. Drum thief

E.6.2 Drum Thief

A drum thief (Figure E-23) is an open-ended tube and liquid profile sampling device that provides a vertical column of the sampled material. It is recommended for sampling liquids, multi-layer liquid wastes, and mixed-phase solid/liquid wastes and can be used to sample liquids in drums or similar containers. The typical sample volume range is 0.1 to 0.5 liters (ASTM D 6232).

Drum thieves can be made of glass, stainless steel, or any other suitable material. They are typically 6 mm to 16 mm in inside diameter and 48 inches long (USEPA 1994c). To sample liquids with low surface tension, a narrow bailer works best. In most cases, tubes with a 1-centimeter inside diameter work best. Wider tubes can be used to sample sludges.

The drum thief is lowered vertically into the material to be sampled, inserted slowly so that the level of material inside and outside the tube remains approximately the same. This avoids incorrect proportions in the sample. The upper end is then sealed with the thumb or a rubber stopper to hold the sample in the tube as it is removed from the container. The thief is emptied by removing the thumb or stopper.

Advantages

° Easy to use and inexpensive.

° Available in reusable and single-use models.

Limitations

° Sampling depth is limited to the length of the sampler.

° May not collect material in the bottom of a drum. The depth of unsampled material depends on the density, surface tension, and viscosity of the material being sampled.

° Highly viscous materials are difficult to sample.

° It may be difficult to retain the sample in the tube when sampling liquids of high specific gravity.

° If made of glass, it may break if used in consolidated matrices. In addition, chips and cracks in a glass drum thief may cause an imperfect seal.

° Decontamination is difficult.

° When sampling a drum, repeated use of the drum thief to obtain an adequate volume of sample may disturb the drum contents.

° Drum-size tubes have a small volume and sometimes require repeated use to obtain a sample. Two or more people may be required to use larger sizes.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Sampling of Drums and Similar Containers by Field Personnel, ASTM D 6063

° Standard Practice for Sampling Single or Multilayered Liquids, With or Without Solids, in Drums or Similar Containers, ASTM D 5743

° "Drum Sampling" (USEPA 1994b)

° "Tank Sampling" (USEPA 1994c)

Figure E-24. Valved drum sampler

E.6.3 Valved Drum Sampler

A valved drum sampler (Figure E-24) is a liquid profile device often used to sample liquids in drums or tanks; it provides a vertical column of the sampled material. A valved drum sampler is recommended for sampling liquids, multi-layered liquid wastes, and mixed-phase solid/liquid wastes. The typical sample volume range is 0.3 to 1.6 liters (ASTM D 6232).

The sampler can be constructed from PTFE for reuse or polypropylene for single use and comprises a tube fitted with a top plug and a bottom valve. A sliding indicator ring allows specific levels or fluid interfaces to be identified.

The valved drum sampler is open at both ends during sample collection and is lowered vertically into the material to be sampled. The sampler is inserted slowly to allow the level of material inside and outside the tube to equalize. Once the desired amount of sample is collected, the top plug and the bottom valve are closed. The top plug is closed manually, and the bottom valve is closed by pressing it against the side or bottom of the container. The sample is poured from the top of the sampler into a suitable container.

Advantages

° Easy to use, inexpensive, and unbreakable.

° Obtains samples to depths of about 8 feet (2.4 m) (ASTM D 6232).

° Reusable if made from PTFE (single-use if made from polypropylene) (ASTM D 6232).

Limitations

° Somewhat difficult to decontaminate.

° The bottom valve may prevent collection of the bottom 1.25 cm of material (ASTM D 6232).

° High-viscosity fluids are difficult to sample.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

Figure E-25. Plunger type sampler
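The 1.25 cm dead space noted in the limitations above translates into a nontrivial unsampled volume in a wide container. A minimal sketch in Python (the 57.2 cm drum inside diameter is an assumed example, not a figure from this guidance):

```python
import math


def unsampled_heel_liters(container_inner_diameter_cm: float,
                          dead_space_cm: float = 1.25) -> float:
    """Volume of material below the bottom valve that the sampler cannot
    collect: a cylinder with the container's cross-section and the valve's
    stand-off height (1.25 cm per the limitation above)."""
    radius_cm = container_inner_diameter_cm / 2.0
    volume_cm3 = math.pi * radius_cm ** 2 * dead_space_cm
    return volume_cm3 / 1000.0  # 1 L = 1000 cm3


# Example: with an assumed 57.2 cm drum inside diameter, roughly 3.2 L of
# material at the bottom stays out of reach of the valve.
print(round(unsampled_heel_liters(57.2), 1))  # 3.2
```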
E.6.4 Plunger Type Sampler

The plunger type sampler (Figure E-25) is a liquid profile sampling device used to collect a vertical column of liquid and is recommended for the sampling of single and multilayered liquids or mixtures of liquids and solids. The plunger type sampler can be used to collect samples from drums, surface impoundments, and tanks. Sample volume is at least 0.2 liters and ultimately depends on the size of the sample container (ASTM D 6232).

A plunger type sampler comprises a sample tube, a sample line or rod, a head section, and a plunger, and is made of HDPE, PTFE, or glass. A sample jar is connected to the head section. The sample tube is lowered into the liquid to the desired depth. The plunger is engaged into the tube to secure the sample within the tube, and the cord or rod is raised to transfer the sample directly into the sampling bottle or jar. The plunger can be pushed back down the sampling tube to reset the sampler.

Advantages

° Easy to use.

° Provides a sealed collection system.

° Relatively inexpensive and available in various lengths.

Limitations

° Care is needed when using a glass sampling tube.

° Decontamination may be difficult, particularly when a glass sampling tube is used.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Sampling Single or Multilayered Liquids, With or Without Solids, in Drums or Similar Containers, ASTM D 5743

Figure E-26. Settleable solids profiler
E.6.5 Settleable Solids Profiler (Sludge Judge)

The settleable solids profiler (Figure E-26), also known as the sludge judge, is used primarily to measure or sample settleable (suspended) solids found in sewage treatment plants, waste settling ponds, and impoundments containing waste. It also can be used to sample drums and tanks. It has a sample volume range of 1.3 to 4.0 liters (ASTM D 6232).

The sludge judge is made from clear PVC and has 1-foot depth markings on its 5-foot-long body sections. It has a check valve on the lower section and a cord on the upper section, and it is assembled, using the threaded connections of the sections, to the length needed for the sampling event. The sampler is lowered into the media and allowed to fill. A tug on the cord sets the check valve, and the sampler is removed from the sampled material. The level of settleable solids can be measured using the markings. The sampler is emptied by pressing in the protruding pin on the lower end.

Advantages

° Allows measurement of liquid/settleable solids columns of any length.

° Easy to assemble and use.

° Unbreakable in normal use, and reusable.

Limitations

° Suitable for sampling noncaustic liquids only.

° High-viscosity materials may be difficult to sample.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

Figure E-27. Bailer
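Readings taken from the profiler's depth markings are often reduced to a solids fraction of the liquid column. A minimal sketch in Python (the calculation and example depths are illustrative, not from this guidance):

```python
def settleable_solids_percent(solids_depth_ft: float,
                              total_liquid_depth_ft: float) -> float:
    """Settleable solids level as a percentage of the total liquid column,
    both depths read from the profiler's 1-foot markings."""
    if total_liquid_depth_ft <= 0:
        raise ValueError("total liquid depth must be positive")
    return 100.0 * solids_depth_ft / total_liquid_depth_ft


# Example: a 3-foot solids layer in a 12-foot liquid column is a 25% reading.
print(settleable_solids_percent(3.0, 12.0))  # 25.0
```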
E.7 Surface Sampling Devices

Surface sampling devices include equipment that by design is limited to sample collection at the surface of a material, or that can sample material of limited depth or width only. You will find summaries for the following surface sampling devices in this section:

E.7.1 Bailer
E.7.2 Dipper
E.7.3 Liquid Grab Sampler
E.7.4 Swing Jar Sampler
E.7.5 Spoons, Scoops, Trowels, and Shovels
E.7.1 Bailer

Bailers (Figure E-27) are designed for obtaining samples of ground water; however, they also can be used to obtain samples of liquids and multi-layered liquid wastes from tanks and surface impoundments. Bailers are not suitable for sampling sludges. The sample volume range is 0.5 to 2 liters (ASTM D 6232).

A bailer is a hollow tube with a check valve at the base (open bailer) or valves at both ends (point-source bailer). A bailer can be threaded in the middle so that extension tubes can be added to increase the sampling volume. It can be constructed of stainless steel, PVC, PTFE, or any other suitable material and is available in numerous sizes for use in a variety of well sizes. The bailer is attached to a line and gradually lowered into the sample. As the bailer is lowered, the bottom check valve allows water to flow through the tube. The bailer is then slowly raised to the surface. The weight of the water closes the bottom check valve. A point-source bailer allows sampling at a specific depth. The check valve at the top of the tube limits water or particles from entering the bailer as it is retrieved.

The bailer is emptied either by pouring from the top or through a bottom-emptying device. When using a top-emptying bailer, the bailer should be tipped slightly to allow a slow discharge into the sample container to minimize aeration. A bottom-emptying model has controlled-flow valves, which make it well suited for collecting samples for volatile organic analysis since agitation of the sample is minimal.

Advantages

° Easy to use, inexpensive, and does not require an external power source.

° Can be constructed of almost any material that is compatible with the parameters of interest.

° Relatively easy to decontaminate between samples. Single-use models are available.

° Bottom-emptying bailers with control valves can be used to obtain samples for volatile compound analysis.

Limitations

° Not designed to obtain samples from specific depths below the liquid surface (unless it is a point-source bailer).

° If using a top-emptying bailer, the sample may become aerated if care is not taken during transfer to the sample container.

° May disturb the sample in a water column if it is lowered too rapidly.

° High suspended solids content or freezing temperatures can impair operation of the check valves.

° One of the least preferred devices for obtaining samples of ground water for low-concentration analyses, due to imprecision and agitation of the sample (see USEPA 1992a and Puls and Barcelona 1996).

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Guide for Sampling Groundwater Monitoring Wells, ASTM D 4448

° "Tank Sampling" (USEPA 1994c)

Figure E-28. Dipper

E.7.2 Dipper

A dipper (Figure E-28) is a type of surface sampling device used to collect surface samples from drums, surface impoundments, tanks, pipes, and point source discharges. Sampling points are shallow (10 inches) and taken at, or just below, the surface. The typical sample volume range is 0.5 to 1.0 liters (ASTM D 6232).

A dipper comprises a glass, metal, or plastic beaker clamped to the end of a two- or three-piece telescoping aluminum or fiberglass pole, which serves as a handle. A dipper may vary in the number of assembled pieces. Some dippers have an adjustable clamp attached to the end of a piece of metal tubing. The tubing forms the handle; the clamp secures the beaker. Another type of dipper is a stainless steel scoop clamped to a movable bracket that is attached to a piece of rigid tube. The scoop may face either toward or away from the person collecting the sample, and the angle of the scoop to the pipe is adjustable. The dipper, when attached to a rigid tube, can easily reach 10 to 13 feet (3 to 4 m) away from the person collecting the samples (ASTM D 6232).

The dipper is used by slowly submerging the beaker end into the material (to minimize surface disturbance). It should be on its side so that the liquid runs into the container without swirling or bubbling. The beaker is filled and rotated up, then brought slowly to the surface. Dippers and their beakers should be compatible with the sampled material.

Advantages

° Inexpensive.

° Easy to construct and adapt to the sampling scenario by modifying the length of the tubing or the type of container.

Limitations

° Not appropriate for sampling subsurface layers or for characterizing discrete layers of stratified liquids.

° Can only be used to collect surface samples.

Other Guidance

° Standard Practice for Sampling with a Dipper or Pond Sampler, ASTM D 5358

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Sampling Wastes from Pipes and Other Point Discharges, ASTM D 5013

Figure E-29. Liquid grab sampler
E.7.3 Liquid Grab Sampler

A liquid grab sampler (Figure E-29) is a surface sampling device designed to collect samples at a specific shallow depth beneath the liquid surface. It can be used to collect samples of liquids or slurries from surface impoundments, tanks, and drums. Its sample volume range is from 0.5 to 1.0 liters (ASTM D 6232).

The liquid grab sampler is usually made from polypropylene or PTFE, with an aluminum or stainless steel handle and stainless steel fittings. The sampling jar is usually made of glass, although plastic jars are available. The jar is threaded into the sampler head assembly, then lowered by the sampler to the desired sampling position beneath the liquid surface. The valve is then opened by pulling up on a finger ring to fill the jar. The valve is closed before retrieving the sample.

Advantages

° Easy to use.

° The sample jar can be capped and used for transport to the laboratory, thus minimizing the loss of volatile organic compounds.

° The closed sampler prevents contaminants in upper layers from compromising the sample.

Limitations

° Care is required to prevent breakage of a glass sample jar.

° Materials of construction need to be compatible with the sampled media.

° Cannot be used to collect deep samples.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

Figure E-30. Swing jar sampler
E.7.4 Swing Sampler (Swing Jar Sampler)

The swing jar sampler (Figure E-30) is a surface sampler that may be used to sample liquids, powders, or small solids at distances of up to 12 feet (3.5 m). It can be used to sample many different types of units, including drums, surface impoundments, tanks, pipe/point source discharges, sampling ports, and storage bins. It has a sample volume range of 0.5 to 1.0 liters.

The swing jar sampler is normally used with high-density polyethylene sample jars and has an extendable aluminum handle with a pivot at the juncture of the handle and the jar holder. The jar is held in the holder with an adjustable clamp. The pivot allows samples to be collected at different angles.

Advantages

° Easy to use.

° Easily adapted to jars of different sizes and materials, which can be used to facilitate compatibility with the material to be sampled.

° Can be pivoted to collect samples at different angles.

° Can sample from a wide variety of locations and units.

Limitations

° Cannot collect discrete depth samples.

° Care is required to prevent breakage when using a glass sample jar.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

Figure E-31. Scoops
E.
7.5
Spoons,
Scoops,
Trowels,
and
Shovels
Spoons,
scoops,
trowels,
or
shovels
are
types
of
surface
sampling
devices
used
to
sample
sludge,
soil,
powder,
or
solid
wastes.
The
typical
sample
volume
range
is
0.1
to
0.6
liters
for
scoops
or
trowels
and
1.0
to
5.0
Liters
for
shovels
(ASTM
D
6232).
The
typical
sample
volume
for
a
spoon
is
10
to
100
grams
(USEPA
1993c).

Spoons, available in stainless steel or PTFE (reusable) or in plastic (disposable), easily sample small volumes of liquid or other waste from the ground or a container.

Scoop samplers provide best results when the material is uniform and may be the only sampler possible for materials containing fragments or chunks. The scoop size should be suitable for the size and quantity of the collected material. Scoops and trowels come in a variety of sizes and materials, although unpainted stainless steel is preferred (ASTM D 6232). Scoops may be attached to an extension, similar to the dipper, in order to reach a particular area. Scoops and trowels are used by digging and rotating the sampler. The scoop is used to remove a sample and transfer it into a sample container.

Shovels, usually made from stainless steel or suitable plastic materials, are typically used to collect surface samples or to remove overburden material so that a scoop may remove a sample.

Advantages

° A correctly designed scoop or spatula (i.e., with a flat bottom and vertical sides) is one of the preferred devices for sampling a one-dimensional mass of granular solids (see also Sections 6.3.2.1 and 7.3.3.3).

° Spoons, scoops, trowels, and shovels are reusable, easy to decontaminate, and do not require significant physical strength to use.

° Spoons and scoops are inexpensive and readily available.

° Spoons and scoops are easily transportable and often disposable -- hence, their use can reduce sampling time.

° Shovels are rugged and can be used to sample hard materials.

Limitations

° Spoons, scoops, trowels, and shovels are limited to shallow and surface sampling.

° Shovels may be awkward to handle and cannot be used to easily fill small sample containers.

° Sampling with a spoon, scoop, trowel, or shovel may cause loss of volatile organic compounds through disturbance of the media.

° Spoons, scoops, trowels, and shovels of incorrect design (e.g., with rounded bottoms) can introduce bias by preferentially selecting certain particle sizes.

Other Guidance

° Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities, ASTM D 6232

° Standard Practice for Sampling with a Scoop, ASTM D 5633

° "Waste Pile Sampling" (USEPA 1994d)

° "Sediment Sampling" (USEPA 1994e)
APPENDIX F

STATISTICAL METHODS

This appendix provides guidance on the statistical analysis of waste testing and environmental monitoring data. You should select the statistical test during the Data Quality Assessment (DQA) phase after you review the data quality objectives, the sampling design, and the characteristics of the data set. See guidance provided in Section 8.

The statistical methods in this appendix are appropriate for use in evaluating sample analysis results when comparing constituent concentrations in a waste or environmental medium to a fixed standard. Users of this guidance may have other objectives, such as comparing two populations, detecting trends, or characterizing the spatial pattern of contamination. If so, review other guidance or seek assistance from a professional environmental statistician.

Note that not all RCRA standards require the waste handler to use sampling, analysis, and statistical tests to measure compliance. However, if sampling and analysis are used by the waste handler to measure compliance with a RCRA standard, then statistical methods may be used to help quantify the uncertainty associated with the decisions made using the data -- even where there is no regulatory obligation to do so (see also Sections 2 and 3).

This appendix is divided into subsections that describe the following statistical methods:

F.1 Testing Distributional Assumptions
F.1.1 Overview and Recommendations
F.1.2 Shapiro-Wilk Test for Normality (n ≤ 50)
F.2 Confidence Limits for the Mean
F.2.1 Confidence Limits for the Mean of a Normal Distribution
F.2.2 Confidence Limits for a Normal Mean When Composite Sampling Is Used
F.2.3 Confidence Limits for a Lognormal Mean
F.2.4 Confidence Limits for the Mean of a Non-normal or Unknown Distribution
F.3 Tests for a Proportion or a Percentile
F.3.1 Parametric Upper Confidence Limits for an Upper Percentile
F.3.2 Using a Simple Exceedance Rule Method for Determining Compliance With a Fixed Standard
F.4 Treatment of Nondetects
F.4.1 Recommendations
F.4.2 Cohen's Adjustment

Table F-1 provides a summary of frequently used statistical equations. See Appendix G for statistical tables used with these methods.

Additional Guidance on the Statistical Analysis of Waste Testing and Environmental Monitoring Data
USEPA. 2000d. Guidance for Data Quality Assessment, EPA QA/G-9 (QA00 Version). EPA/600/R-96/084. Office of Research and Development, Washington, D.C.
Appendix F 242

Table F-1. Summary of Basic Statistical Terminology Applicable to Sampling Plans for Solid Waste

Variable (e.g., barium or endrin): $x$. Individual measurement of the variable: $x_i$.

Simple Random Sampling and Systematic Random Sampling

Mean of measurements generated from the samples (sample mean), $\bar{x}$ (Equation 1):
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
where $n$ = number of sample measurements.

Variance of sample, $s^2$ (Equation 2):
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$

Standard deviation of sample, $s$ (Equation 3):
$$s = \sqrt{s^2}$$

Standard error (also standard deviation of the mean), $s_{\bar{x}}$ (Equation 4):
$$s_{\bar{x}} = \frac{s}{\sqrt{n}}$$

Approximate number of samples to estimate the mean (financial constraints not considered; see Section 5.4.1), $n$ (Equation 8):
$$n = \frac{\left(z_{1-\alpha} + z_{1-\beta}\right)^2 s^2}{\Delta^2} + \frac{1}{2}\,z_{1-\alpha}^2$$
where the $z$ values are obtained from the last row of Table G-1 in Appendix G.

Approximate number of samples to test a proportion against a fixed standard (see Section 5.5.1), $n$ (Equation 15):
$$n = \left[\frac{z_{1-\alpha}\sqrt{GR\,(1-GR)} + z_{1-\beta}\sqrt{AL\,(1-AL)}}{GR - AL}\right]^2$$

Number of samples to test a proportion when the decision rule specifies zero nonconforming samples (see Section 5.5.2), $n$ (Equation 16):
$$n = \frac{\log(\alpha)}{\log(p)}$$
where $p$ equals the proportion of the waste or media exceeded by the largest sample.

Stratified Random Sampling (Proportional Allocation)

Arithmetic mean of the measurements generated from the samples obtained from each $h$th stratum, $\bar{x}_h$:
$$\bar{x}_h = \frac{1}{n_h}\sum_{i=1}^{n_h} x_{hi}$$
where $n_h$ = number of sample measurements obtained from each $h$th stratum.

Variance of measurements generated from the samples obtained from each $h$th stratum, $s_h^2$:
$$s_h^2 = \frac{1}{n_h - 1}\sum_{i=1}^{n_h}\left(x_{hi} - \bar{x}_h\right)^2$$

The weighting factor assigned to each $h$th stratum when stratified random sampling is used: $W_h$.

Overall sample mean using stratified random sampling, $\bar{x}_{st}$ (Equation 9):
$$\bar{x}_{st} = \sum_{h=1}^{L} W_h\,\bar{x}_h$$

Standard error of the mean for a stratified random sample, $s_{\bar{x}_{st}}$ (Equation 10):
$$s_{\bar{x}_{st}} = \sqrt{\sum_{h=1}^{L} \frac{W_h^2\, s_h^2}{n_h}}$$

Total number of samples to collect from a solid waste to estimate the mean using stratified random sampling (proportional allocation), $n$ (Equation 11):
$$n = \frac{\left(t_{1-\alpha,\,df} + t_{1-\beta,\,df}\right)^2 \sum_{h=1}^{L} W_h\, s_h^2}{\Delta^2}$$

Degrees of freedom associated with the $t$-quantile in Table G-1, Appendix G, when stratified random sampling is used, $df$ (Equation 12):
$$df = \frac{\left(\sum_{h=1}^{L} W_h\, s_h^2\right)^2}{\sum_{h=1}^{L} \dfrac{W_h^2\, s_h^4}{n W_h - 1}}$$
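As a small worked illustration of the simplest of these relations, Equation 16 can be solved directly for the number of samples needed when the decision rule allows zero exceedances. The sketch below is an illustration added alongside this guidance, not part of the original table; the function name is ours:

```python
import math

def n_zero_exceedance(confidence, p):
    """Smallest n such that, if all n sample results meet the standard,
    at least a fraction p of the waste or media meets it with the stated
    confidence (Equation 16: n = log(alpha)/log(p), alpha = 1 - confidence)."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(p))

# For 90-percent confidence that 99 percent of the waste complies:
print(n_zero_exceedance(0.90, 0.99))  # 230 samples
```

Rounding up is deliberate: the decision rule requires $p^n \le \alpha$, so any fractional result must be raised to the next whole sample.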
F.1 Testing Distributional Assumptions

F.1.1 Overview and Recommendations

The assumption of normality is very important, as it is the basis for many statistical tests. A normal distribution is a reasonable model of the behavior of certain random phenomena and often can be used to approximate other probability distributions. In addition, the Central Limit Theorem and other limit theorems state that as the sample size gets large, some of the sample summary statistics (such as the sample mean) behave as if they are normally distributed variables. As a result, a common assumption associated with parametric tests or statistical models is that the errors associated with data or a model follow a normal distribution.

While the assumption of a normal distribution is convenient for statistical testing purposes, it is not always appropriate. Sometimes data are highly skewed. In environmental applications, it is not unusual to encounter data that exhibit a lognormal distribution, in which the natural logarithms of the data exhibit a normal distribution. Statistical tests can be used to verify the assumption of normality or lognormality, but the conclusion of lognormality should not be based on the outcome of a statistical test alone. There are several physical phenomena that can cause the underlying distribution to appear lognormal when in fact it is not. For example, Singh, et al. (1997) note that the presence of a relatively small, highly contaminated area in an otherwise uncontaminated area can cause sampling results to indicate a lognormal distribution. In such a situation, it may be more appropriate to treat the areas as two separate decision units or use a stratified sampling design. In other cases, sampling bias may cause a population to appear lognormal. For example, analytical results could be skewed if highly concentrated portions of the waste are over- or under-represented by the sampling procedure.

There are many methods available for verifying the assumption of normality, ranging from simple to complex. This guidance recommends use of the Shapiro-Wilk test for normality. Use of the test is appropriate when the number of samples (n) is 50 or less. For n greater than 50, an alternative test for normality should be used. One alternative, presented in EPA's QA/G-9 guidance (USEPA 2000d) and the DataQUEST software (USEPA 1997b), is Filliben's Statistic (Filliben 1975). Refer to EPA's QA/G-9 guidance (USEPA 2000d) or EPA's statistical guidance for ground-water monitoring data (USEPA 1989b and 1992b) for other graphical and statistical goodness-of-fit tests.

F.1.2 Shapiro-Wilk Test for Normality (n ≤ 50)

Purpose and Background

This section provides the method for performing the Shapiro-Wilk test for normality. The test is easily performed using statistical software such as EPA's DataQUEST freeware (USEPA 1997b); however, the test also can be performed manually, as described here.

The Shapiro-Wilk test is recommended as a superior method for testing normality of the data. It is based on the premise that if the data are normally distributed, the ordered values should be highly correlated with corresponding quantiles (z-scores) taken from a normal distribution (Shapiro and Wilk 1965). In particular, the Shapiro-Wilk test gives substantial weight to evidence of non-normality in the tails of a distribution, where the robustness of statistical tests based on the normality assumption is most severely affected.

The Shapiro-Wilk test statistic (W) will tend to be large when a probability plot of the data indicates a nearly straight line. Only when the plotted data show significant bends or curves will the test statistic be small. The Shapiro-Wilk test is considered to be one of the very best tests of normality available (Miller 1986; Madansky 1988).

Procedure

Step 1. Order the data from least to greatest, labeling the observations as $x_i$ for $i = 1 \ldots n$. Using the notation $x_{(j)}$, let the $j$th order statistic from any data set represent the $j$th smallest value.

Step 2. Compute the differences $x_{(n-i+1)} - x_{(i)}$ for each $i = 1 \ldots n$. Then determine $k$ as the greatest integer less than or equal to $n/2$.

Step 3. Use Table G-4 in Appendix G to determine the Shapiro-Wilk coefficients, $a_{n-i+1}$, for $i = 1 \ldots k$. Note that while these coefficients depend only on the sample size ($n$), the order of the coefficients must be preserved when used in Step 4 below. The coefficients can be determined for any sample size from $n = 3$ up to $n = 50$.

Step 4. Compute the quantity $b$ given by the following formula:

$$b = \sum_{i=1}^{k} b_i = \sum_{i=1}^{k} a_{n-i+1}\left(x_{(n-i+1)} - x_{(i)}\right) \qquad \text{(Equation F.1)}$$

Note that the $b_i$ values are simply intermediate quantities represented by the terms in the sum of the right-hand expression in the above equation.

Step 5. Calculate the standard deviation ($s$) of the data set. Then compute the Shapiro-Wilk test statistic using the following formula:

$$W = \left[\frac{b}{s\sqrt{n-1}}\right]^2 \qquad \text{(Equation F.2)}$$

Step 6. Given the significance level ($\alpha$) of the test (for example, 0.01 or 0.05), determine the critical point of the Shapiro-Wilk test with $n$ observations using Table G-5 in Appendix G. Compare the Shapiro-Wilk statistic ($W$) against the critical point ($w_c$). If the test statistic exceeds the critical point, accept normality as a reasonable model for the underlying population; however, if $W < w_c$, reject the null hypothesis of normality at the $\alpha$-level and decide that another distributional model would provide a better fit.

An example calculation of the Shapiro-Wilk test for normality is presented in Box F.1.
Box F.1. Example Calculation of the Shapiro-Wilk Test for Normality

Use the Shapiro-Wilk test for normality to determine whether the following data set, representing the total concentration of nickel in a solid waste, follows a normal distribution: 58.8, 19, 39, 3.1, 1, 81.5, 151, 942, 262, 331, 27, 85.6, 56, 14, 21.4, 10, 8.7, 64.4, 578, and 637.

Solution

Step 1. Order the data from smallest to largest and list, as in Table F-2. Also list the data in reverse order alongside the first column.

Step 2. Compute the differences $x_{(n-i+1)} - x_{(i)}$ in column 4 of the table by subtracting column 2 from column 3. Because the total number of samples is $n = 20$, the largest integer less than or equal to $n/2$ is $k = 10$.

Step 3. Look up the coefficients $a_{n-i+1}$ from Table G-4 in Appendix G and list in column 5.

Step 4. Multiply the differences in column 4 by the coefficients in column 5 and add the first $k$ products ($b_i$) to get quantity $b$, using Equation F.1:

$$b = 0.4734\,(941.0) + 0.3211\,(633.9) + \cdots + 0.0140\,(2.8) = 932.88$$

Step 5. Compute the standard deviation of the sample, $s = 259.72$, then use Equation F.2 to calculate the Shapiro-Wilk test statistic:

$$W = \left[\frac{932.88}{259.72\sqrt{19}}\right]^2 = 0.679$$

Step 6. Use Table G-5 in Appendix G to determine the .01-level critical point for the Shapiro-Wilk test when $n = 20$. This gives $w_c = 0.868$. Then, compare the observed value of $W = 0.679$ to the 1-percent critical point. Since $W < 0.868$, the sample shows significant evidence of non-normality by the Shapiro-Wilk test. The data should be transformed using natural logs and rechecked using the Shapiro-Wilk test before proceeding with further statistical analysis.
Table F-2. Example Calculation of the Shapiro-Wilk Test (see example in Box F.1)

 i    x(i)    x(n-i+1)   x(n-i+1)-x(i)   a(n-i+1)   b_i
 1    1       942         941.0          0.4734     445.47
 2    3.1     637         633.9          0.3211     203.55
 3    8.7     578         569.3          0.2565     146.03
 4    10      331         321.0          0.2085      66.93
 5    14      262         248.0          0.1686      41.81
 6    19      151         132.0          0.1334      17.61
 7    21.4    85.6         64.2          0.1013       6.50
 8    27      81.5         54.5          0.0711       3.87
 9    39      64.4         25.4          0.0422       1.07
10    56      58.8          2.8          0.0140       0.04
11    58.8    56           –2.8          b = 932.88
12    64.4    39          –25.4
13    81.5    27          –54.5
14    85.6    21.4        –64.2
15    151     19         –132.0
16    262     14         –248.0
17    331     10         –321.0
18    578     8.7        –569.3
19    637     3.1        –633.9
20    942     1          –941.0
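The hand calculation in Box F.1 and Table F-2 can be reproduced in a few lines of code. The sketch below is an illustration added alongside this guidance, not part of the original document; it hard-codes the n = 20 Shapiro-Wilk coefficients from Table G-4, so for other sample sizes the appropriate coefficients would have to be looked up the same way:

```python
import math
import statistics

# Nickel concentrations from Box F.1
data = [58.8, 19, 39, 3.1, 1, 81.5, 151, 942, 262, 331,
        27, 85.6, 56, 14, 21.4, 10, 8.7, 64.4, 578, 637]

# Shapiro-Wilk coefficients a_{n-i+1} for n = 20 (from Table G-4)
a = [0.4734, 0.3211, 0.2565, 0.2085, 0.1686,
     0.1334, 0.1013, 0.0711, 0.0422, 0.0140]

x = sorted(data)                       # Step 1: order the data
n, k = len(x), len(x) // 2             # Step 2: k = floor(n/2)
diffs = [x[n - 1 - i] - x[i] for i in range(k)]
b = sum(ai * di for ai, di in zip(a, diffs))   # Step 4: Equation F.1
s = statistics.stdev(x)                        # Step 5: sample std. deviation
W = (b / (s * math.sqrt(n - 1))) ** 2          # Equation F.2

print(round(W, 3))   # 0.679, below the 0.01-level critical point of 0.868
```

Because W = 0.679 falls below the critical point w_c = 0.868, normality is rejected, matching the conclusion in Box F.1.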
F.2 Confidence Limits for the Mean

When a fixed standard or limit is meant to represent an average or mean concentration level, attainment of the standard can be measured using a confidence limit on the mean. A confidence limit is then compared with the fixed compliance limit. Under the null hypothesis that the mean concentration in the waste exceeds the standard unless proven otherwise, statistically significant evidence of compliance with the standard is shown if and only if the entire confidence interval lies below the standard. By implication, the key test then involves comparing the upper confidence limit (UCL) to the standard. In other words, the entire confidence interval must lie below the standard for the waste to be compliant with the standard. If the UCL exceeds the regulatory limit, on the other hand, we cannot conclude the mean concentration is below the standard.

F.2.1 Confidence Limits for the Mean of a Normal Distribution

Requirements and Assumptions

Confidence intervals for the mean of a normal distribution should be constructed only if the data pass a test of approximate normality or at least are reasonably symmetric. It is strongly recommended that a confidence interval not be constructed with fewer than four measurements, though the actual number of samples should be determined as part of the planning process. The reason for this is two-fold: (1) the formula for a normal-based confidence interval on the mean involves calculation of the sample standard deviation (s), which is used as an estimate of the underlying population standard deviation (this estimate may not be particularly accurate when the sample size is smaller than four), and (2) the confidence interval formula also involves a Student's t-quantile based on n - 1 degrees of freedom, where n equals the number of samples used in the calculation (see Table G-1 in Appendix G). When n is quite small, the t-quantile will be relatively large, leading to a much wider confidence interval than would be expected with a larger n. For example, at a 90-percent confidence level, the appropriate t-quantile would be t = 3.078 for n = 2, t = 1.638 for n = 4, and t = 1.415 for n = 8.

Procedure

Step 1. Check the $n$ sample concentrations for normality. If the normal model is acceptable, calculate the mean ($\bar{x}$) and standard deviation ($s$) of the data set. If the lognormal model provides a better fit, see Section F.2.3.

Step 2. Given the desired level of confidence, $(1-\alpha)$, calculate the upper confidence limit as follows:

$$UCL = \bar{x} + t_{1-\alpha,\,df}\,\frac{s}{\sqrt{n}} \qquad \text{(Equation F.3)}$$

where $t_{1-\alpha,\,df}$ is obtained from a Student's $t$-table (Table G-1) with the appropriate degrees of freedom. If simple random or systematic sampling is used, then $df = n - 1$.

If stratified random sampling is used, calculate the UCL as follows:

$$UCL = \bar{x}_{st} + t_{1-\alpha,\,df}\;s_{\bar{x}_{st}} \qquad \text{(Equation F.4)}$$

where $\bar{x}_{st}$ is the overall mean from Equation 9, $df$ is obtained from Equation 12, and the standard error ($s_{\bar{x}_{st}}$) is obtained from Equation 10 (see also Table F-1 for these equations).

Step 3. Compare the UCL calculated in Step 2 to the fixed standard. If the UCL is less than the standard, then you can conclude, with 100(1-α)% confidence, that the mean concentration of the constituent of concern is less than the standard. If, however, the upper confidence bound is greater than the standard, then there is not sufficient evidence that the mean is less than the standard.

An example calculation of the UCL on the mean is provided in Box F.2.

F.2.2 Confidence Limits for a Normal Mean When Composite Sampling Is Used

If a composite sampling strategy has been employed to obtain a more precise estimate of the mean, confidence limits can be calculated from the analytical results using the same procedure outlined above in Section F.2.1, except that $n$ represents the number of composite samples and $s$ represents the standard deviation of the $n$ composite samples.

F.2.3 Confidence Limits for a Lognormal Mean

If the results of a test for normality indicate the data set may have a lognormal distribution, and a confidence limit on the mean is desired, then a special approach is required. It is not correct to simply transform the data to the log scale, calculate a normal-based mean and confidence interval on the logged data, and transform the results back to the original scale. It is a common mistake to do so. Invariably, a transformation bias will be introduced and the approach will underestimate the mean and UCL. In fact, the procedure just described actually produces a confidence interval around the median of a lognormal population rather than the higher-valued mean.

To calculate a UCL on the mean for data that exhibit a lognormal distribution, this guidance recommends use of a procedure developed by Land (1971, 1975); however, as noted below, Land's procedure should be used with caution because it relies heavily on the lognormal assumption, and if that assumption is not true, the results may be substantially biased.

Requirements and Assumptions

Confidence intervals for the mean of a lognormal distribution should be constructed only if the data pass a test of approximate normality on the log-scale. While many environmental populations tend to follow the lognormal distribution, it is usually wisest to first test the data for normality on the original scale. If such a test fails, the data can then be transformed to the log-scale and retested.

Box F.2. Example Calculation of the UCL for a Normal Mean

A generator obtains ten samples of waste to demonstrate that the waste qualifies for the comparable fuels exclusion under 40 CFR 261.38. The samples are obtained using a simple random sampling design. Analysis of the samples for lead generated the following results: 16, 17.5, 21, 22, 23, 24, 24.5, 27, 31, and 38 ppm. The regulation requires comparison of a 95% UCL on the mean to the specification level. The specification level is 31 ppm.

Solution

Step 1. Using the Shapiro-Wilk test, we confirmed that the normal model is acceptable. The mean is calculated as 24.4 ppm and the standard deviation as 6.44 ppm.

Step 2. The RCRA regulations at 40 CFR 261.38(c)(8)(iii)(A) require that the determination be made with a level of confidence, 100(1-α)%, of 95 percent. We turn to Table G-1 (Appendix G) and find the Student's $t$ value is 1.833 for $n - 1 = 9$ degrees of freedom. The UCL is calculated as follows:

$$UCL = 24.4 + 1.833\,\frac{6.44}{\sqrt{10}} = 28.1$$

Step 3. We compare the limit calculated in Step 2 to the fixed standard. Because the UCL (28.1 ppm) is less than the regulatory level (31 ppm), we can conclude with at least 95-percent confidence that the mean concentration of the constituent in the waste is less than 31 ppm.
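The normal-theory UCL computation of Equation F.3 and Box F.2 can be sketched in code as follows. This is an illustration added alongside the guidance; the 95-percent one-sided Student's t quantile for 9 degrees of freedom is hard-coded from Table G-1:

```python
import math
import statistics

data = [16, 17.5, 21, 22, 23, 24, 24.5, 27, 31, 38]  # lead, ppm (Box F.2)
t_95_df9 = 1.833    # Student's t, 95% one-sided, df = n - 1 = 9 (Table G-1)

n = len(data)
xbar = statistics.mean(data)                 # 24.4 ppm
s = statistics.stdev(data)                   # about 6.44 ppm
ucl = xbar + t_95_df9 * s / math.sqrt(n)     # Equation F.3: about 28.1 ppm

print(ucl < 31)   # True: the UCL is below the 31 ppm specification level
```

Note that `statistics.stdev` uses the n - 1 divisor, matching Equation 2 in Table F-1.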

Cautionary Note: Even if a data set passes a test for normality on the log scale, do not proceed with calculation of the confidence limits using Land's procedure until you have considered the following:

° The skewness of the data set may be due to biased sampling, mixed distributions of multiple populations, or outliers, and not necessarily due to lognormally distributed data (see Singh, et al. 1997). Review the sampling approach, the physical characteristics of the waste or media, and recheck any unusually high values before computing the confidence limits. Where there is spatial clustering of sample data, declustering and distribution weighting techniques (Myers 1997) may also be appropriate.

° If the number of samples (n) is small, the confidence interval obtained by Land's procedure could be remarkably wide. Singh, et al. (1997) have recommended that Land's procedure not be used for cases in which the number of samples is less than 30. They argue that in many cases the resulting UCL will be an order of magnitude larger than the maximum observed data value. Even higher values for the UCL could be generated if the coefficient of variation (CV, or the standard deviation divided by the mean) is greater than 1.

If the lognormal distribution is the best fit, and the number of samples (n) is small, then Land's method (provided below) can still be used, though a "penalty" will be paid for the small sample size. If the number of samples is small and the distribution is skewed to the right, one of the following alternative approaches should be considered: (1) simply treat the data set as if the parent distribution were normal and use the parametric Student-t method to calculate confidence limits using the untransformed (original-scale) data, as described in Section F.2.1; if, however, this normal-theory approach is used with highly skewed data, the actual confidence level achieved by the test will be less than that desired (Porter, et al. 1997); or (2) construct UCLs on the mean using procedures such as the "bootstrap" or the "jackknife," as recommended by Singh, et al. (1997) (see Section F.2.4).

The approach for Land's "H-statistic" method is given below:

Procedure

Step 1. Test the data for normality on the log-scale. After determining that the lognormal distribution is a good fit, transform the data via logarithms (the natural log is used) and denote the transformed measurements by $y_i$.

Step 2. Compute the sample mean ($\bar{y}$) and the standard deviation ($s_y$) from the log-scale measurements.

Step 3. Obtain Land's bias-correction factor ($H_{1-\alpha}$) from Table G-6 in Appendix G,¹ where the correct factor depends on the sample size ($n$), the log-scale sample standard deviation ($s_y$), and the desired confidence level ($1-\alpha$).

Step 4. Plug all these factors into the equation given below for the UCL:

$$UCL_{1-\alpha} = \exp\left(\bar{y} + 0.5\,s_y^2 + \frac{s_y\,H_{1-\alpha}}{\sqrt{n-1}}\right) \qquad \text{(Equation F.5)}$$

¹ For a more extensive tabulation of Land's factors, see Land (1975) or Tables A10 through A13 in Gilbert (1987).
Step 5. Compare the UCL against the fixed standard. If the UCL is less than the standard, then you can conclude with 100(1-α)% confidence that the mean concentration of the constituent of concern is less than the standard. If, however, the upper confidence bound is greater than the standard, then there is not sufficient evidence that the mean is less than the standard.

An example calculation of the UCL on a lognormal mean is given in Box F.3.

Box F.3. Example Calculation of the UCL on a Lognormal Mean

This example is modified after an example provided in Supplemental Guidance to RAGS: Calculating the Concentration Term (USEPA 1992a).

The concentrations of lead (total, in mg/kg) in 31 soil samples obtained using a simple random sampling design are: 1, 3, 13, 14, 18, 20, 21, 36, 37, 41, 42, 45, 48, 59, 60, 110, 110, 111, 111, 136, 137, 140, 141, 160, 161, 200, 201, 230, 400, 1300, and 1400. Using these data, calculate a 90% UCL on the mean.

Solution

Step 1. Using the Shapiro-Wilk test, the natural logarithms of the data set are shown to exhibit a normal distribution. The data are then transformed to natural logs.

Step 2. The mean of the logged data is $\bar{y} = 4.222$. The standard deviation is $s_y = 1.509$.

Step 3. The bias-correction factor $H_{0.90} = 2.282$ is obtained from Table G-6 for $n = 31$ and a confidence level of 90 percent.

Step 4. Plug the factors into the equation for the upper confidence limit (UCL):

$$UCL_{0.90} = \exp\left(4.222 + 0.5\,(1.509)^2 + \frac{1.509\,(2.282)}{\sqrt{31-1}}\right) = \exp(5.989) = 399\ \text{mg/kg}$$

Step 5. The 90-percent UCL on the mean is 399 mg/kg.
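Box F.3 can be reproduced numerically as follows. This sketch is an illustration added alongside the guidance, with Land's H factor hard-coded from Table G-6 for n = 31 at 90-percent confidence:

```python
import math
import statistics

data = [1, 3, 13, 14, 18, 20, 21, 36, 37, 41, 42, 45, 48, 59, 60,
        110, 110, 111, 111, 136, 137, 140, 141, 160, 161, 200, 201,
        230, 400, 1300, 1400]           # lead, mg/kg (Box F.3)
H = 2.282                               # Land's factor, n = 31, 90% (Table G-6)

y = [math.log(v) for v in data]         # Step 1: natural-log transform
n = len(y)
ybar = statistics.mean(y)               # about 4.222
sy = statistics.stdev(y)                # about 1.509
ucl = math.exp(ybar + 0.5 * sy**2 + sy * H / math.sqrt(n - 1))  # Equation F.5

print(round(ucl))   # about 399 mg/kg
```

The exponent combines the log-scale mean, the lognormal bias-correction term $0.5\,s_y^2$, and Land's adjustment, so none of the three may be omitted.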
F.2.4 Confidence Limits for the Mean of a Non-normal or Unknown Distribution

If the assumption of a normal or lognormal distribution cannot be justified, then you may construct a UCL on the mean using one of several alternative methods described in this section.

Bootstrap or Jackknife Methods: Bootstrap and jackknife procedures, as discussed by Efron (1981) and Miller (1974), typically are nonparametric statistical techniques which can be used to reduce the bias of point estimates and construct approximate confidence intervals for parameters such as the population mean. These procedures require no assumptions regarding the statistical distribution (e.g., normal or lognormal) for the underlying population.

Using a computer, the bootstrap method randomly samples n values with replacement from the original set of n random observations. For each bootstrap sample, the mean (or some other statistic) is calculated. This process of "resampling" is repeated hundreds or perhaps thousands of times, and the multiple estimates of the mean are used to define the confidence limits on the mean. The jackknife approximates the bootstrap. Rather than resampling randomly from the entire sample as the bootstrap does, the jackknife takes the entire sample except for one value, and then calculates the statistic of interest. It repeats the process, each time leaving out a different value, and each time recalculating the test statistic.

Both the bootstrap and the jackknife methods require a great deal of computer power and, historically, have not been widely adopted by environmental statisticians (Singh, et al. 1997). However, with advances in computer power and availability of software, computationally intensive statistical procedures have become more practical and accessible. Users of this guidance interested in applying a "resampling" method such as the bootstrap or jackknife should check the capabilities of available software packages and consult with a professional statistician on the correct use and application of the procedures.
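A minimal version of the bootstrap resampling scheme described above can be sketched as follows. This is an illustration only, using a hypothetical function name; as the text advises, a production analysis should rely on vetted statistical software and professional review:

```python
import random
import statistics

def bootstrap_ucl_mean(data, confidence=0.90, n_boot=2000, seed=12345):
    """One-sided upper confidence limit on the mean via the percentile
    bootstrap: resample with replacement, recompute the mean each time,
    and take the desired upper percentile of the resampled means."""
    rng = random.Random(seed)   # fixed seed so the sketch is repeatable
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    return means[int(confidence * n_boot) - 1]

# Lead data from Box F.2; the bootstrap UCL lands near the normal-theory UCL
data = [16, 17.5, 21, 22, 23, 24, 24.5, 27, 31, 38]
print(bootstrap_ucl_mean(data))
```

With only ten observations the bootstrap distribution is coarse; the method is more defensible at the larger sample sizes discussed above.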

Nonparametric Confidence Limits: If the data are not assumed to follow a particular distribution, then it may not be possible to calculate a UCL on the mean using normal theory techniques. If, however, the data are non-normal but approximately symmetric, a nonparametric UCL on the median (or the 50th percentile) may serve as a reasonable alternative to calculation of a parametric UCL on the mean. One severe limitation of this approach is that it involves changing the parameter of interest (as determined in the DQO Process) from the mean to the median, potentially biasing the result if the distribution of the data is not symmetric. Accordingly, the procedure should be used with caution.

Lookup tables can be used to determine the confidence limits on the median (50th percentile). For example, see Conover (1999, Table A3) or Gilbert (1987, Table A14). In general, when the sample size is very small (e.g., less than about nine or ten samples) and the required level of confidence is high (e.g., 95 to 99 percent), the tables will designate the maximum value in the data set as the upper confidence limit. Conover (1999, page 143) gives a large-sample approximation for a confidence interval on a proportion (quantile). Methods also are given in Gilbert (1987, page 173), Hahn and Meeker (1991, page 83), and USEPA (1992i, page 5-30).
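The table-lookup approach for the median can also be computed directly from the binomial distribution, since the rank of the bounding order statistic depends only on n and the confidence level. The sketch below is an illustration added alongside the guidance (the function name is ours) and is not a substitute for the cited tables:

```python
import math

def median_ucl(data, confidence=0.95):
    """Upper one-sided nonparametric confidence limit on the median:
    the smallest order statistic x_(r) whose rank r satisfies
    P[Binomial(n, 1/2) <= r - 1] >= confidence."""
    xs = sorted(data)
    n = len(xs)
    cum = 0.0
    for r in range(1, n + 1):
        cum += math.comb(n, r - 1) * 0.5 ** n   # adds P[X = r - 1]
        if cum >= confidence:
            return xs[r - 1]
    # required confidence not attainable with this n; use the maximum,
    # as the lookup tables do for very small samples
    return xs[-1]

# Lead data from Box F.2: the 95% upper limit on the median is the
# 9th of the 10 ordered values
print(median_ucl([16, 17.5, 21, 22, 23, 24, 24.5, 27, 31, 38]))
```

This reproduces the behavior noted above: for very small n at high confidence, the rule falls back to the maximum observed value.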
F.3 Tests for a Proportion or Percentile

Some RCRA standards represent concentrations that should rarely or never be exceeded for the waste or media to comply with the standard. To measure compliance with such a standard, a waste handler may want to know with some specified level of confidence that a high proportion of the waste complies with the standard (or, conversely, that at most only a small proportion of all possible samples could exceed the standard). Two approaches are given for measuring compliance with such a standard:

1. Under the assumption of a normal distribution, use a parametric UCL on a percentile to demonstrate that the true pth percentile (xp) concentration in the set of all possible samples is less than the concentration standard. The method is given below in Section F.3.1.

2. By far the simplest method for testing proportions is to use an "exceedance rule," in which the proportion of the population with concentrations less than the standard can be estimated based on the total number of sample values and the number of those (if any) that exceed the standard. The exceedance rule method is given below in Section F.3.2.

If the number of samples is relatively large, then a "one-sample proportion test" also can be used to test a proportion against a fixed standard. The one-sample proportion test is described in Section 3.2.2.1 in Guidance for Data Quality Assessment, EPA QA/G-9 (QA00 Update) (USEPA 2000d).

F.3.1 Parametric Upper Confidence Limits for an Upper Percentile

If the study objective is to demonstrate that the true pth percentile (x_p) concentration in the set of all possible samples (of a given sample support) is less than the applicable standard or Action Level, then a UCL on the upper percentile can be used to determine attainment of the standard.
Requirements and Assumptions

The formulas for constructing a parametric UCL on an upper percentile assume that the data are at least approximately normally distributed. Therefore, such a limit should be constructed only if the data pass a test of normality. If the data are best fit by a lognormal distribution instead, the observations should first be transformed to the log-scale. Unlike confidence limits for a lognormal mean, no special equations are required to construct similar limits on an upper percentile. The same formula used when the data are normally distributed can be applied to the log-scale data. The only additional step is that the confidence interval limits must be reexponentiated before comparing them against the regulatory standard.
It is strongly recommended that a confidence limit not be constructed with fewer than four measurements, and preferably more (the actual number, however, should be determined during Step Seven of the DQO Process). There are three reasons for this: (1) The formula for a normal-based confidence interval on an upper percentile involves calculation of the sample standard deviation, s, which is used as an estimate of the underlying population standard deviation. This estimate may not be accurate when fewer than four samples are used. (2) The confidence interval formula also involves a special factor ("kappa," κ_{1-α,p}), which depends on both the desired confidence level (1-α) and the number of samples, n, used in the calculation. When n is quite small, the factor is more extreme, leading to a much wider confidence interval than would be expected with a larger n. For example, at a confidence level of 90 percent, the appropriate factor for an upper one-sided limit on the 99th percentile is κ = 18.50 when n = 2, κ = 5.438 when n = 4, and κ = 3.783 when n = 8. (3) The third reason is that the power of the test for normality or lognormality is very low with a small number of samples.
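To illustrate how quickly κ shrinks as n grows, the factor can be approximated with a well-known large-sample formula (see, e.g., Gilbert 1987). The sketch below is our own illustration using only the Python standard library; exact factors come from the noncentral t distribution, so Table G-2 in Appendix G should be used in practice.

```python
from statistics import NormalDist
import math

def kappa_approx(n, p, conf):
    """Approximate one-sided tolerance-limit factor kappa for the pth
    percentile at confidence level conf (large-sample approximation).
    Exact values, as tabulated in Table G-2, come from the noncentral
    t distribution."""
    z_p = NormalDist().inv_cdf(p)      # standard normal quantile for p
    z_c = NormalDist().inv_cdf(conf)   # quantile for the confidence level
    a = 1.0 - z_c ** 2 / (2.0 * (n - 1))
    b = z_p ** 2 - z_c ** 2 / n
    return (z_p + math.sqrt(z_p ** 2 - a * b)) / a

# Table G-2 gives 2.568 for n = 10, p = 0.95, 90-percent confidence;
# the approximation is close but not exact for small n
print(round(kappa_approx(10, 0.95, 0.90), 2))  # about 2.50
```

The approximation reproduces the qualitative behavior described above: the factor is largest for very small n and decreases steadily as n grows.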

Procedure

Step 1. First test the data for normality on the original scale. If a test of normality is passed, calculate the limit on the raw measurements. If the data violate the assumption of normality but pass a test of lognormality, calculate the limit using the log-scale data.

Step 2. If the data are normal, compute the mean and standard deviation of the raw data. If the data are consistent with lognormality instead, compute the mean and standard deviation after first transforming the data to the log-scale.
Step 3. Given the percentile (p) being estimated, the sample size (n), and the desired confidence level (1-α), use Table G-2 (in Appendix G) to determine the factor κ_{1-α,p} needed to construct the appropriate UCL. A one-sided upper confidence bound is then computed with the formula

    UL_{1-α}(x_p) = x̄ + κ_{1-α,p} · s        Equation F.6

where κ_{1-α,p} is the upper 1-α factor for the pth percentile with n sample measurements.
Again, if the data are lognormal instead of normal, the same formula would be used but with the log-scale mean and standard deviation substituted for the raw-scale values. Then the limit must be exponentiated to get the final upper confidence bound, as in the following formula for an upper bound with 100(1-α)% confidence:

    UL_{1-α}(x_p) = exp( ȳ + κ_{1-α,p} · s_y )        Equation F.7
Step 4. Compare the 100(1-α)% upper confidence bound against the fixed standard. If the upper limit exceeds the standard, then the standard is not met.

An example calculation of the UCL on a percentile is given in Box F.4.
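As an illustrative sketch (ours, not part of the guidance), Steps 2 through 4 can be reproduced in a few lines of Python. The data are those of Box F.4, with the nondetect already replaced by half the quantitation limit, and the factor 2.568 is taken from Table G-2 for n = 10, p = 0.95, and 90-percent confidence.

```python
import statistics

# Box F.4 data (mg/L TCLP lead); the nondetect <0.5 replaced by 0.25 (QL/2)
data = [0.25, 0.55, 0.60, 0.80, 0.90, 1.00, 1.50, 1.80, 2.00, 3.00]

kappa = 2.568  # Table G-2 factor for n = 10, p = 0.95, 1-alpha = 0.90

x_bar = statistics.mean(data)   # sample mean
s = statistics.stdev(data)      # sample standard deviation (n - 1 divisor)

# Equation F.6: one-sided upper confidence bound on the 95th percentile
ucl = x_bar + kappa * s
print(round(x_bar, 2), round(s, 3), round(ucl, 2))  # 1.24 0.836 3.39
```

For lognormal data, the same computation would be performed on log-transformed values and the resulting bound exponentiated, per Equation F.7.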
F.3.2 Using a Simple Exceedance Rule Method for Determining Compliance With a Fixed Standard

Some RCRA standards represent concentration limits that should never or rarely be exceeded, or waste properties that should never or rarely be exhibited, for the waste to comply with the standard. One of the simplest nonparametric methods for determining compliance with such a standard is to use an "exceedance rule" (USEPA 1989a). To apply this method, simply require that a number of samples be acquired and that zero or a small number (e.g., one) of the concentration measurements be allowed to exceed the standard. This kind of rule is easy to implement and evaluate once the data are collected. It only requires specification of a number of samples and the number of exceedances allowed (usually zero, for example, for compliance with the LDR concentration level treatment standards). Alternately, one can specify the statistical performance criteria in advance and then determine the number of samples required.
Box F.4. Example Calculation of a UCL on an Upper Percentile To Classify a Solid Waste

A secondary lead smelter produces a slag that under some operating conditions exhibits the Toxicity Characteristic (TC) for lead. The facility owner needs to classify a batch of waste as either hazardous or nonhazardous at the point of waste generation. During the planning process, the owner determined based on previous sampling studies that the constituent of interest is lead, that TCLP results for lead tend to exhibit a normal distribution, and that a sample size of ten 200-gram samples (not including QC samples) should satisfy the study objectives. The TC regulatory level for lead is 5 mg/L. The owner wants to determine, with 90-percent confidence, whether a large proportion (e.g., at least 95 percent) of all possible samples of the waste will be below the regulatory limit.

At the point of waste generation, the facility representative takes a series of systematic samples of the waste. The following sample analysis results were generated for ten samples analyzed for lead via the TCLP and SW-846 Method 6010B: <0.5, 0.55, 0.60, 0.80, 0.90, 1.00, 1.50, 1.80, 2.00, and 3.00 mg/L.

Calculate a 90-percent upper confidence limit on the 95th percentile.

Solution

Step 1. Based on the shape of the histogram and normal probability plot, the data were judged to exhibit a normal distribution. Therefore, we proceed with the calculation on the original (untransformed) scale.

Step 2. One value (10% of the measurements) is reported below the quantitation limit of 0.5 mg/L, so we replace that value with half the quantitation limit (0.25 mg/L) (see also Section F.4). The mean and standard deviation of the data set are then calculated as x̄ = 1.24 mg/L and s = 0.836.

Step 3. Use Table G-2 (in Appendix G) to determine the factor for n = 10 needed to construct a 90-percent UCL on the 95th percentile. The table indicates κ = 2.568. Plug x̄, s, and κ into Equation F.6 as follows:

    UL_{0.90}(x_{0.95}) = 1.24 + (0.836)(2.568) = 3.39 ≈ 3.4 mg/L

Step 4. All of the sample analysis results, as well as the UCL of 3.4 mg/L, are less than the TC regulatory limit of 5 mg/L TCLP for lead, and the owner concludes that the waste is a nonhazardous waste under RCRA. The owner also can conclude with at least 90-percent confidence that at least 95 percent of all possible sample analysis results representing the batch of waste in the roll-off bin are nonhazardous.
Requirements and Assumptions for Use of an Exceedance Rule

The method given here is a simple nonparametric method and requires only the ability to identify the number of samples in the data set and whether each sample analysis result complies with the applicable standard or does not. Unfortunately, this ease of use comes with a price: compared to parametric methods that assume underlying normality or lognormality of the data, the nonparametric method given here requires significantly more samples to achieve the same level of confidence.
Procedure

Step 1: Specify the degree of confidence desired, 100(1-α)%, and the proportion (p) of the population that must comply with the standard.

Step 2: If the decision rule permits no exceedance of the standard for any single sample in a set of samples, then obtain and analyze the number of samples (n) indicated in Table G-3a in Appendix G. If the decision rule permits a single exceedance of the standard in a set of samples, then obtain and analyze the number of samples (n) indicated in Table G-3b in Appendix G.

Step 3: Based on the number of samples obtained and the statistical performance required, determine whether the applicable standard has been attained.

An example application of the exceedance rule is given in Box F.5.
Box F.5: Example Application of a Simple Exceedance Rule

A facility has treated nonwastewater F003 solvent waste containing carbon disulfide to attain the LDR UTS. Samples of the treatment residue are obtained systematically as the waste treatment is completed. The treater wants to have at least 90% confidence that at least 90% of the batch of treated waste attains the standard. To comply with the LDR regulations, no samples can exceed the UTS. TCLP analyses for carbon disulfide in the treated waste are required to measure compliance with the treatment standard of 4.8 mg/L TCLP.

From Table G-3a we find that for a confidence level (1-α) of 0.90 (or 90%) and a proportion of 0.90, at least 22 samples are required. All sample analysis results must be less than or equal to the UTS of 4.8 mg/L TCLP for the statistical performance criteria to be achieved.

If only 9 samples are obtained (with all sample analysis results less than or equal to the standard), what level of confidence can the treater have that at least 90 percent (or p = 0.90) of all possible samples drawn from the waste meet the treatment standard?

From Table G-3a we find for p = 0.90 and n = 9 that 1-α = 0.60. Therefore, the confidence level 100(1-α)% equals only 60 percent.
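The arithmetic behind Table G-3a (zero exceedances allowed) follows from the binomial relation p^n ≤ α. The short Python sketch below is our own illustration, not part of the guidance; it reproduces the Box F.5 numbers, with the caveat that the exact confidence for n = 9 (about 61 percent) appears in the table rounded down to 60 percent.

```python
import math

def n_required(p, confidence):
    """Smallest n such that, with all n results passing, p**n <= 1 - confidence
    (the zero-exceedance rule underlying Table G-3a)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p))

def confidence_achieved(p, n):
    """Confidence that at least 100p% of the population complies,
    given n samples with no exceedances: 1 - p**n."""
    return 1.0 - p ** n

print(n_required(0.90, 0.90))                  # 22, as in Box F.5
print(round(confidence_achieved(0.90, 9), 3))  # 0.613 (tabulated as 0.60)
```

The same relation explains why the required n grows so quickly in the lower-right corner of Table G-3a: demonstrating 99 percent compliance with 99 percent confidence takes 459 passing samples.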
[2] Additional experience and research for EPA supporting development of guidance on the statistical analysis of ground-water monitoring data indicates that if the percentage of nondetects is as high as 20 to 25 percent, the results of parametric statistical tests may not be substantially affected if the nondetects are replaced with half their detection limits (Cameron 1999).
F.4 Treatment of Nondetects in Statistical Tests

Data generated from chemical analysis may fall below a limit of detection of the analytical procedure. These measurement data generally are described as "nondetects" (rather than as zero or not present), and the appropriate limit of detection, such as a quantitation limit, usually is reported. Data sets that include both detected and nondetected results are called "censored" data in the statistical literature.

If a relatively small proportion of the data are reported below detection limit values, replacing the nondetects with a small number (between zero and the detection limit) and proceeding with the usual analysis may be satisfactory. For moderate amounts of data below the detection limit, a more detailed adjustment is appropriate. In situations in which relatively large amounts of data below the detection limit exist, one may need only to consider whether the chemical was detected above some level or not.
F.4.1 Recommendations

If no more than approximately 15 percent of the sample analysis results are nondetect for a given constituent, then the results of parametric statistical tests will not be substantially affected if nondetects are replaced by half their detection limits (USEPA 1992b).[2] When more than approximately 15 percent of the samples are nondetect, however, the handling of nondetects is more crucial to the outcome of statistical procedures. Indeed, simple substitution methods tend to perform poorly in statistical tests when the nondetect percentage is substantial (Gilliom and Helsel 1986). If the percentage of nondetects is between approximately 15 percent and 50 percent, we recommend use of Cohen's Adjustment (see method below).
The conditions for use of Cohen's method, however, are limited (see method given below), and numerous alternative techniques for imputing left-censored data should be considered if the conditions for use of Cohen's method do not apply. Other methods available include iterative techniques, regression on order statistics (ROS) methods, bias-corrected maximum likelihood estimation (MLE), restricted MLE, modified probability plotting, Winsorization, and lognormalized statistics (EPA Delta log). A modified probability plotting method called Helsel's Robust Method (Helsel 1990) is a popular method that should be considered. Most of the above methods can be performed using publicly available software entitled UnCensor© v. 4.0 (Newman et al. 1995). Although EPA's Office of Solid Waste has not reviewed or tested this software, users of this guidance may be interested in investigating its use.
If the percentage of nondetects is greater than 50 percent, then the regression on order statistics method or Helsel's Robust Method should be considered. As an alternative, EPA's Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d), suggests the use of a test for proportions when the percentage of nondetects is greater than 50 percent but no more than 90 percent.

This guidance does not advocate a specific method for imputing or replacing values that lie below the limit of detection; however, whichever method is selected should be adequately supported. Table F-3 provides a summary of approaches for handling nondetects in statistical intervals.
Table F-3. Guidance for Handling Nondetects In Statistical Intervals

Percentage of Data Reported as "Nondetect"    Recommended Treatment of Data Set
< 15%          Replace nondetects with DL/2
15% to 50%     Cohen's adjustment, regression on order statistics, or Helsel's Robust Method
> 50%          Regression on order statistics, Helsel's Robust Method, or a test for proportions
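For the simplest case in Table F-3 (fewer than about 15 percent nondetects), the DL/2 substitution can be sketched as below. This is our own illustration; the string convention "<DL" used to flag nondetects is an assumption for the example, not a reporting standard.

```python
def substitute_nondetects(results):
    """Replace each nondetect, reported here as a string like "<0.5",
    with half its detection limit (DL/2); pass numeric detects through."""
    cleaned = []
    for r in results:
        if isinstance(r, str) and r.startswith("<"):
            cleaned.append(float(r[1:]) / 2.0)  # DL/2 substitution
        else:
            cleaned.append(float(r))
    return cleaned

# Box F.4 style data: one nondetect at a quantitation limit of 0.5 mg/L
print(substitute_nondetects(["<0.5", 0.55, 0.60, 0.80]))
```

As the surrounding text cautions, this simple substitution is appropriate only when the nondetect fraction is small; with more censoring, Cohen's Adjustment or one of the other listed methods should be used instead.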
Even with a small proportion of nondetects, care should be taken when choosing which value should be used as the "detection limit." There are important differences between the method detection limit (MDL) and the quantitation limit (QL) in characterizing "nondetect" concentrations. Many nondetects are characterized by analytical laboratories with one of three data qualifier flags: "U," "J," or "E." Samples with a "U" data qualifier represent "undetected" measurements, meaning that the signal characteristic of that analyte could not be observed or distinguished from "background noise" during lab analysis. Inorganic samples with an "E" flag and organic samples with a "J" flag may or may not be reported with an estimated concentration. If no concentration estimate is reported, these samples represent "detected but not quantified" measurements. In this case, the actual concentration is assumed to be positive, falling somewhere between zero and the QL. Because the actual concentration is unknown, the suggested substitution for parametric statistical procedures is to replace each nondetect qualified with an "E" or "J" with one-half the QL. Note, however, that "E" and "J" samples reported with estimated concentrations should be treated, for statistical purposes, as valid measurements. In other words, substitution of one-half the QL is not recommended for samples for which an estimated concentration is provided.

As a general rule, nondetect concentrations should not be assumed to be bounded above by the MDL. The MDL is usually estimated on the basis of ideal laboratory conditions with analyte samples that may or may not account for matrix or other interferences encountered when analyzing specific, actual field samples. For this reason, the QL typically should be taken as the most reasonable upper bound for nondetects when imputing specific concentration values to these measurements.

If a constituent is reported only as "not detected" and a detection limit is not provided, then review the raw data package to determine if a detection limit was provided. If not, identify the analytical method used and consult a qualified chemist for guidance on an appropriate QL.
F.4.2 Cohen's Adjustment

If a confidence limit is used to compare waste concentrations to a fixed standard, and a significant fraction of the observed measurements in the data set are reported as nondetects, simple substitution techniques (such as putting in half the detection limit for each nondetect) can lead to biased estimates of the mean or standard deviation and inaccurate confidence limits.
By using the detection limit and the pattern seen in the detected values, Cohen's method (Cohen 1959) attempts to reconstruct the key features of the original population, providing explicit estimates of the population mean and standard deviation. These, in turn, can be used to calculate confidence intervals, where Cohen's adjusted estimates are used as replacements for the sample mean and sample standard deviation.
Requirements and Assumptions

Cohen's Adjustment assumes that the common underlying population is normal. As such, the technique should only be used when the observed sample data approximately fit a normal model. Because the presence of a large fraction of nondetects will make explicit normality testing difficult, if not impossible, the most helpful diagnostic aid may be to construct a censored probability plot on the detected measurements. If the censored probability plot is clearly linear on the original measurement scale but not on the log-scale, assume normality for purposes of computing Cohen's Adjustment. If, however, the censored probability plot is clearly linear on the log-scale, but not on the original scale, assume the common underlying population is lognormal instead; then compute Cohen's Adjustment to the estimated mean and standard deviation on the log-scale measurements and construct the desired statistical interval using the algorithm for lognormally-distributed observations (see also Gilbert 1987, page 182).
When more than 50 percent of the observations are nondetect, the accuracy of Cohen's method breaks down substantially, getting worse as the percentage of nondetects increases. Because of this drawback, EPA does not recommend the use of Cohen's adjustment when more than half the data are nondetect. In such circumstances, one should consider an alternate statistical method (see Section F.4.1).

One other requirement of Cohen's method is that there be just a single censoring point. As discussed previously, data sets with multiple detection or quantitation limits may require a more sophisticated treatment.
Procedure

Step 1. Divide the data set into two groups: detects and nondetects. If the total sample size equals n, let m represent the number of detects and (n - m) represent the number of nondetects. Denote the ith detected measurement by x_i, then compute the mean and sample variance of the group of detects (i.e., the above-quantitation-limit data) using the following formulas:

    x̄_d = (1/m) Σ_{i=1}^{m} x_i        Equation F.8

and

    s_d² = (1/(m-1)) [ Σ_{i=1}^{m} x_i² - m·x̄_d² ]        Equation F.9
Step 2. Denote the single censoring point (e.g., the quantitation limit) by QL. Then compute the two intermediate quantities, h and γ, necessary to derive Cohen's adjustment, via the following equations:

    h = (n - m)/n        Equation F.10

and

    γ = s_d² / (x̄_d - QL)²        Equation F.11
Step 3. Use the intermediate quantities, h and γ, to determine Cohen's adjustment parameter λ̂ from Table G-7 in Appendix G. For example, if h = 0.4 and γ = 0.30, then λ̂ = 0.6713.
Step 4. Using the adjustment parameter λ̂ found in Step 3, compute adjusted estimates of the mean and standard deviation with the following formulas:

    x̄ = x̄_d - λ̂·(x̄_d - QL)        Equation F.12

and

    s = sqrt( s_d² + λ̂·(x̄_d - QL)² )        Equation F.13
Step 5. Once the adjusted estimates for the population mean and standard deviation are derived, these values can be substituted for the sample mean and standard deviation in formulas for the desired confidence limit.

An example calculation using Cohen's method is given in Box F.6.
Box F.6. An Example of Cohen's Method

To determine attainment of a cleanup standard at a SWMU, 24 random soil samples were obtained and analyzed for pentachlorophenol. Eight of the 24 values (33%) were below the matrix/laboratory-specific quantitation limit of 1 mg/L. The 24 values are <1.0, <1.0, <1.0, <1.0, <1.0, <1.0, <1.0, <1.0, 1.1, 1.5, 1.9, 2.0, 2.5, 2.6, 3.1, 3.3, 3.2, 3.2, 3.3, 3.4, 3.5, 3.8, 4.5, and 5.8 mg/L. Cohen's Method will be used to adjust the sample mean and standard deviation for use in constructing a UCL on the mean to determine if the cleanup has attained the site-specific risk-based cleanup standard of 5.0 mg/kg.

Solution

Step 1: The sample mean of the m = 16 values greater than the quantitation limit is x̄_d = 3.044.

Step 2: The sample variance of the 16 quantified values is s_d² = 1.325.

Step 3: h = (24 - 16)/24 = 0.333 and γ = 1.325/(3.044 - 1.0)² = 0.317.

Step 4: Table G-7 of Appendix G was used for h = 0.333 and γ = 0.317 to find the value of λ̂. Since the table does not contain these entries exactly, double linear interpolation was used to estimate λ̂ = 0.5223.

Step 5: The adjusted sample mean and standard deviation are then estimated as follows:

    x̄ = 3.044 - 0.5223·(3.044 - 1.0) = 1.976 ≈ 2.0

and

    s = sqrt( 1.325 + 0.5223·(3.044 - 1.0)² ) = 1.873 ≈ 1.9
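The Box F.6 computation can be sketched in Python as follows. This is our own illustration; the value λ̂ = 0.5223 is entered by hand because it comes from double linear interpolation in Table G-7 rather than from a closed-form expression.

```python
import math

QL = 1.0  # single censoring point (quantitation limit, mg/L)
detects = [1.1, 1.5, 1.9, 2.0, 2.5, 2.6, 3.1, 3.3, 3.2, 3.2,
           3.3, 3.4, 3.5, 3.8, 4.5, 5.8]
n = 24                    # total samples (8 nondetects below QL)
m = len(detects)          # number of detects

# Step 1: mean and variance of the detects (Equations F.8 and F.9)
x_d = sum(detects) / m
s2_d = (sum(x * x for x in detects) - m * x_d ** 2) / (m - 1)

# Step 2: intermediate quantities (Equations F.10 and F.11)
h = (n - m) / n
gamma = s2_d / (x_d - QL) ** 2

# Step 3: lambda-hat read from Table G-7 (double linear interpolation)
lam = 0.5223

# Step 4: adjusted estimates (Equations F.12 and F.13)
x_adj = x_d - lam * (x_d - QL)
s_adj = math.sqrt(s2_d + lam * (x_d - QL) ** 2)

print(round(x_adj, 2), round(s_adj, 2))  # 1.98 1.87
```

Per Step 5 of the procedure, x_adj and s_adj would then replace the sample mean and standard deviation in the chosen confidence limit formula.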
APPENDIX G

STATISTICAL TABLES
Table G-1. Critical Values of Student's t Distribution (One-Tailed)

Degrees of Freedom (see note); columns give values of t for (1-α) or (1-β):

df     0.70    0.75    0.80    0.85    0.90    0.95    0.975    0.99     0.995
1      0.727   1.000   1.376   1.963   3.078   6.314   12.706   31.821   63.657
2      0.617   0.816   1.061   1.386   1.886   2.920   4.303    6.965    9.925
3      0.584   0.765   0.978   1.250   1.638   2.353   3.182    4.541    5.841
4      0.569   0.741   0.941   1.190   1.533   2.132   2.776    3.747    4.604
5      0.559   0.727   0.920   1.156   1.476   2.015   2.571    3.365    4.032
6      0.553   0.718   0.906   1.134   1.440   1.943   2.447    3.143    3.707
7      0.549   0.711   0.896   1.119   1.415   1.895   2.365    2.998    3.499
8      0.546   0.706   0.889   1.108   1.397   1.860   2.306    2.896    3.355
9      0.543   0.703   0.883   1.100   1.383   1.833   2.262    2.821    3.250
10     0.542   0.700   0.879   1.093   1.372   1.812   2.228    2.764    3.169
11     0.540   0.697   0.876   1.088   1.363   1.796   2.201    2.718    3.106
12     0.539   0.695   0.873   1.083   1.356   1.782   2.179    2.681    3.055
13     0.538   0.694   0.870   1.079   1.350   1.771   2.160    2.650    3.012
14     0.537   0.692   0.868   1.076   1.345   1.761   2.145    2.624    2.977
15     0.536   0.691   0.866   1.074   1.340   1.753   2.131    2.602    2.947
16     0.535   0.690   0.865   1.071   1.337   1.746   2.120    2.583    2.921
17     0.534   0.689   0.863   1.069   1.333   1.740   2.110    2.567    2.898
18     0.534   0.688   0.862   1.067   1.330   1.734   2.101    2.552    2.878
19     0.533   0.688   0.861   1.066   1.328   1.729   2.093    2.539    2.861
20     0.533   0.687   0.860   1.064   1.325   1.725   2.086    2.528    2.845
21     0.532   0.686   0.859   1.063   1.323   1.721   2.080    2.518    2.831
22     0.532   0.686   0.858   1.061   1.321   1.717   2.074    2.508    2.819
23     0.532   0.685   0.858   1.060   1.319   1.714   2.069    2.500    2.807
24     0.531   0.685   0.857   1.059   1.318   1.711   2.064    2.492    2.797
25     0.531   0.684   0.856   1.058   1.316   1.708   2.060    2.485    2.787
26     0.531   0.684   0.856   1.058   1.315   1.706   2.056    2.479    2.779
27     0.531   0.684   0.855   1.057   1.314   1.703   2.052    2.473    2.771
28     0.530   0.683   0.855   1.056   1.313   1.701   2.048    2.467    2.763
29     0.530   0.683   0.854   1.055   1.311   1.699   2.045    2.462    2.756
30     0.530   0.683   0.854   1.055   1.310   1.697   2.042    2.457    2.750
40     0.529   0.681   0.851   1.050   1.303   1.684   2.021    2.423    2.704
60     0.527   0.679   0.848   1.046   1.296   1.671   2.000    2.390    2.660
120    0.526   0.677   0.845   1.041   1.289   1.658   1.980    2.358    2.617
∞      0.524   0.674   0.842   1.036   1.282   1.645   1.960    2.326    2.576

Note: For simple random or systematic sampling, degrees of freedom (df) are equal to the number of samples (n) collected from a solid waste and analyzed, less one (in other words, df = n - 1). If stratified random sampling is used, calculate df using Equation 12 or 14 in Section 5.4.2.2. The last row of the table (∞ degrees of freedom) gives the critical values for a standard normal distribution (z). For example, the value for z_{1-α} where α = 0.10 is found in the last row as 1.282.
Table G-2. Factors (κ_{1-α,p}) for Parametric Upper Confidence Bounds on Upper Percentiles (x_p)

              p = 0.80                                     p = 0.90
n    1-α:  0.800   0.900   0.950   0.975   0.990  |  0.800   0.900   0.950   0.975   0.990
2          3.417   6.987   14.051  28.140  70.376 |  5.049   10.253  20.581  41.201  103.029
3          2.016   3.039   4.424   6.343   10.111 |  2.871   4.258   6.155   8.797   13.995
4          1.675   2.295   3.026   3.915   5.417  |  2.372   3.188   4.162   5.354   7.380
5          1.514   1.976   2.483   3.058   3.958  |  2.145   2.742   3.407   4.166   5.362
6          1.417   1.795   2.191   2.621   3.262  |  2.012   2.494   3.006   3.568   4.411
7          1.352   1.676   2.005   2.353   2.854  |  1.923   2.333   2.755   3.206   3.859
8          1.304   1.590   1.875   2.170   2.584  |  1.859   2.219   2.582   2.960   3.497
9          1.266   1.525   1.779   2.036   2.391  |  1.809   2.133   2.454   2.783   3.240
10         1.237   1.474   1.703   1.933   2.246  |  1.770   2.066   2.355   2.647   3.048
11         1.212   1.433   1.643   1.851   2.131  |  1.738   2.011   2.275   2.540   2.898
12         1.192   1.398   1.593   1.784   2.039  |  1.711   1.966   2.210   2.452   2.777
13         1.174   1.368   1.551   1.728   1.963  |  1.689   1.928   2.155   2.379   2.677
14         1.159   1.343   1.514   1.681   1.898  |  1.669   1.895   2.109   2.317   2.593
15         1.145   1.321   1.483   1.639   1.843  |  1.652   1.867   2.068   2.264   2.521
16         1.133   1.301   1.455   1.603   1.795  |  1.637   1.842   2.033   2.218   2.459
17         1.123   1.284   1.431   1.572   1.753  |  1.623   1.819   2.002   2.177   2.405
18         1.113   1.268   1.409   1.543   1.716  |  1.611   1.800   1.974   2.141   2.357
19         1.104   1.254   1.389   1.518   1.682  |  1.600   1.782   1.949   2.108   2.314
20         1.096   1.241   1.371   1.495   1.652  |  1.590   1.765   1.926   2.079   2.276
21         1.089   1.229   1.355   1.474   1.625  |  1.581   1.750   1.905   2.053   2.241
22         1.082   1.218   1.340   1.455   1.600  |  1.572   1.737   1.886   2.028   2.209
23         1.076   1.208   1.326   1.437   1.577  |  1.564   1.724   1.869   2.006   2.180
24         1.070   1.199   1.313   1.421   1.556  |  1.557   1.712   1.853   1.985   2.154
25         1.065   1.190   1.302   1.406   1.537  |  1.550   1.702   1.838   1.966   2.129
26         1.060   1.182   1.291   1.392   1.519  |  1.544   1.691   1.824   1.949   2.106
27         1.055   1.174   1.280   1.379   1.502  |  1.538   1.682   1.811   1.932   2.085
28         1.051   1.167   1.271   1.367   1.486  |  1.533   1.673   1.799   1.917   2.065
29         1.047   1.160   1.262   1.355   1.472  |  1.528   1.665   1.788   1.903   2.047
30         1.043   1.154   1.253   1.344   1.458  |  1.523   1.657   1.777   1.889   2.030
31         1.039   1.148   1.245   1.334   1.445  |  1.518   1.650   1.767   1.877   2.014
32         1.035   1.143   1.237   1.325   1.433  |  1.514   1.643   1.758   1.865   1.998
33         1.032   1.137   1.230   1.316   1.422  |  1.510   1.636   1.749   1.853   1.984
34         1.029   1.132   1.223   1.307   1.411  |  1.506   1.630   1.740   1.843   1.970
35         1.026   1.127   1.217   1.299   1.400  |  1.502   1.624   1.732   1.833   1.957
36         1.023   1.123   1.211   1.291   1.391  |  1.498   1.618   1.725   1.823   1.945
37         1.020   1.118   1.205   1.284   1.381  |  1.495   1.613   1.717   1.814   1.934
38         1.017   1.114   1.199   1.277   1.372  |  1.492   1.608   1.710   1.805   1.922
39         1.015   1.110   1.194   1.270   1.364  |  1.489   1.603   1.704   1.797   1.912
40         1.013   1.106   1.188   1.263   1.356  |  1.486   1.598   1.697   1.789   1.902
41         1.010   1.103   1.183   1.257   1.348  |  1.483   1.593   1.691   1.781   1.892
42         1.008   1.099   1.179   1.251   1.341  |  1.480   1.589   1.685   1.774   1.883
43         1.006   1.096   1.174   1.246   1.333  |  1.477   1.585   1.680   1.767   1.874
44         1.004   1.092   1.170   1.240   1.327  |  1.475   1.581   1.674   1.760   1.865
45         1.002   1.089   1.165   1.235   1.320  |  1.472   1.577   1.669   1.753   1.857
46         1.000   1.086   1.161   1.230   1.314  |  1.470   1.573   1.664   1.747   1.849
47         0.998   1.083   1.157   1.225   1.308  |  1.468   1.570   1.659   1.741   1.842
48         0.996   1.080   1.154   1.220   1.302  |  1.465   1.566   1.654   1.735   1.835
49         0.994   1.078   1.150   1.216   1.296  |  1.463   1.563   1.650   1.730   1.828
50         0.993   1.075   1.146   1.211   1.291  |  1.461   1.559   1.646   1.724   1.821
55         0.985   1.063   1.130   1.191   1.266  |  1.452   1.545   1.626   1.700   1.790
60         0.978   1.052   1.116   1.174   1.245  |  1.444   1.532   1.609   1.679   1.764
65         0.972   1.043   1.104   1.159   1.226  |  1.437   1.521   1.594   1.661   1.741
70         0.967   1.035   1.094   1.146   1.210  |  1.430   1.511   1.581   1.645   1.722
75         0.963   1.028   1.084   1.135   1.196  |  1.425   1.503   1.570   1.630   1.704
80         0.959   1.022   1.076   1.124   1.183  |  1.420   1.495   1.559   1.618   1.688
85         0.955   1.016   1.068   1.115   1.171  |  1.415   1.488   1.550   1.606   1.674
90         0.951   1.011   1.061   1.106   1.161  |  1.411   1.481   1.542   1.596   1.661
95         0.948   1.006   1.055   1.098   1.151  |  1.408   1.475   1.534   1.586   1.650
100        0.945   1.001   1.049   1.091   1.142  |  1.404   1.470   1.527   1.578   1.639
Table G-2. Factors (κ_{1-α,p}) for Parametric Upper Confidence Bounds on Upper Percentiles (x_p) (continued)

              p = 0.95                                     p = 0.99
n    1-α:  0.800   0.900   0.950   0.975   0.990  |  0.800   0.900   0.950   0.975   0.990
2          6.464   13.090  26.260  52.559  131.426|  9.156   18.500  37.094  74.234  185.617
3          3.604   5.311   7.656   10.927  17.370 |  5.010   7.340   10.553  15.043  23.896
4          2.968   3.957   5.144   6.602   9.083  |  4.110   5.438   7.042   9.018   12.387
5          2.683   3.400   4.203   5.124   6.578  |  3.711   4.666   5.741   6.980   8.939
6          2.517   3.092   3.708   4.385   5.406  |  3.482   4.243   5.062   5.967   7.335
7          2.407   2.894   3.399   3.940   4.728  |  3.331   3.972   4.642   5.361   6.412
8          2.328   2.754   3.187   3.640   4.285  |  3.224   3.783   4.354   4.954   5.812
9          2.268   2.650   3.031   3.424   3.972  |  3.142   3.641   4.143   4.662   5.389
10         2.220   2.568   2.911   3.259   3.738  |  3.078   3.532   3.981   4.440   5.074
11         2.182   2.503   2.815   3.129   3.556  |  3.026   3.443   3.852   4.265   4.829
12         2.149   2.448   2.736   3.023   3.410  |  2.982   3.371   3.747   4.124   4.633
13         2.122   2.402   2.671   2.936   3.290  |  2.946   3.309   3.659   4.006   4.472
14         2.098   2.363   2.614   2.861   3.189  |  2.914   3.257   3.585   3.907   4.337
15         2.078   2.329   2.566   2.797   3.102  |  2.887   3.212   3.520   3.822   4.222
16         2.059   2.299   2.524   2.742   3.028  |  2.863   3.172   3.464   3.749   4.123
17         2.043   2.272   2.486   2.693   2.963  |  2.841   3.137   3.414   3.684   4.037
18         2.029   2.249   2.453   2.650   2.905  |  2.822   3.105   3.370   3.627   3.960
19         2.016   2.227   2.423   2.611   2.854  |  2.804   3.077   3.331   3.575   3.892
20         2.004   2.208   2.396   2.576   2.808  |  2.789   3.052   3.295   3.529   3.832
21         1.993   2.190   2.371   2.544   2.766  |  2.774   3.028   3.263   3.487   3.777
22         1.983   2.174   2.349   2.515   2.729  |  2.761   3.007   3.233   3.449   3.727
23         1.973   2.159   2.328   2.489   2.694  |  2.749   2.987   3.206   3.414   3.681
24         1.965   2.145   2.309   2.465   2.662  |  2.738   2.969   3.181   3.382   3.640
25         1.957   2.132   2.292   2.442   2.633  |  2.727   2.952   3.158   3.353   3.601
26         1.949   2.120   2.275   2.421   2.606  |  2.718   2.937   3.136   3.325   3.566
27         1.943   2.109   2.260   2.402   2.581  |  2.708   2.922   3.116   3.300   3.533
28         1.936   2.099   2.246   2.384   2.558  |  2.700   2.909   3.098   3.276   3.502
29         1.930   2.089   2.232   2.367   2.536  |  2.692   2.896   3.080   3.254   3.473
30         1.924   2.080   2.220   2.351   2.515  |  2.684   2.884   3.064   3.233   3.447
31         1.919   2.071   2.208   2.336   2.496  |  2.677   2.872   3.048   3.213   3.421
32         1.914   2.063   2.197   2.322   2.478  |  2.671   2.862   3.034   3.195   3.398
33         1.909   2.055   2.186   2.308   2.461  |  2.664   2.852   3.020   3.178   3.375
34         1.904   2.048   2.176   2.296   2.445  |  2.658   2.842   3.007   3.161   3.354
35         1.900   2.041   2.167   2.284   2.430  |  2.652   2.833   2.995   3.145   3.334
36         1.895   2.034   2.158   2.272   2.415  |  2.647   2.824   2.983   3.131   3.315
37         1.891   2.028   2.149   2.262   2.402  |  2.642   2.816   2.972   3.116   3.297
38         1.888   2.022   2.141   2.251   2.389  |  2.637   2.808   2.961   3.103   3.280
39         1.884   2.016   2.133   2.241   2.376  |  2.632   2.800   2.951   3.090   3.264
40         1.880   2.010   2.125   2.232   2.364  |  2.627   2.793   2.941   3.078   3.249
41         1.877   2.005   2.118   2.223   2.353  |  2.623   2.786   2.932   3.066   3.234
42         1.874   2.000   2.111   2.214   2.342  |  2.619   2.780   2.923   3.055   3.220
43         1.871   1.995   2.105   2.206   2.331  |  2.615   2.773   2.914   3.044   3.206
44         1.868   1.990   2.098   2.198   2.321  |  2.611   2.767   2.906   3.034   3.193
45         1.865   1.986   2.092   2.190   2.312  |  2.607   2.761   2.898   3.024   3.180
46         1.862   1.981   2.086   2.183   2.303  |  2.604   2.756   2.890   3.014   3.168
47         1.859   1.977   2.081   2.176   2.294  |  2.600   2.750   2.883   3.005   3.157
48         1.857   1.973   2.075   2.169   2.285  |  2.597   2.745   2.876   2.996   3.146
49         1.854   1.969   2.070   2.163   2.277  |  2.594   2.740   2.869   2.988   3.135
50         1.852   1.965   2.065   2.156   2.269  |  2.590   2.735   2.862   2.980   3.125
55         1.841   1.948   2.042   2.128   2.233  |  2.576   2.713   2.833   2.943   3.078
60         1.832   1.933   2.022   2.103   2.202  |  2.564   2.694   2.807   2.911   3.038
65         1.823   1.920   2.005   2.082   2.176  |  2.554   2.677   2.785   2.883   3.004
70         1.816   1.909   1.990   2.063   2.153  |  2.544   2.662   2.765   2.859   2.974
75         1.810   1.899   1.976   2.047   2.132  |  2.536   2.649   2.748   2.838   2.947
80         1.804   1.890   1.964   2.032   2.114  |  2.528   2.638   2.733   2.819   2.924
85         1.799   1.882   1.954   2.019   2.097  |  2.522   2.627   2.719   2.802   2.902
90         1.794   1.874   1.944   2.006   2.082  |  2.516   2.618   2.706   2.786   2.883
95         1.790   1.867   1.935   1.995   2.069  |  2.510   2.609   2.695   2.772   2.866
100        1.786   1.861   1.927   1.985   2.056  |  2.505   2.601   2.684   2.759   2.850
Table G-3a. Sample Size Required to Demonstrate With At Least 100(1-α)% Confidence That At Least 100p% of a Lot or Batch of Waste Complies With the Applicable Standard (No Samples Exceeding the Standard)

            1-α
100p%   0.50  0.55  0.60  0.65  0.70  0.75  0.80  0.85  0.90  0.95  0.99
0.50      1     2     2     2     2     2     3     3     4     5     7
0.55      2     2     2     2     3     3     3     4     4     6     8
0.60      2     2     2     3     3     3     4     4     5     6    10
0.65      2     2     3     3     3     4     4     5     6     7    11
0.70      2     3     3     3     4     4     5     6     7     9    13
0.75      3     3     4     4     5     5     6     7     9    11    17
0.80      4     4     5     5     6     7     8     9    11    14    21
0.85      5     5     6     7     8     9    10    12    15    19    29
0.90      7     8     9    10    12    14    16    19    22    29    44
0.95     14    16    18    21    24    28    32    37    45    59    90
0.99     69    80    92   105   120   138   161   189   230   299   459
Table G-3b. Sample Size Required to Demonstrate With At Least 100(1−α)% Confidence That At Least 100p% of a Lot or Batch of Waste Complies With the Applicable Standard (One Sample Exceeding the Standard)

 p \ 1−α   0.50  0.55  0.60  0.65  0.70  0.75  0.80  0.85  0.90  0.95  0.99
  0.50        3     4     4     4     5     5     5     6     7     8    11
  0.55        4     4     4     5     5     6     6     7     8     9    12
  0.60        4     5     5     5     6     6     7     8     9    10    14
  0.65        5     5     6     6     7     7     8     9    10    12    16
  0.70        6     6     7     7     8     9     9    10    12    14    20
  0.75        7     7     8     9     9    10    11    13    15    18    24
  0.80        9     9    10    11    12    13    14    16    18    22    31
  0.85       11    12    13    15    16    18    19    22    25    30    42
  0.90       17    19    20    22    24    27    29    33    38    46    64
  0.95       34    37    40    44    49    53    59    67    77    93   130
  0.99      168   184   202   222   244   269   299   337   388   473   662
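The sample sizes in Tables G-3a and G-3b follow from the binomial model; in the zero-exceedance case, n is the smallest sample size satisfying p^n ≤ 1 − confidence. As a minimal sketch (the function name is illustrative, and the tables remain the authoritative source), the Table G-3a entries can be reproduced as follows:

```python
import math

def n_zero_exceedances(p, confidence):
    # Smallest n such that, if none of n samples exceeds the standard,
    # we have at least the stated confidence that at least 100p% of the
    # lot or batch complies: requires p**n <= 1 - confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(p))
```

For example, n_zero_exceedances(0.90, 0.95) returns 29, matching the p = 0.90, 1−α = 0.95 entry of Table G-3a.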
Table G-4. Coefficients a_{n−i+1} for the Shapiro-Wilk Test for Normality

 i \ n      2      3      4      5      6      7      8      9     10
  1      .7071  .7071  .6872  .6646  .6431  .6233  .6052  .5888  .5739
  2             .0000  .1677  .2413  .2806  .3031  .3164  .3244  .3291
  3                           .0000  .0875  .1401  .1743  .1976  .2141
  4                                         .0000  .0561  .0947  .1224
  5                                                       .0000  .0399

 i \ n     11     12     13     14     15     16     17     18     19     20
  1      .5601  .5475  .5359  .5251  .5150  .5056  .4968  .4886  .4808  .4734
  2      .3315  .3325  .3325  .3318  .3306  .3290  .3273  .3253  .3232  .3211
  3      .2260  .2347  .2412  .2460  .2495  .2521  .2540  .2553  .2561  .2565
  4      .1429  .1586  .1707  .1802  .1878  .1939  .1988  .2027  .2059  .2085
  5      .0695  .0922  .1099  .1240  .1353  .1447  .1524  .1587  .1641  .1686
  6      .0000  .0303  .0539  .0727  .0880  .1005  .1109  .1197  .1271  .1334
  7                    .0000  .0240  .0433  .0593  .0725  .0837  .0932  .1013
  8                                  .0000  .0196  .0359  .0496  .0612  .0711
  9                                                .0000  .0163  .0303  .0422
 10                                                              .0000  .0140

 i \ n     21     22     23     24     25     26     27     28     29     30
  1      .4643  .4590  .4542  .4493  .4450  .4407  .4366  .4328  .4291  .4254
  2      .3185  .3156  .3126  .3098  .3069  .3043  .3018  .2992  .2968  .2944
  3      .2578  .2571  .2563  .2554  .2543  .2533  .2522  .2510  .2499  .2487
  4      .2119  .2131  .2139  .2145  .2148  .2151  .2152  .2151  .2150  .2148
  5      .1736  .1764  .1787  .1807  .1822  .1836  .1848  .1857  .1864  .1870
  6      .1399  .1443  .1480  .1512  .1539  .1563  .1584  .1601  .1616  .1630
  7      .1092  .1150  .1201  .1245  .1283  .1316  .1346  .1372  .1395  .1415
  8      .0804  .0878  .0941  .0997  .1046  .1089  .1128  .1162  .1192  .1219
  9      .0530  .0618  .0696  .0764  .0823  .0876  .0923  .0965  .1002  .1036
 10      .0263  .0368  .0459  .0539  .0610  .0672  .0728  .0778  .0822  .0862
 11      .0000  .0122  .0228  .0321  .0403  .0476  .0540  .0598  .0650  .0697
 12                    .0000  .0107  .0200  .0284  .0358  .0424  .0483  .0537
 13                                  .0000  .0094  .0178  .0253  .0320  .0381
 14                                                .0000  .0084  .0159  .0227
 15                                                              .0000  .0076

Source: After Shapiro and Wilk (1965)
Table G-4. Coefficients a_{n−i+1} for the Shapiro-Wilk Test for Normality (Continued)

 i \ n     31     32     33     34     35     36     37     38     39     40
  1      .4220  .4188  .4156  .4127  .4096  .4068  .4040  .4015  .3989  .3964
  2      .2921  .2898  .2876  .2854  .2834  .2813  .2794  .2774  .2755  .2737
  3      .2475  .2463  .2451  .2439  .2427  .2415  .2403  .2391  .2380  .2368
  4      .2145  .2141  .2137  .2132  .2127  .2121  .2116  .2110  .2104  .2098
  5      .1874  .1878  .1880  .1882  .1883  .1883  .1883  .1881  .1880  .1878
  6      .1641  .1651  .1660  .1667  .1673  .1678  .1683  .1686  .1689  .1691
  7      .1433  .1449  .1463  .1475  .1487  .1496  .1505  .1513  .1520  .1526
  8      .1243  .1265  .1284  .1301  .1317  .1331  .1344  .1356  .1366  .1376
  9      .1066  .1093  .1118  .1140  .1160  .1179  .1196  .1211  .1225  .1237
 10      .0899  .0931  .0961  .0988  .1013  .1036  .1056  .1075  .1092  .1108
 11      .0739  .0777  .0812  .0844  .0873  .0900  .0924  .0947  .0967  .0986
 12      .0585  .0629  .0669  .0706  .0739  .0770  .0798  .0824  .0848  .0870
 13      .0435  .0485  .0530  .0572  .0610  .0645  .0677  .0706  .0733  .0759
 14      .0289  .0344  .0395  .0441  .0484  .0523  .0559  .0592  .0622  .0651
 15      .0144  .0206  .0262  .0314  .0361  .0404  .0444  .0481  .0515  .0546
 16      .0000  .0068  .0131  .0187  .0239  .0287  .0331  .0372  .0409  .0444
 17                    .0000  .0062  .0119  .0172  .0220  .0264  .0305  .0343
 18                                  .0000  .0057  .0110  .0158  .0203  .0244
 19                                                .0000  .0053  .0101  .0146
 20                                                              .0000  .0049

 i \ n     41     42     43     44     45     46     47     48     49     50
  1      .3940  .3917  .3894  .3872  .3850  .3830  .3808  .3789  .3770  .3751
  2      .2719  .2701  .2628  .2667  .2651  .2635  .2620  .2604  .2589  .2574
  3      .2357  .2345  .2334  .2323  .2313  .2302  .2291  .2281  .2271  .2260
  4      .2091  .2085  .2078  .2072  .2065  .2058  .2052  .2045  .2038  .2032
  5      .1876  .1874  .1871  .1868  .1865  .1862  .1859  .1855  .1851  .1847
  6      .1693  .1694  .1695  .1695  .1695  .1695  .1695  .1693  .1692  .1691
  7      .1531  .1535  .1539  .1542  .1545  .1548  .1550  .1551  .1553  .1554
  8      .1384  .1392  .1398  .1405  .1410  .1415  .1420  .1423  .1427  .1430
  9      .1249  .1259  .1269  .1278  .1286  .1293  .1300  .1306  .1312  .1317
 10      .1123  .1136  .1149  .1160  .1170  .1180  .1189  .1197  .1205  .1212
 11      .1004  .1020  .1035  .1049  .1062  .1073  .1085  .1095  .1105  .1113
 12      .0891  .0909  .0927  .0943  .0959  .0972  .0986  .0998  .1010  .1020
 13      .0782  .0804  .0824  .0842  .0860  .0876  .0892  .0906  .0919  .0932
 14      .0677  .0701  .0724  .0745  .0775  .0785  .0801  .0817  .0832  .0846
 15      .0575  .0602  .0628  .0651  .0673  .0694  .0713  .0731  .0748  .0764
 16      .0476  .0506  .0534  .0560  .0584  .0607  .0628  .0648  .0667  .0685
 17      .0379  .0411  .0442  .0471  .0497  .0522  .0546  .0568  .0588  .0608
 18      .0283  .0318  .0352  .0383  .0412  .0439  .0465  .0489  .0511  .0532
 19      .0188  .0227  .0263  .0296  .0328  .0357  .0385  .0411  .0436  .0459
 20      .0094  .0136  .0175  .0211  .0245  .0277  .0307  .0335  .0361  .0386
 21      .0000  .0045  .0087  .0126  .0163  .0197  .0229  .0259  .0288  .0314
 22                    .0000  .0042  .0081  .0118  .0153  .0185  .0215  .0244
 23                                  .0000  .0039  .0076  .0111  .0143  .0174
 24                                                .0000  .0037  .0071  .0104
 25                                                              .0000  .0035
Table G-5. α-Level Critical Points for the Shapiro-Wilk Test

  n    α = 0.01   α = 0.05        n    α = 0.01   α = 0.05
  3      0.753      0.767        27      0.894      0.923
  4      0.687      0.748        28      0.896      0.924
  5      0.686      0.762        29      0.898      0.926
  6      0.713      0.788        30      0.900      0.927
  7      0.730      0.803        31      0.902      0.929
  8      0.749      0.818        32      0.904      0.930
  9      0.764      0.829        33      0.906      0.931
 10      0.781      0.842        34      0.908      0.933
 11      0.792      0.850        35      0.910      0.934
 12      0.805      0.859        36      0.912      0.935
 13      0.814      0.866        37      0.914      0.936
 14      0.825      0.874        38      0.916      0.938
 15      0.835      0.881        39      0.917      0.939
 16      0.844      0.887        40      0.919      0.940
 17      0.851      0.892        41      0.920      0.941
 18      0.858      0.897        42      0.922      0.942
 19      0.863      0.901        43      0.923      0.943
 20      0.868      0.905        44      0.924      0.944
 21      0.873      0.908        45      0.926      0.945
 22      0.878      0.911        46      0.927      0.945
 23      0.881      0.914        47      0.928      0.946
 24      0.884      0.916        48      0.929      0.947
 25      0.888      0.918        49      0.929      0.947
 26      0.891      0.920        50      0.930      0.947

Source: After Shapiro and Wilk (1965)
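The coefficients in Table G-4 and the critical points in Table G-5 are used together: the W statistic is computed from the ordered sample and the a_{n−i+1} coefficients, and normality is rejected at level α when W falls below the Table G-5 critical point for that n. A minimal sketch of the W calculation (the function name is illustrative):

```python
def shapiro_wilk_w(data, coeffs):
    # data:   the sample, in any order
    # coeffs: Table G-4 coefficients a(n-i+1) for i = 1 .. n//2
    x = sorted(data)
    n = len(x)
    mean = sum(x) / n
    # b = sum of a_i * (x_(n-i+1) - x_(i)) over the tabulated coefficients
    b = sum(a * (x[n - 1 - i] - x[i]) for i, a in enumerate(coeffs))
    ss = sum((v - mean) ** 2 for v in x)  # denominator: total sum of squares
    return (b * b) / ss
```

For a perfectly symmetric sample of n = 3 such as [1, 2, 3], with the single coefficient .7071, W is essentially 1.0, well above the α = 0.05 critical point of 0.767, so normality would not be rejected.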
Table G-6. Values of H_{1−α} = H_{0.90} for Calculating a One-Sided 90-Percent UCL on a Lognormal Mean

 s_y \ n     3      5      7     10     12     15     21     31     51    101
  0.10    1.686  1.438  1.381  1.349  1.338  1.328  1.317  1.308  1.301  1.295
  0.20    1.885  1.522  1.442  1.396  1.380  1.365  1.348  1.335  1.324  1.314
  0.30    2.156  1.627  1.517  1.453  1.432  1.411  1.388  1.370  1.354  1.339
  0.40    2.521  1.755  1.607  1.523  1.494  1.467  1.437  1.412  1.390  1.371
  0.50    2.990  1.907  1.712  1.604  1.567  1.532  1.494  1.462  1.434  1.409
  0.60    3.542  2.084  1.834  1.696  1.650  1.606  1.558  1.519  1.485  1.454
  0.70    4.136  2.284  1.970  1.800  1.743  1.690  1.631  1.583  1.541  1.504
  0.80    4.742  2.503  2.119  1.914  1.845  1.781  1.710  1.654  1.604  1.560
  0.90    5.349  2.736  2.280  2.036  1.955  1.880  1.797  1.731  1.672  1.621
  1.00    5.955  2.980  2.450  2.167  2.073  1.985  1.889  1.812  1.745  1.686
  1.25    7.466  3.617  2.904  2.518  2.391  2.271  2.141  2.036  1.946  1.866
  1.50    8.973  4.276  3.383  2.896  2.733  2.581  2.415  2.282  2.166  2.066
  1.75   10.48   4.944  3.877  3.289  3.092  2.907  2.705  2.543  2.402  2.279
  2.00   11.98   5.619  4.380  3.693  3.461  3.244  3.005  2.814  2.648  2.503
  2.50   14.99   6.979  5.401  4.518  4.220  3.938  3.629  3.380  3.163  2.974
  3.00   18.00   8.346  6.434  5.359  4.994  4.650  4.270  3.964  3.697  3.463
  3.50   21.00   9.717  7.473  6.208  5.778  5.370  4.921  4.559  4.242  3.965
  4.00   24.00  11.09   8.516  7.062  6.566  6.097  5.580  5.161  4.796  4.474
  4.50   27.01  12.47   9.562  7.919  7.360  6.829  6.243  5.763  5.354  4.989
  5.00   30.01  13.84  10.61   8.779  8.155  7.563  6.909  6.379  5.916  5.508
  6.00   36.02  16.60  12.71  10.50   9.751  9.037  8.248  7.607  7.048  6.555
  7.00   42.02  19.35  14.81  12.23  11.35  10.52   9.592  8.842  8.186  7.607
  8.00   48.03  22.11  16.91  13.96  12.96  12.00  10.94  10.08   9.329  8.665
  9.00   54.03  24.87  19.02  15.70  14.56  13.48  12.29  11.32  10.48   9.725
 10.0    60.04  27.63  21.12  17.43  16.17  14.97  13.64  12.56  11.62  10.79

Source: Land (1975)
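Table G-6 supplies the H value used in Land's method, in which the one-sided UCL on a lognormal mean is exp(ȳ + 0.5·s_y² + s_y·H/√(n−1)), where ȳ and s_y are the mean and standard deviation of the log-transformed data. A minimal sketch (H must still be taken, with interpolation if needed, from the table; the function name is illustrative):

```python
import math

def lognormal_ucl(logged_data, H):
    # logged_data: natural logs of the measurements
    # H: value from Table G-6 for this s_y and n
    n = len(logged_data)
    ybar = sum(logged_data) / n
    var = sum((v - ybar) ** 2 for v in logged_data) / (n - 1)  # s_y squared
    return math.exp(ybar + 0.5 * var + math.sqrt(var) * H / math.sqrt(n - 1))
```

For logged data [1, 2, 3] (so ȳ = 2, s_y = 1, n = 3), Table G-6 gives H = 5.955 and the UCL is exp(2.5 + 5.955/√2).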
Table G-7. Values of the Parameter λ̂ for Cohen's Adjustment for Nondetected Values

 γ \ h     .01      .02      .03      .04      .05      .06      .07      .08     .09     .10     .15     .20
 .00    .010100  .020400  .030902  .041583  .052507  .063625  .074953  .08649  .09824  .11020  .17342  .24268
 .05    .010551  .021294  .032225  .043350  .054670  .066159  .077909  .08983  .10197  .11431  .17925  .25033
 .10    .010950  .022082  .033398  .044902  .056596  .068483  .080563  .09285  .10534  .11804  .18479  .25741
 .15    .011310  .022798  .034466  .046318  .058356  .070586  .083009  .09563  .10845  .12148  .18985  .26405
 .20    .011642  .023459  .035453  .047829  .059990  .072539  .085280  .09822  .11135  .12469  .19460  .27031
 .25    .011952  .024076  .036377  .048858  .061522  .074372  .087413  .10065  .11408  .12772  .19910  .27626
 .30    .012243  .024658  .037249  .050018  .062969  .076106  .089433  .10295  .11667  .13059  .20338  .28193
 .35    .012520  .025211  .038077  .051120  .064345  .077736  .091355  .10515  .11914  .13333  .20747  .28737
 .40    .012784  .025738  .038866  .052173  .065660  .079332  .093193  .10725  .12150  .13595  .21129  .29250
 .45    .013036  .026243  .039624  .053182  .066921  .080845  .094958  .10926  .12377  .13847  .21517  .29765
 .50    .013279  .026728  .040352  .054153  .068135  .082301  .096657  .11121  .12595  .14090  .21882  .30253
 .55    .013513  .027196  .041054  .055089  .069306  .083708  .098298  .11208  .12806  .14325  .22225  .30725
 .60    .013739  .027849  .041733  .055995  .070439  .085068  .099887  .11490  .13011  .14552  .22578  .31184
 .65    .013958  .028087  .042391  .056874  .071538  .086388  .10143   .11666  .13209  .14773  .22910  .31630
 .70    .014171  .028513  .043030  .057726  .072505  .087670  .10292   .11837  .13402  .14987  .23234  .32065
 .75    .014378  .029927  .043652  .058556  .073643  .088917  .10438   .12004  .13590  .15196  .23550  .32489
 .80    .014579  .029330  .044258  .059364  .074655  .090133  .10580   .12167  .13775  .15400  .23858  .32903
 .85    .014773  .029723  .044848  .060153  .075642  .091319  .10719   .12225  .13952  .15599  .24158  .33307
 .90    .014967  .030107  .045425  .060923  .075606  .092477  .10854   .12480  .14126  .15793  .24452  .33703
 .95    .015154  .030483  .045989  .061676  .077549  .093611  .10987   .12632  .14297  .15983  .24740  .34091
1.00    .015338  .030850  .046540  .062413  .078471  .094720  .11116   .12780  .14465  .16170  .25022  .34471
Table G-7. Values of the Parameter λ̂ for Cohen's Adjustment for Nondetected Values (Continued)

 γ \ h    .25     .30     .35     .40     .45     .50     .55    .60    .65    .70    .80    .90
 .05   .32793  .4130   .5066   .6101   .7252   .8540   .9994  1.166  1.358  1.585  2.203  3.314
 .10   .33662  .4233   .5184   .6234   .7400   .8703  1.017   1.185  1.379  1.608  2.229  3.345
 .15   .34480  .4330   .5296   .6361   .7542   .8860  1.035   1.204  1.400  1.630  2.255  3.376
 .20   .35255  .4422   .5403   .6483   .7673   .9012  1.051   1.222  1.419  1.651  2.280  3.405
 .25   .35993  .4510   .5506   .6600   .7810   .9158  1.067   1.240  1.439  1.672  2.305  3.435
 .30   .36700  .4595   .5604   .6713   .7937   .9300  1.083   1.257  1.457  1.693  2.329  3.464
 .35   .37379  .4676   .5699   .6821   .8060   .9437  1.098   1.274  1.475  1.713  2.353  3.492
 .40   .38033  .4735   .5791   .6927   .8179   .9570  1.113   1.290  1.494  1.732  2.376  3.520
 .45   .38665  .4831   .5880   .7029   .8295   .9700  1.127   1.306  1.511  1.751  2.399  3.547
 .50   .39276  .4904   .5967   .7129   .8408   .9826  1.141   1.321  1.528  1.770  2.421  3.575
 .55   .39679  .4976   .6061   .7225   .8517   .9950  1.155   1.337  1.545  1.788  2.443  3.601
 .60   .40447  .5045   .6133   .7320   .8625  1.007   1.169   1.351  1.561  1.806  2.465  3.628
 .65   .41008  .5114   .6213   .7412   .8729  1.019   1.182   1.368  1.577  1.824  2.486  3.654
 .70   .41555  .5180   .6291   .7502   .8832  1.030   1.195   1.380  1.593  1.841  2.507  3.679
 .75   .42090  .5245   .6367   .7590   .8932  1.042   1.207   1.394  1.608  1.851  2.528  3.705
 .80   .42612  .5308   .6441   .7676   .9031  1.053   1.220   1.408  1.624  1.875  2.548  3.730
 .85   .43122  .5370   .6515   .7781   .9127  1.064   1.232   1.422  1.639  1.892  2.568  3.754
 .90   .43622  .5430   .6586   .7844   .9222  1.074   1.244   1.435  1.653  1.908  2.588  3.779
 .95   .44112  .5490   .6656   .7925   .9314  1.085   1.255   1.448  1.668  1.924  2.607  3.803
1.00   .44592  .5548   .6724   .8005   .9406  1.095   1.287   1.461  1.882  1.940  2.626  3.827
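Table G-7's λ̂ is indexed by h (the fraction of nondetects) and γ, which is computed from the detected values only. Assuming the convention used in EPA's Guidance for Data Quality Assessment, γ = s_d²/(x̄_d − DL)², the adjusted mean is x̄_d − λ̂(x̄_d − DL), and the adjusted variance is s_d² + λ̂(x̄_d − DL)². A minimal sketch under that assumption (the function name is illustrative):

```python
def cohen_adjust(mean_det, var_det, detection_limit, lam):
    # mean_det, var_det: mean and variance of the DETECTED values only
    # lam: lambda-hat read from Table G-7 for h and gamma
    diff = mean_det - detection_limit
    adjusted_mean = mean_det - lam * diff
    adjusted_var = var_det + lam * diff ** 2
    return adjusted_mean, adjusted_var
```

For example, with x̄_d = 10, s_d² = 4, DL = 5, and λ̂ = 0.2, the adjusted mean is 9.0 and the adjusted variance is 9.0.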
APPENDIX H

STATISTICAL SOFTWARE

Since publication of Chapter Nine ("Sampling Plan") of SW-846 in 1986, great advances have been made in desktop computer hardware and software. In implementing the procedures recommended in this chapter, you should take advantage of the powerful statistical software now available for low cost or no cost. A number of useful "freeware" packages are available from EPA and other organizations, and many are downloadable from the Internet. Commercially available software also may be used.
This appendix provides a list of software that you might find useful. EPA Guidance for Quality Assurance Project Plans, EPA QA/G-5 (USEPA 1998a), also provides an extensive list of software that can assist you in developing and preparing a quality assurance project plan.
Sampling Design Software

Decision Error Feasibility Trials (DEFT)*
This software package allows quick generation of cost information about several simple sampling designs based on DQO constraints, which can be evaluated to determine their appropriateness and feasibility before the sampling and analysis design is finalized. This software supports the Guidance for the Data Quality Objectives Process, EPA QA/G-4 (USEPA 2000b), which provides general guidance to organizations developing data quality criteria and performance specifications for decision making. The Data Quality Objectives Decision Error Feasibility Trials Software (DEFT) - User's Guide (EPA/240/B-01/007) contains detailed instructions on how to use DEFT software and provides background information on the sampling designs that the software uses.
Download from EPA's World Wide Web site at: http://www.epa.gov/quality/qa_docs.html

GeoEAS*
Geostatistical Environmental Assessment Software (GeoEAS) (USEPA 1991b) is a collection of interactive software tools for performing two-dimensional geostatistical analyses of spatially distributed data. Programs are provided for data file management, data transformations, univariate statistics, variogram analysis, cross-validation, kriging, contour mapping, post plots, and line/scatter plots. Users may alter parameters and re-calculate results or reproduce graphs, providing a "what-if" analysis capability.
GeoEAS Version 1.2.1 (April 1989) software and documentation are available from EPA's Web site at http://www.epa.gov/ada/csmos/models/geoeas.html

* Also available on EPA's CD-ROM Site Characterization Library Volume 1 (Release 2) (USEPA 1998c)
Sampling Design Software (Continued)

ELIPGRID-PC
ELIPGRID-PC is a program for the design and analysis of sampling grids for locating elliptical targets (e.g., contamination "hot spots"). It computes the probability of success in locating targets based on the assumed size, shape, and orientation of the targets, as well as the specified grid spacing. It also can be used to compute a grid spacing from a specified success probability, compute cost information associated with specified sampling grids, determine the size of the smallest "hot spot" detected given a particular grid, and create graphs of the results.
Information, software, and a user's guide are available on the World Wide Web at: http://dqo.pnl.gov/software/elipgrid.htm. The site is operated for the U.S. Department of Energy Office of Environmental Management by the Pacific Northwest National Laboratory.

DQO-PRO
This software comprises a series of programs with a user interface like a common calculator, accessed using Microsoft Windows. DQO-PRO provides answers for three objectives:

1. Determining the rate at which an event occurs
2. Determining an estimate of an average within a tolerable error
3. Determining the sampling grid necessary to detect "hot spots."

DQO-PRO facilitates understanding the significance of DQOs by showing the relationships between numbers of samples and DQO parameters, such as (1) confidence levels versus numbers of false positive or negative conclusions; (2) tolerable error versus analyte concentration, standard deviation, etc.; and (3) confidence levels versus sampling area grid size. The user has only to type in his or her requirements, and the calculator instantly provides the answers.
Contact: Information and software are available on the Internet at the American Chemical Society, Division of Environmental Chemistry Web site at http://www.acs-envchem.duq.edu/dqopro.htm

Visual Sample Plan (VSP)
VSP provides statistical solutions for optimizing the sampling design. The software can answer two important questions in sample planning: (1) How many samples are needed? VSP can quickly calculate the number of samples needed for various scenarios at different costs. (2) Where should the samples be taken? Sample placement based on personal judgment is prone to bias. VSP provides random or gridded sampling locations overlaid on the site map.
Information and software are available at http://dqo.pnl.gov/VSP/Index.htm. VSP was developed in part by the Department of Energy's (DOE's) National Analytical Management Program (NAMP) and through a joint effort between Pacific Northwest National Laboratory (PNNL) and Advanced Infrastructure Management Technologies (AIMTech).
Data Quality Assessment Software

DataQUEST
This software tool is designed to provide a quick and easy way for managers and analysts to perform baseline Data Quality Assessment. The goal of the system is to allow those not familiar with standard statistical packages to review data and verify assumptions that are important in implementing the DQA Process. This software supports the Guidance for Data Quality Assessment, EPA QA/G-9 (USEPA 2000d), which demonstrates the use of the DQA Process in evaluating environmental data sets.
Download from EPA's World Wide Web site at http://www.epa.gov/quality/qa_docs.html

ASSESS 1.01a*
This software tool was designed to calculate variances for quality assessment samples in a measurement process. The software performs the following functions: (1) transforming the entire data set, (2) producing scatter plots of the data, (3) displaying error bar graphs that demonstrate the variance, and (4) generating reports of the results and header information.
Available on EPA's CD-ROM Site Characterization Library Volume 1 (Release 2) (USEPA 1998c)

MTCAStat
This software package is published by the Washington Department of Ecology and can be used to calculate sample sizes (for both normal and lognormal distributions), basic statistical quantities, and confidence intervals. Requires MS Excel 97.
The USEPA Office of Solid Waste has not evaluated this software for use in connection with RCRA programs; however, users of this guidance may wish to review the software for possible application to some of the concepts described in this document.
Available from Washington Department of Ecology's "Site Cleanup, Sediments, and Underground Storage Tanks" World Wide Web site at http://www.ecy.wa.gov/programs/tcp/tools/toolmain.html

* Also available on EPA's CD-ROM Site Characterization Library Volume 1 (Release 2) (USEPA 1998c)
APPENDIX I

EXAMPLES OF PLANNING, IMPLEMENTATION, AND ASSESSMENT FOR RCRA WASTE SAMPLING

This appendix presents the following two hypothetical examples of planning, implementation, and assessment for RCRA waste sampling:

• Example 1: Sampling soil in a RCRA Solid Waste Management Unit (SWMU) to confirm attainment of the cleanup standard (using the mean to measure compliance with a standard)

• Example 2: Sampling of a process waste to make a hazardous waste determination (using a maximum or upper percentile to measure compliance with a standard).
Example 1: Sampling Soil at a RCRA SWMU to Confirm Attainment of a Cleanup Standard

Introduction

In this example, the owner of a permitted TSDF completed removal of contaminated soil at a SWMU, as required under the facility's RCRA permit under EPA's RCRA Corrective Action Program. The permit required the facility owner to conduct sampling and analysis to determine if the remaining soil attains the facility-specific risk-based standard specified in the permit. This hypothetical example describes how the planning, implementation, and assessment activities were conducted.
Planning Phase

The planning phase included implementation of EPA's systematic planning process, known as the Data Quality Objectives (DQO) Process, and preparation of a quality assurance project plan (QAPP). A DQO planning team was assembled, and the DQO Process was implemented following EPA's guidance in Guidance for the Data Quality Objectives Process for Hazardous Waste Site Operations, EPA QA/G-4HW (USEPA 2000a); Guidance for the Data Quality Objectives Process, EPA QA/G-4 (USEPA 2000b); and Chapter Nine of SW-846.

The outputs of the seven steps of the DQO Process are outlined below.
DQO Step 1: Stating the Problem

• The DQO planning team included the facility owner, a technical project manager, a chemist, an environmental technician (sampler), and a facility engineer familiar with statistical methods. As part of the DQO Process, the team consulted with their state regulator to determine if the State has any additional regulations or guidance that applies. A state guidance document provided recommendations for the parameter of interest and the acceptable Type I decision error rate.

• A concise description of the problem was developed as follows: The facility conducted a soil removal action at the SWMU. Soil with concentrations greater than the risk-based cleanup standard of 10 mg/kg of pentachlorophenol (PCP) was excavated for off-site disposal. Removal was guided by the results of grab samples analyzed for PCP using a semi-quantitative field analytical method.

• The conceptual site model (CSM) assumed that the PCP migrated downward into the soil, and that if a soil layer were found to be "clean," then the underlying soil layer also would be assumed "clean."

• The technical staff were given six weeks to complete the study and submit a draft report to the regulatory agency.
DQO Step 2: Identifying Possible Decisions

• Decision statement: The study objective was to determine if the soil remaining in the SWMU after removal of the contaminated soil attained the cleanup standard. If the standard is attained, then the area will be backfilled with clean fill and reserved for future industrial development. If the standard is not attained, then the next layer of soil within the SWMU will be removed.
DQO Step 3: Identifying Inputs to the Decision

• The sample analysis results for total PCP (in mg/kg) in soil were used to decide whether or not the soil attained the cleanup standard. PCP was designated as the only constituent of concern, and its distribution within the SWMU was assumed to be random. The risk-based cleanup level for PCP in soil was set at 10 mg/kg.

• The decision was based on the concentrations in the top six-inch layer of soil across the entire SWMU. The study was designed to determine whether the entire unit attains the standard, or does not.

• The chemist identified two candidate analytical methods for measuring PCP concentrations in soil: (1) SW-846 Method 4010A, "Screening For Pentachlorophenol By Immunoassay" ($20/analysis), and (2) SW-846 Method 8270 (and prep method 3550) ($110/analysis). The project chemist confirmed that both methods were capable of achieving a quantitation limit well below the action level of 10 mg/kg. During Step 7 of the DQO Process, the chemist revisited this step to select a final method and prepare method performance criteria as part of the overall specification of decision performance criteria.

• The planning team identified the need to specify the size, shape, and orientation of each sample to satisfy the acceptable sampling error (specified in DQO Process Step 7) and to enable selection of the appropriate sampling device (during development of the QAPP). Because the soil exists in a relatively flat, stationary, three-dimensional unit, it was considered a series of overlapping two-dimensional surfaces for the purposes of sampling. The correct orientation, size, and shape of each sample was a vertical core capturing the full six-inch thickness of the soil unit. The minimum mass of each primary field sample was determined during DQO Process Step 7 using the particle size-weight relationship required to control fundamental error at an acceptable level.
DQO Step 4: Defining Boundaries

• The dimensions of the SWMU were approximately 125 feet by 80 feet (10,000 square feet). The SWMU was relatively flat. The depth of interest was limited to the top six inches of soil in the unit after removal of the contaminated soil. The spatial boundary of the SWMU was defined by the obvious excavation and by wooden stakes at the corners of the excavation.

• The soil within the study boundary was loamy sand with a maximum particle size of about 1.5 mm (0.15 cm).

• The project team planned to collect samples within a reasonable time frame, and degradation or transformation of the PCP over the investigation period was not a concern.
DQO Step 5: Developing Decision Rules

• The population parameter of interest was the mean. The mean was selected as the parameter of interest because the risk-based cleanup standard (Action Level) was derived based upon long-term average health effects predicted from exposures to the contaminated soil.

• The risk-based action level was 10 mg/kg total pentachlorophenol (PCP) in soil.

• The decision rule was then established as follows: "If the mean concentration for PCP in the soil is less than 10 mg/kg, then the cleanup standard is attained. Otherwise, the SWMU will be considered contaminated and additional remedial action will be required."
DQO Step 6: Specifying Limits on Decision Errors

• The major sources of variability (measured as the relative variance) were identified as within-sample unit variability (s_w²) (including analytical imprecision and Gy's fundamental error) and between-sample unit variability (s_b²) (or population variability). The total study variance (s_T²), expressed as the relative variance, was estimated using the following relationship:

    s_T² = s_b² + s_w² = s_b² + s_s² + s_a²

where s_b² = between-unit variance (population variance), s_s² = sample collection imprecision (estimated by Gy's fundamental error, s_FE²), and s_a² = analytical imprecision (determined from the measurement of laboratory control samples with concentrations near the Action Level).
• Sample analysis results for eight samples of soil excavated from the previous lift gave a standard deviation and mean of s = 7.1 and x̄ = 10.9, respectively. The total study relative standard deviation (s_T) was then estimated as 0.65.

• The relative standard deviation (RSD) of the sampling error (s_s) was estimated as 0.10 (as estimated by Gy's fundamental error), based on a maximum observed particle size of approximately 1.5 mm (0.15 cm) and a sample mass of 10 grams.

• The RSD for the analytical imprecision (s_a) associated with the field screening method (SW-846 Method 4010A, "Screening For Pentachlorophenol By Immunoassay") was estimated from replicate measurements as 0.40.

• The between-unit (population) relative standard deviation (s_b) was then estimated as:

    s_b = √(s_T² − (s_s² + s_a²)) = √((0.65)² − ((0.10)² + (0.40)²)) = 0.50
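The arithmetic behind these estimates can be checked directly; a short sketch using the values from the bullets above:

```python
import math

# Relative standard deviations estimated during DQO Step 6
s_T = 7.1 / 10.9   # total study RSD from the eight lift samples (~0.65)
s_s = 0.10         # sampling imprecision (Gy's fundamental error)
s_a = 0.40         # analytical imprecision (Method 4010A replicates)

# Back-calculate the between-unit (population) RSD
s_b = math.sqrt(s_T ** 2 - (s_s ** 2 + s_a ** 2))   # ~0.50
```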
• Two potential decision errors could be made based on interpreting sampling and analytical data:

  Decision Error A: Concluding that the mean PCP concentration within the SWMU was less than 10 mg/kg when it was truly greater than 10 mg/kg, or

  Decision Error B: Concluding that the mean PCP concentration within the SWMU was greater than 10 mg/kg when it was truly less than 10 mg/kg.

The consequences of Decision Error A, incorrectly deciding the SWMU was "clean" (mean PCP concentration less than 10 mg/kg), would leave contaminated soil undetected and would likely increase health risks for onsite workers and pose potential future legal problems for the owner.

The consequences of Decision Error B, incorrectly deciding the SWMU was "not clean" (mean PCP concentration greater than or equal to 10 mg/kg), would cause the needless expenditure of resources (e.g., funding, time, backhoe and operator, soil disposal, sampling crew labor, and analytical capacity) for unnecessary further remedial action.
Decision Error A, incorrectly deciding that the mean PCP concentration is less than the action level of 10 mg/kg, posed more severe consequences for human health, plus liability and compliance concerns. Consequently, the baseline condition chosen for the SWMU was that the mean PCP concentration within the SWMU is truly greater than or equal to the action level of 10 mg/kg.
Table I-1. Null Hypothesis and Possible Decision Errors for Example 1

"Null Hypothesis" (baseline condition): The true mean concentration of PCP in the SWMU is greater than or equal to the risk-based cleanup standard (i.e., the SWMU is contaminated).

Possible Decision Errors:
  Type I Error (α), False Rejection: Concluding the site is "clean" when, in fact, it is contaminated.
  Type II Error (β), False Acceptance: Concluding the site is still contaminated when, in fact, it is "clean."
• Next, it was necessary to specify the boundaries of the gray region. The gray region defines a range that is less than the action limit, but too close to the Action Level to be considered "clean," given uncertainty in the data. When the null hypothesis (baseline condition) assumes that the site is contaminated (as in this example), the upper limit of the gray region is bounded by the Action Level; the lower limit is determined by the decision maker. The project team set the lower bound of the gray region at 7.5 mg/kg, with the understanding that this bound could be modified after review of the outputs of Step 7 of the DQO Process.
• The planning team set the acceptable probability of making a Type I (false rejection) error at 5 percent (α = 0.05), based on guidance provided by the State regulatory agency. In other words, the team was willing to accept a 5 percent chance of concluding the SWMU was clean if, in fact, it was not. While a Type II (false acceptance) error could prove to be costly to the company, environmental protection and permit compliance were judged to be most important. The planning team decided to set the Type II error rate at 20 percent.

• The information collected in Step 6 of the DQO Process is summarized below.
Table I-2. Initial Outputs of Step 6 of the DQO Process

Needed Parameter                        Output
Action Level (AL)                       10 mg/kg
Gray Region                             7.5 - 10 mg/kg (width of gray region, Δ = 2.5)
Relative Width of Gray Region           (10 - 7.5)/7.5 = 0.33
Null Hypothesis (H0)                    Mean (PCP) ≥ 10 mg/kg
False Rejection Decision Error Limit    α = 0.05
  (probability of a Type I error)
False Acceptance Decision Error Limit   β = 0.20
  (probability of a Type II error)
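These Step 6 outputs feed the sample-size calculation performed in Step 7. As an illustration only, the normal-theory approximation from EPA QA/G-4, n ≈ (z₁₋α + z₁₋β)²σ²/Δ² + 0.5·z₁₋α², can be applied to them; the σ used below, 6.5 mg/kg (the total RSD of 0.65 times the 10 mg/kg Action Level), is an assumption made for this sketch, and the function name is illustrative:

```python
import math
from statistics import NormalDist

def n_for_mean_vs_action_level(sigma, delta, alpha, beta):
    # Normal-theory sample size for testing a mean against an action level
    # sigma: estimated total standard deviation; delta: gray-region width
    z1a = NormalDist().inv_cdf(1.0 - alpha)   # z quantile for 1 - alpha
    z1b = NormalDist().inv_cdf(1.0 - beta)    # z quantile for 1 - beta
    return math.ceil((z1a + z1b) ** 2 * sigma ** 2 / delta ** 2
                     + 0.5 * z1a ** 2)
```

With Δ = 2.5 mg/kg, α = 0.05, and β = 0.20, this gives n_for_mean_vs_action_level(6.5, 2.5, 0.05, 0.20) = 44.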
DQO Step 7: Optimizing the Data Collection Design

1. Review outputs from the first six steps of the DQO Process. The project team reviewed the outputs of the first six steps of the DQO Process. They expected the PCP concentration to be near the cleanup standard (Action Level); thus, it was decided that a probabilistic sampling design would be used so that the results could be stated with a known probability of making a decision error.
2.
Consider
various
data
collection
designs.
The
objective
of
this
step
was
to
find
cost­
effective
design
alternatives
that
balance
the
number
of
samples
and
the
measurement
performance,
given
the
feasible
choices
for
sampling
designs
and
measurement
methods.
Based
on
characterization
data
from
the
excavated
soil,
the
planning
team
assumed
that
the
between­
sample
unit
variability
or
population
variability
would
remain
relatively
stable
at
approximately s_b = 0.50, independent of the sampling and analytical methods used.
The
planning
team
investigated
various
combinations
of
sampling
and
analytical
methods
(with
varying
associated
levels
of
precision
and
cost)
as a means to find
the
optimal
study
design.

The
planning
team
considered
three
probabilistic
sampling
designs:
simple
random,
stratified
random,
and
systematic
(grid­
based)
designs.
A
composite
sampling
strategy
also
was
considered.
All
designs
allowed
for
an
estimate
of
the
mean
to
be
made.
Because
the
existence
of
strata
was
not
expected
(although
could
be
discovered
during
the
investigation),
the
stratified
design
was
eliminated
from
consideration.
A
simple
random
design
is
the
simplest
of
the
probabilistic
sampling
methods,
but
it
may
not
provide
very
even
coverage
of
the
SWMU;
thus,
if
spatial
variability
becomes
a
concern,
then
it
may
go
undetected
with
a
simple
random
design.
The
systematic
design
provides
more
even
coverage
of
the
SWMU
and
typically
is
easy
to
implement.

The
practical
considerations
were
considered
for
each
alternative
design,
including
site
access
and
conditions,
equipment
selection/
use,
experience
needed,
special
analytical
needs,
health
and
safety
requirements,
and
scheduling.
There
were
no
significant
practical
constraints
that
would
limit
the
use
of
either
the
systematic
or
the
simple
random
sampling
designs;
however,
the
systematic
design
was
preferred
because
it
provides
sampling
locations
that
are
easier
to
survey
and
locate
in
the
field,
and
it
provides
better
spatial
coverage.
Ultimately,
two
sampling
designs
were
evaluated:
a
systematic
sampling
design
and
a
systematic
sampling
design
that
incorporates
composite
sampling.

The
acceptable
mass
of
each
primary
field
sample
was
determined
using
the
particle
size­
weight
relationship
required
to
control
fundamental
error.
The
soil
in
the
SWMU
is
a
granular
solid,
and
the
95th percentile
particle
size
(d)
was
estimated
at
1.5
mm
(0.15
cm).
To
maintain
the
relative
standard
deviation
of
the
fundamental
error
at
0.10,
a
sample
mass
of
at
least
8.2
grams
was
required
(using
Equation
D.
4
in
Appendix
D).
To
maintain
the
relative
standard
deviation
of
the
fundamental
error
at
0.05,
a
sample
mass
of
at
least
30
grams
would
be
required.
There
were
no
practical
constraints
on
obtaining
samples
of
these
sizes.
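As a rough check on these numbers, the particle size-weight relationship can be sketched with Gy's simplified fundamental-error formula, in which mass scales with the cube of the 95th-percentile particle diameter and the inverse square of the target error. The lumped constant of 22.5 g/cm³ below is an assumed textbook default, not the exact Equation D.4 factor, so it reproduces the cited 8.2 g and 30 g values only approximately.

```python
def min_sample_mass(d_cm, fe_rsd, c_lumped=22.5):
    """Minimum field sample mass in grams to hold the relative standard
    deviation of the fundamental error at or below fe_rsd.

    d_cm is the 95th-percentile particle diameter in cm. c_lumped
    (g/cm^3) is an assumed lumped sampling constant, not the exact
    factor used in Equation D.4 of Appendix D.
    """
    return c_lumped * d_cm ** 3 / fe_rsd ** 2

m_10 = min_sample_mass(0.15, 0.10)  # roughly 8 g (guidance cites 8.2 g)
m_05 = min_sample_mass(0.15, 0.05)  # four times larger (guidance cites 30 g)
```

Because the mass depends on the inverse square of the target error, halving the fundamental-error goal quadruples the required sample mass, whatever the exact constant.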

Next,
it
was
necessary
to
estimate
unit
costs
for
sampling
and
analysis.
Based
on
prior
experience,
the
project
team
estimated
the
cost
of
collecting
a
grab
sample
at
$40, plus an additional $30 per sample for documentation and processing of field screening samples, and $60 per sample for documentation, processing, and shipment of samples sent for fixed laboratory analysis.

3.
Select
the
optimal
number
of
samples.
Using
the
initial
outputs
of
Step
6,
the
appropriate
number
of
samples
was
calculated
for
each
sampling
design:

For
the
systematic
sampling
design
(without
compositing),
the
following
formula
was
used
(Equation
8
from
Section
5.4.1):

n = (z_(1-α) + z_(1-β))² s_T² / Δ² + (1/2) z_(1-α)²

where

z_(1-α) = the (1-α)th quantile of the standard normal distribution (from the last row of Table G-1, Appendix G), where α is the probability of making a Type I error (the significance level of the test) set in DQO Step 6.

z_(1-β) = the (1-β)th quantile of the standard normal distribution (from the last row of Table G-1, Appendix G), where β is the probability of making a Type II error set in DQO Step 6.

s_T = an estimate of the total study relative standard deviation.

Δ = the width of the gray region from DQO Step 6 (expressed as the relative error in this example).
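Equation 8 can be evaluated directly as a check. The z quantiles below (1.645 for 1-α = 0.95 and 0.842 for 1-β = 0.80) are rounded values of the kind tabulated in the last row of Table G-1; s_T and the relative gray-region width come from the outputs above.

```python
import math

def samples_needed(s_t, delta, z_a=1.645, z_b=0.842):
    """Equation 8 (Section 5.4.1):
    n = (z_a + z_b)^2 * s_t^2 / delta^2 + 0.5 * z_a^2,
    rounded up to the next whole sample. s_t and delta are expressed
    as relative values here, matching the example."""
    n = (z_a + z_b) ** 2 * s_t ** 2 / delta ** 2 + 0.5 * z_a ** 2
    return math.ceil(n)

# Relative gray-region width = (10 - 7.5)/7.5
n_fixed = samples_needed(0.52, 2.5 / 7.5)  # systematic, fixed-lab: 17
n_field = samples_needed(0.65, 2.5 / 7.5)  # systematic, field: 25
```

These counts match the "Number of Samples (n)" row of Table I-4 for the two non-composite designs.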
[EPA's
DEFT
software
could
be
used
to
calculate
the
appropriate
number
of
samples
(see
Data
Quality
Objectives
Decision
Error
Feasibility
Trials
Software
(DEFT)
­
User's
Guide,
USEPA
2001h).
Note,
however,
that
the
DEFT
program
asks
for
the
bounds
of
the
gray
region
specified
in
absolute
units.
If
the
planning
team
uses
the
relative
standard
deviation
(or
coefficient
of
variation)
in
the
sample
size
equation
rather
than
the
absolute
standard
deviation,
then
the
bounds
of
the
gray
region
also
must
be
input
into
DEFT
as
relative
values.
Thus,
the
Action
Level
would
be
set
equal
to
1,
and
the
other
bound
of
the
gray
region
would
be
set
equal
to
1
­
(relative
width
of
gray
region)
or
1
+
(relative
width
of
gray
region)
depending on what
baseline
condition
is
selected.]

Note
that
if
there
were
more
than
one
constituent
of
concern,
then
the
appropriate
number
of
samples
would
need
to
be
calculated
for
each
constituent
using
preliminary
estimates
of
their
standard
deviations.
The
number
of
samples
would
then
be
determined
by
the
highest
number
of
samples
obtained
for
any
single
constituent
of
concern.

The
sample
size
for
systematic
composite
sampling
also
was
evaluated.
In
comparison
to
non­
composite
sampling,
composite
sampling
can
have
the
effect
of
minimizing
between­
sample
variation,
thereby
reducing
somewhat
the
total
number
of
composite
samples
that
must
be
submitted
for
analysis.
In
addition,
composite
samples
are
expected
to
generate
normally
distributed
data
thereby
allowing
the
team
to
apply
normal
theory
statistical
methods.
To
estimate
the
sample
size,
the
planning
team
again
required
an
estimate
of
the
standard
deviation.
However,
since
the
original
estimate
of
the
standard
deviation
was
based
on
available
individual
or
"grab"
sample
data
rather
than
composite
samples,
it
was
necessary
to
adjust
the
variance
term
in
the
sample
size
equation
for
the
appropriate
number
of
composite
samples.
In the sample size equation, the between-unit (population) component of variance (s_b²) was replaced with s_b²/g, where g is the number of individual or "grab" samples used to form each composite. Sample sizes were then calculated assuming g = 4.
Table
I­
3
and
Table
I­
4
summarize
the
inputs
and
outputs
of
Step
7
of
the
DQO
Process
and
provide
the
estimated
costs
for
the
various
sampling
and
analysis
designs
evaluated.
Table I-3. Summary of Inputs for Candidate Sampling Designs

Designs: (A) Systematic Sampling - Fixed Lab Analyses; (B) Systematic Sampling - Field Analyses; (C) Systematic Composite Sampling - Fixed Lab Analyses; (D) Systematic Composite Sampling - Field Analyses.

Parameter                                     (A)        (B)        (C)        (D)
Inputs
Sampling Costs
  Collection cost (per "grab")                $40 ea.    $40 ea.    $40 ea.    $40 ea.
  Documentation, processing, shipment         $60 ea.    $30 ea.    $60 ea.    $30 ea.
Analytical Costs
  SW-846 Method 3550/8270 (fixed lab)         $110 ea.   $110 ea.*  $110 ea.   $110 ea.*
  SW-846 Method 4010A (field screening)       NA         $20 ea.    NA         $20 ea.
Relative Width of Gray Region (Δ)             0.33       0.33       0.33       0.33
Null Hypothesis (Ho)                          Mean (PCP) ≥ 10 mg/kg (all designs)
False Rejection Decision Error Limit          α = 0.05   α = 0.05   α = 0.05   α = 0.05
False Acceptance Decision Error Limit         β = 0.20   β = 0.20   β = 0.20   β = 0.20
Relative Std. Dev.
  Sampling (s_s)                              0.10       0.10       0.10       0.10
  Analytical (s_a), SW-846 Method 8270        0.10       NA         0.10       NA
  Analytical (s_a), SW-846 Method 4010A       NA         0.40       NA         0.40
  "Population" (s_b)                          0.50       0.50       0.50       0.50
Total Study s_T = sqrt(s_s² + s_a² + s_b²)    0.52       0.65       0.29**     0.48**

NA: Not applicable
* Assumes 20 percent of all field analyses must be confirmed via a fixed laboratory method.
** For composite sampling, the total study relative standard deviation (s_T) was estimated by replacing s_b² with s_b²/g, where g = the number of "grabs" per composite.
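The total-study values in the last row of Table I-3, including the footnoted variance adjustment for composites, can be checked numerically by combining the component relative standard deviations:

```python
import math

def total_rsd(ss, sa, sb, g=1):
    """Total study relative standard deviation,
    s_T = sqrt(ss^2 + sa^2 + sb^2), with the population term sb^2
    replaced by sb^2/g for composites formed from g grabs."""
    return math.sqrt(ss ** 2 + sa ** 2 + sb ** 2 / g)

s_t = {
    "systematic, fixed lab": total_rsd(0.10, 0.10, 0.50),       # 0.52
    "systematic, field":     total_rsd(0.10, 0.40, 0.50),       # 0.65
    "composite, fixed lab":  total_rsd(0.10, 0.10, 0.50, g=4),  # 0.29
    "composite, field":      total_rsd(0.10, 0.40, 0.50, g=4),  # 0.48
}
```

Dividing the dominant population term by g = 4 is what drives the composite designs' smaller total variability, and hence their smaller sample sizes in Table I-4.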
Table I-4. Summary of Outputs for Candidate Sampling Designs

Designs: (A) Systematic Sampling - Fixed Lab Analyses; (B) Systematic Sampling - Field Analyses; (C) Systematic Composite Sampling - Fixed Lab Analyses; (D) Systematic Composite Sampling - Field Analyses.

Parameter                                   (A)         (B)             (C)           (D)
Outputs
Number of Samples (n)                       17          25              6             15
Cost Estimate
  "Grab" sampling                           $40 x 17    $40 x 25        $40 x 4 x 6   $40 x 4 x 15
                                                                        (note 1)      (note 1)
  Documentation, processing, and shipment   $60 x 17    ($30 x 25) +    $60 x 6       ($30 x 15) +
                                                        ($60 x 5)                     ($60 x 3)
                                                        (note 2)                      (note 2)
  SW-846 Method 3550/8270 (fixed lab)       $110 x 17   $110 x 5        $110 x 6      $110 x 3
                                                        (note 2)                      (note 2)
  SW-846 Method 4010A (field screening)     NA          $20 x 25        NA            $20 x 15
Total Cost                                  $3,570      $3,100          $1,980        $3,660

1. The calculation assumes four grabs per composite sample.
2. The calculation includes costs for shipment and analysis of 20% of field screening samples for fixed laboratory analysis.
NA: Not applicable
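The cost roll-ups can be reproduced from the Table I-3 unit costs; the confirmation counts (5 of 25 and 3 of 15 field-screening samples) follow note 2's 20% rule.

```python
# Total estimated cost for each candidate design (Table I-4)
costs = {
    # 17 grabs: collection + documentation/shipment + fixed-lab analysis
    "systematic, fixed lab": 40 * 17 + 60 * 17 + 110 * 17,
    # 25 grabs screened in the field; 5 (20%) also shipped for
    # fixed-lab confirmation
    "systematic, field": 40 * 25 + (30 * 25 + 60 * 5) + 110 * 5 + 20 * 25,
    # 6 composites of 4 grabs each, all sent to the fixed lab
    "composite, fixed lab": 40 * 4 * 6 + 60 * 6 + 110 * 6,
    # 15 composites of 4 grabs each, field-screened; 3 (20%) confirmed
    "composite, field": 40 * 4 * 15 + (30 * 15 + 60 * 3) + 110 * 3 + 20 * 15,
}
```

The composite design with fixed-lab analysis is cheapest because compositing cuts the number of laboratory analyses from 17 to 6 while the extra grabs cost only $40 each.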
4.
Select
a
resource­
effective
design.
It
was
determined
that
all
of
the
systematic
designs
and
systematic
composite
sampling
designs
would
meet
the
statistical
performance
requirements
for
the
study
in
estimating
the
mean
PCP
concentration
in
the
SWMU.
The
project
team
selected
the
systematic
composite
sampling
design
­
with
fixed
laboratory
analysis
­
based
on
the
cost
savings
projected
over
the
other
sampling
designs.

The
planning
team
decided
that
one
additional
field
quality
control
sample
(an
equipment
rinsate
blank),
analyzed
by
SW-846 Method 8270,
was
required
to
demonstrate
whether
the
sampling
equipment
was
free
of
contamination.

The
outputs
of
the
DQO
Process
were
summarized
in
a
memo
report
which
was
then
used to help prepare
the
QAPP.

5.
Prepare
a
QAPP.
The
operational
details
of
the
sampling
and
analytical
activities
were
documented
in
the
QAPP
using
EPA
Guidance
for
Quality
Assurance
Project
Plans,
EPA
QA/
G­
5
(USEPA
1998a)
and
Chapter
One
of
SW846
for
guidance.
Implementation
Phase
The
QAPP
was
implemented
in
accordance
with
the
schedule,
sampling
plan,
and
safety
plan.
The
exact
location
of
each
field
sample
was
established
using
a
grid
on
a
map
of
the
SWMU.
The
start
point
for
constructing
the
grid
was
selected
at
random.

The
QAPP
established
the
following
DQOs
and
performance
goals
for
the
sampling
equipment:

°
The
correct
orientation
and
shape
of
each
sample
is
a
vertical
core.

°
Each
sample
must
capture
the
full
depth
of
interest
(six
inches).

°
The
minimum
mass
of
each
sample
is
10
g.

°
The
device
must
be
constructed
of
materials
that
will
not
alter
analyte
concentrations
due
to
loss
or
gain
of
analytes
via
sorption,
desorption,
degradation,
or
corrosion.

°
The
device
must
be
easy
to
use,
safe,
and
low
cost.

A
sampling
device
was selected using the four steps
described
in
Figure
28
in
Section
7.1.

Step
1
­
Identify
the
Medium
to
be
Sampled
The
material
to
be
sampled
is
a
soil.
Using
Table
8
in
Section
7.1,
we
find
the
media
descriptor
that
most
closely
matches
the
waste
in
the
first
column
of
the
table:
"Soil
and
other
unconsolidated
geologic
material."

Step
2
­
Select
the
Sample
Location
The
second
column
of
Table
8
in
Section
7.1
provides
a
list
of
possible
sampling
sites
(or
unit types)
for
soil
(i.
e.,
surface
or
subsurface).
In
this
example,
the
sampling
location
is
surface
soil
and
"Surface"
is
found
in
the
second
column
in
the
table.

Step
3
­
Identify
Candidate
Sampling
Devices
The
third
column
of
Table
8
in
Section
7.1
provides
a
list
of
candidate
sampling
devices.
For
the
waste
stream
in
this
example,
the
list
includes
bucket
auger,
concentric
tube
thief,
coring
type
sampler,
miniature
core
sampler,
modified
syringe,
penetrating
probe
sampler,
sampling
scoop/
trowel/
shovel,
thin­
walled
tube,
and
trier.

Step
4
­
Select
Devices
Sampling
devices
were
selected
from
the
list
of
candidate
sampling
devices
after
review
of
Table
9
in
Section
7.1.
Selection
of
the
equipment
was
made
after
consideration
of
the
DQOs
for
the
sample
support
(i.
e.,
required
volume,
depth,
shape,
and
orientation),
the
performance
goals
established
for
the
sampling
device,
ease
of
use
and
decontamination,
worker
safety
issues,
cost,
and
any
practical
considerations.
Table
I­
5
demonstrates
how
the
DQOs
and
performance
goals
can
be
used
together
to
narrow
the
candidate
devices
down
to
just
one
or
two.

Table I-5. Using DQOs and Performance Goals to Select a Final Sampling Device

Data Quality Objectives and Performance Goals:
  (1) Required depth: 6 inches
  (2) Orientation and shape: vertical undisturbed core
  (3) Sample volume: >10 g
  (4) Operational considerations: device is portable, safe, and low cost?
  (5) Desired material of construction: stainless or carbon steel

Candidate Devices              (1)   (2)   (3)   (4)   (5)
Bucket auger                   Y     N     Y     Y     Y
Concentric tube thief          Y     N     Y     Y     Y
Coring type sampler            Y     N     Y     Y     Y
Miniature core sampler         Y     Y     N     Y     N
Modified syringe sampling      N     N     N     Y     N
Penetrating probe sampler      Y     Y     Y     Y     Y
Scoop, trowel, or shovel       Y     N     Y     Y     Y
Thin-walled tube               Y     Y     Y     Y     Y
Trier                          Y     N     Y     Y     Y

Key: Y = The device is capable of achieving the specified DQO or performance goal.
     N = The device is not capable of achieving the DQO or performance goal.

The
"penetrating
probe
sampler"
and
the
"thin­
walled
tube"
were
identified
as
the
preferred
devices
because
they
could
satisfy
all
of
the
DQOs
and
performance
goals
for
the
sampling
devices.
The
penetrating
probe
was
selected
because
it
was
easy
to
use
and
was
readily
available
to
the
field
sampling
crew.

A
penetrating
probe
sampler
was
then
used
to
take
the
field
samples
at
each
location
on
the
systematic
square
grid
(see
Figure
I­
1).
Each
composite
sample
was
formed
by
pooling
and
mixing
individual
samples
collected
from
within
each
of
four
quadrants.
The
process
was
repeated
until
six
composite
samples
were
obtained.
Because
the
total
mass
of
each
individual
(grab)
sample
used
to
form
composite
samples
exceeded
that
required
by
the
laboratory
for
analysis,
a
field
subsampling
routine
was
used
to
reduce
the
volume
of
material
submitted
to
the
laboratory.

The
field
samples
and
associated
field
QC
samples
were
submitted
to
the
laboratory
where
a
subsample
was
taken
from
each
field
sample
for
analysis.
The
samples
were
analyzed
in
accordance
with
the
QAPP.
[Figure I-1 graphic: plan view of the SWMU boundary (80 ft by 125 ft, not to scale) overlaid with a systematic square grid of n = 24 sampling points labeled 1 through 6; points sharing a label are the four "grab" samples pooled into one composite. Grid spacing L = sqrt(A/n) = sqrt(10,000 ft² / 24) ≈ 20.4 ft. Field Sample No. 6 is shown as a mixture of four "grab" samples reduced by field subsampling.]

Figure I-1. Systematic sampling with compositing.
The
distance
between
sampling
points
(L)
is
determined
using
the
approach
described
in
Section
5.2.3
(Box
5).
Samples
with
the
same
number
are
pooled
and
mixed
to
form
each
composite
sample.
A
field
sample
is
formed
from
each
composite
using
one
of
the
subsampling
methods
described
in
Section
7.3.2
(e.
g.,
by
fractional
shoveling).

Assessment
Phase
Data
Verification
and
Validation
Sampling
and
analytical
records
were
reviewed
to
check
compliance
with
the
QAPP.
The
data
collected
during
the
study
met
the
measurement
objectives.
Sampling
and
analytical
error
were
minimized
through
the
use
of
a
statistical
sampling
design,
correct
field
sampling
and
subsampling
procedures,
and
adherence
to
the
requirements
of
the
analytical
methods.
The
soil
that
was
sampled
did
not
present
any
special
problems
concerning
access
to
sampling
locations,
equipment
usage,
particle­
size
distribution,
or
matrix
interferences.
A
quantitation
limit
of
0.5
mg/
kg
was
achieved.
The
analytical
package
was
verified
and
validated,
and
the
data
generated
were
judged
acceptable
for
their
intended
purpose.

Data
Quality
Assessment
(DQA)

DQA
was
performed
using
the
approach
outlined
in
Section
8.2:

1.
Review
DQOs
and
sampling
design.
The
DQO
planning
team
reviewed
the
original
objectives:
"If
the
mean
concentration
for
PCP
in
the
soil
is
less
than
10
mg/
kg,
then
the
cleanup
standard
is
attained.
Otherwise,
the
SWMU
will
be
considered
contaminated
and
additional
remedial
action
will
be
required."
STATISTICAL QUANTITIES

Number of Observations: 6
Minimum: 6.000                   Maximum: 10.500
Mean: 7.833                      Median: 7.750
Variance: 2.267                  Std Dev: 1.506
Range: 4.500                     IQR: 1.000
Coefficient of Variation: 0.192
Coefficient of Skewness: 0.783
Coefficient of Kurtosis: -0.087

Percentiles: 1st: 6.000; 5th: 6.000; 10th: 6.000; 25th: 7.000; 50th: 7.750 (median); 75th: 8.000; 90th: 10.500; 95th: 10.500; 99th: 10.500

Figure I-2. Statistical quantities using DataQUEST software
2.
Prepare
the
data
for
statistical
analysis.
The summary of the verified and validated data was received in hard-copy format, and an electronic database was created by manual data entry into spreadsheet software. The database was checked by a second person for accuracy.
The
results
for
the
data
collection
effort
are
listed
in
Table
I­
6.
A
data
file
was
created
in
a
format
suitable
for
import
into
EPA's
DataQUEST
software.

Table I-6. Soil Sample Analysis Results for PCP (mg/kg)

Sample Identification     Result (PCP, mg/kg)
1                         8.0
2                         8.0
3                         7.0
4                         6.0
5                         10.5
6                         7.5
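The summary quantities reported in Figure I-2 can be reproduced from these six results with the Python standard library:

```python
import statistics

# PCP results for the six composite samples (Table I-6)
pcp = [8.0, 8.0, 7.0, 6.0, 10.5, 7.5]

mean = statistics.mean(pcp)          # 7.833
median = statistics.median(pcp)      # 7.750
variance = statistics.variance(pcp)  # 2.267 (sample variance, n - 1)
stdev = statistics.stdev(pcp)        # 1.506
cv = stdev / mean                    # 0.192 (coefficient of variation)
```

Note that `statistics.variance` and `statistics.stdev` use the n - 1 denominator, matching the sample statistics reported by DataQUEST.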
3.
Conduct
preliminary
analysis
of
data
and
check
distributional
assumptions:
Using
EPA's
DataQUEST,
statistical
quantities
were
computed
as
shown
in
Figure
I­
2.

On
a
normal
probability
plot,
the
data
plot
as
a
straight
line,
indicating
approximate
normality
(see
Figure
I­
3).
[Figure I-3 graphic: normal probability plot of PCP (mg/kg) against probability; N = 6, Average = 7.833, StDev = 1.506.]
Figure
I­
3.
Normal
probability
plot
Shapiro-Wilk Test (DataQUEST)
Null Hypothesis: "Data are normally distributed"
Sample Value: 0.914
Tabled Value: 0.788
There is not enough evidence to reject the assumption of normality at the 5% significance level.
Figure
I­
4.
Results
of
the
Shapiro­
Wilk
test
using
EPA's
DataQUEST
software
The
data
also
were
checked
for
normality
by
the
Shapiro­
Wilk
test.
Using
the
DataQUEST
software,
the
Shapiro­
Wilk
test
was
performed
at
the
0.05 significance level.
The
Shapiro­
Wilk
test
did
not
reject
the
null
hypothesis
of
normality
(see
Figure
I­
4).
4.
Select
and
perform
the
statistical
test:
The
analysis
of
the
data
showed
there
were
no
"non­
detects"
and
a
normal
distribution
was
an
acceptable
model.
Using
the
guidance
in
Figure
38
(Section
8.2.4),
a
parametric
upper
confidence
limit
(UCL)
on
the
mean
was
selected
as
the
correct
statistic
to
compare
to
the
regulatory
level.
The
95%
UCL
on
the
mean
was
calculated
as
follows:

UCL_0.95 = x̄ + t_(0.95, n-1) × s/√n = 7.833 + 2.015 × (1.506/√6) = 9.1 mg/kg
The
tabulated
"t
value"
(2.015)
was
obtained
from
Table
G­
1
in
Appendix
G
and
based on a 95-percent one-tailed confidence interval with α = 0.05 and 5 degrees of freedom.
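The arithmetic behind the UCL can be checked directly; t = 2.015 is the tabulated value quoted above.

```python
import math

# One-sided 95% upper confidence limit on the mean:
# UCL = xbar + t * s / sqrt(n)
xbar, s, n = 7.833, 1.506, 6   # summary statistics from Figure I-2
t_95_5df = 2.015               # Table G-1, 5 degrees of freedom

ucl = xbar + t_95_5df * s / math.sqrt(n)  # about 9.07, reported as 9.1 mg/kg
```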

5.
Draw
conclusions
and
report
results:
The
95%
UCL
for
the
mean
of
the
sample
analysis
results
for
PCP,
9.1
mg/
kg,
was
less
than
the
specified
cleanup
level
of
10
mg/
kg.
Thus,
the
null
hypothesis
was
rejected,
and
the
owner
made
the
determination
that
the
soil
remaining
in
the
SWMU
attains
the
cleanup
standard
for
PCP
based
on
the
established
decision
rule.

A
summary
report
including
a
description
of
all
planning,
implementation,
and
assessment
activities
was
submitted
to
the
regulatory
agency
for
review.
Example
2:
Sampling
of
a
Process
Waste
to
Make
a
Hazardous
Waste
Determination
Introduction
An
aircraft
manufacturing
and
maintenance
facility
strips
paint
from
parts
before
remanufacturing
them.
The
facility
recently
switched
its
paint
stripping
process
from
a
solvent-based
system
to
use
of
an
abrasive
plastic
blasting
media
(PBM).
The
waste
solvent,
contaminated
with
stripped
paint,
had
to
be
managed
as
a
hazardous
waste.
The
facility
owner
changed
the
process
to
reduce
­
or
possibly
eliminate
­
the
generation
of
hazardous
waste
from
this
operation
and
thereby
reduce
environmental
risks
and
lower
waste
treatment
and
disposal
costs.

The
plant
operators
thought
the
spent
PBM
could
include
heavy
metals
such
as
chromium
and
cadmium
from
the
paint,
and
therefore
there
was
a
need
to
make
a
hazardous
waste
determination
in
order
to
comply
with
the
RCRA
regulations
at
40
CFR
Part
262.11.
The
facility
owner
determined
that
the
spent
PBM
is
a
solid
waste
under
RCRA
but
not
a
listed
hazardous
waste.
The
facility
owner
then
needed
to
determine
if
the
solid
waste
exhibits
any
of
the
characteristics
of
hazardous
waste:
ignitability
(§
261.21),
corrosivity
(§
261.22),
reactivity
(§
261.23),
or
toxicity
(§
261.24).
Using
process
and
materials
knowledge,
the
owner
determined
that
the
waste
blasting
media
would
not
exhibit
the
characteristics
of
ignitability,
corrosivity,
or
reactivity.
The
facility
owner
elected
to
conduct
waste
testing
to
determine
if
the
waste
blasting
media
exhibits
the
characteristic
of
toxicity.

This
hypothetical
example
describes
how
the
planning,
implementation,
and
assessment
activities
were
conducted.

Planning
Phase
The
planning
phase
comprises
the
Data
Quality
Objectives
(DQO)
Process
and
preparation
of
a
quality
assurance
project
plan
(QAPP)
including
a
sampling
and
analysis
plan.
A
DQO
planning
team
was
assembled
and
the
DQO
Process
was
implemented
following
EPA's
guidance
in
Guidance
for
the
Data
Quality
Objectives
Process
EPA
QA/
G­
4
(USEPA
2000b)
and
SW­
846.

The
outputs
of
the
seven
steps
of
the
DQO
Process
are
outlined
below.

DQO
Step
1:
Stating
the
Problem
°
The
DQO
planning
team
included
the
plant
manager,
a
technical
project
manager,
a
consulting
chemist,
and
the
paint
stripping
booth
operator
who
also
served
as
the
sampler.

°
The
conceptual
model
of
the
waste
generation
process
was
developed
as
follows:
The
de­
painting
operation
consists
of
a
walk­
in
blast
booth
with
a
reclamation
floor.
After
blasting,
the
plastic
blast
media,
mixed
with
paint
fines,
is
passed
through
a
reclamation
system;
the
reusable
media
is
separated
out
for
reloading
to
the
blast
unit,
while
the
spent
media
and
paint
waste
is
discharged
to
a
container.
°
A
concise
description
of
the
problem
was
developed
as
follows:
The
problem
was
described
as
determining
whether
the
new
waste
stream
(the
spent
plastic
blasting
media
and
waste
paint)
should
be
classified
as
a
hazardous
waste
that
requires
treatment
and
subsequent
disposal
in
a
RCRA
Subtitle
C
landfill
(at
$300
per
ton),
or
whether
it
is
a
nonhazardous
industrial
waste
that
can
be
land-disposed
in
an
industrial
landfill
(at
$55
per
ton).

°
The
plant
manager
gave
the
plant
staff
and
consultant
60
days
to
complete
the
study.
The
turn­
around
time
was
established
to
minimize
the
amount
of
time
that
the
waste
was
stored
at
the
facility
while
the
data
were
being
generated,
and
to
allow
adequate
time
to
have
the
waste
shipped
off
site
­
if
it
were
found
to
be
a
hazardous
waste
­
within
the
90­
day
accumulation
time
specified
at
40
CFR
Part
262.34(a).

DQO
Step
2:
Identifying
Possible
Decisions
°
Decision
statement:
The
decision
statement
was
determining
whether
the
spent
PBM
paint
waste
was
hazardous
under
the
RCRA
regulations.

°
Alternative
actions:
If
the
waste
was
hazardous,
then
treatment
and
subsequent
disposal
in
a
RCRA
landfill
would
be
required.

DQO
Step
3:
Identifying
Inputs
to
the
Decision
°
The
decision
was
to
be
based
on
the
quantity
of
waste
generated
over
approximately
a
one­
month
period,
but
not
to
exceed
the
quantity
placed
in
a
single
10-cubic-yard roll-off box.

°
Based
on
process
and
materials
knowledge,
the
team
specified
cadmium
and
chromium
as
the
constituents
of
concern.

°
To
resolve
the
decision
statement,
the
planning
team
needed
to
determine
if,
using
the
Toxicity
Characteristic
Leaching
Procedure
(TCLP)
SW­
846
Method
1311,
the
extract
from
a
representative
sample
of
the
waste
contained
the
constituents
of
concern
at
concentrations
equal
to
or
greater
than
their
regulatory
levels
as
required
by
the
RCRA
regulations
at
40
CFR
261.24.
The
chemist
noted,
however,
that
the
TCLP
method
allows
the
following:
"If
a
total
analysis
of
the
waste
demonstrates
that
individual
analytes
are
not
present
in
the
waste,
or
that
they
are
present
but
at
such
low
concentrations
that
the
appropriate
regulatory
levels
could
not
possibly
be
exceeded,
the
TCLP
need
not
be
run."
With
that
flexibility
in
mind,
the
planning
team
identified
a
candidate
method
for
total
analysis
(including
SW­
846
Method
3050B/
6010),
and
noted
that
the
TCLP
would
be
required
if
the
total
analysis
indicated
TC
levels
could
be
exceeded.

°
The
project
chemist
found
that
SW­
846
Methods
3010A
(prep)
and
6010B
were
suitable
for
analysis
of
the
TCLP
extracts
at
quantitation
limits
at
or
below
the
applicable
regulatory
levels.
°
The
minimum
sample
"support"
was
determined
as
follows:
Method
1311
(TCLP)
specifies
a
minimum
sample
mass
of
100
grams
for
analysis
of
nonvolatile
constituents
and
a
maximum
particle
size
of
9.5
mm.
The
waste
stream,
composed
of
dry
fine
to
medium­
grained
plastic
and
paint
chips,
was
well
within
the
particle
size
requirements
of
the
TCLP.
During
Step
7
of
the
DQO
Process,
the
planning
team
revisited
this
step
to
determine
whether
a
sample
mass
larger
than
100­
grams
would
be
necessary
to
satisfy
the
overall
decision
performance
criteria.

DQO
Step
4:
Defining
Boundaries
°
The
paint
stripping
operation
includes
a
blast
booth,
a
PBM
reclamation
unit,
and
a
waste
collection
roll­
off
box
that
complies
with
the
applicable
container
requirements
of
Subparts
I
and
CC
of
40
CFR
part
265.
The
spent
blast
media
and
paint
waste
is
discharged
to
the
roll­
off
box
from
the
reclamation
unit.
Each
discharge
event
was
considered
a
"batch"
for
the
purposes
of
the
waste
classification
study.

°
When
testing
a
solid
waste
to
determine
if
it
exhibits
a
characteristic
of
hazardous
waste,
the
determination
must
be
made
when
management
of
the
solid
waste
would
potentially
be
subject
to
the
RCRA
hazardous
waste
regulations
at
40
CFR
Part
262
through
265.
Accordingly,
the
planning
team
decided
samples
should
be
obtained
at
the
point
where
the
waste
discharges
from
the
reclamation
unit
into
the
roll­
off
container
(i.
e.,
the
point
of
generation).
Until
such
time
that
the
generator
determined
that
the
waste
is
not
a
hazardous
waste,
the
generator
complied
with
the
applicable
pre­
transport
requirements
at
40
CFR
Part
262
­
Subpart
C
(i.
e.,
packaging,
labeling,
marking,
and
accumulation
time).

°
The
boundary
of
the
decision
was
set
as
the
extent
of
time
over
which
the
decision
applies.
The
boundary
would
change
only
if
there
were
a
process
or
materials
change
that
would
alter
the
composition
of
the
waste.
Such
a
process
or
materials
change
could
include,
for
example,
a
change
in
the
composition,
particle
size
or
particle
shape
of
the
blasting
media,
or
a
significant
change
in
the
application
(pressure)
rate
of
the
blast
media.

DQO
Step
5:
Developing
Decision
Rules
°
The
planning
team
reviewed
the
RCRA
regulations
for
the
Toxicity
Characteristic
at
40
CFR
261.24
and
found
the
regulation
does
not
specify
a
parameter
of
interest
(such
as
the
mean
or
a
percentile).
They
observed,
however,
that
the
Toxicity
Characteristic
(TC)
regulatory
levels
specified
in
Table
1
of
Part
261.24
represent
"maximum"
concentrations
that
cannot
be
equaled
or
exceeded;
otherwise,
the
solid
waste
must
be
classified
as
hazardous.
While
the
regulations
for
hazardous
waste
determination
do
not
require
the
use
of
any
statistical
test
to
make
a
hazardous
waste
determination,
the
planning
team
decided
to
use
a
high
percentile
value
as
a
reasonable
approximation
of
the
maximum
TCLP
sample
analysis
result
that
could
be
obtained
from
a
sample
of
the
waste.
Their
objective
was
to
"prove
the
negative"
­
that
is,
to
demonstrate
with
a
desired
level
of
confidence
that
the
vast
majority
of
the
waste
was
nonhazardous.
The
upper
90th
percentile
was
selected.
The
team
specified
an
additional
constraint
that
no
single
sample
could
exceed
the
standard.
Otherwise,
there
may
be
evidence
that
the
waste
is
hazardous
at
least
part
of
the
time.

°
The
Action
Levels
were
set
at
the
TC
regulatory
limits
specified
in
Table
1
of
40
CFR
Part
261.24:

Cadmium: 1.0 mg/L TCLP
Chromium: 5.0 mg/L TCLP
°
The
decision
rule
was
then
established
as
follows:
"If
the
upper
90th percentile
TCLP
concentration
for
cadmium
or
chromium
in
the
waste
and
all
sample
analysis
results
are
less
than
their
respective
action
levels
of
1.0
and
5.0
mg/
L
TCLP,
then
the
waste
can
be
classified
as
nonhazardous
waste
under
RCRA;
otherwise,
the
waste
will
be
considered
a
hazardous
waste."

DQO
Step
6:
Specifying
Limits
on
Decision
Errors
°
The
null
hypothesis
was
that
the
waste
is
hazardous,
i.
e.,
the
true
proportion
(P)
of
samples
with
concentrations
of
cadmium
or
chromium
less
than
their
regulatory
thresholds
is
less
than
0.90,
or
Ho:
P
<
0.90.

°
Two
potential
decision
errors
could
be
made
based
on
interpreting
sampling
and
analytical
data:

Decision
Error
A:
Concluding
that
the
true
proportion
(P)
of
the
waste
that
is
nonhazardous
was
greater
than
0.90
when
it
was
truly
less
than
0.90,
or
Decision
Error
B:
Concluding
that
the
true
proportion
(P)
of
the
waste
that
is
nonhazardous
was
less
than
0.90
when
it
was
truly
greater
than
0.90.

The
consequences
of
Decision
Error
A
­
incorrectly
deciding
the
waste
was
nonhazardous
­
would
lead
the
facility
to
ship
untreated
hazardous
waste
off
site
for
disposal
in
solid
waste
landfill,
likely
increase
health
risks
for
onsite
workers,
and
pose
potential
future
legal
problems
for
the
owner.

The
consequences
of
Decision
Error
B
­
incorrectly
deciding
the
waste
was
hazardous
when
in
fact
it
is
not
hazardous
­
would
cause
the
needless
costs
for
treatment
and
disposal,
but
with
no
negative
environmental
consequences.

Error
A,
incorrectly
deciding
that
a
hazardous
waste
is
a
nonhazardous
waste,
posed
more
severe
consequences
for
the
generator
in
terms
of
liability
and
compliance
concerns.
Consequently,
the
baseline
condition
(null
hypothesis)
chosen
was
that
the
true
proportion
of
waste
that
is
nonhazardous
is
less
than
90
percent.
Table I-7. Null Hypothesis and Possible Decision Errors for Example 2

"Null Hypothesis" (baseline condition): The true proportion (P) of waste that is nonhazardous is less than 0.90.

Possible Decision Errors:
  Type I Error (α), False Rejection: Concluding the waste is nonhazardous when, in fact, it is hazardous.
  Type II Error (β), False Acceptance: Concluding the waste is hazardous when, in fact, it is nonhazardous.

°
Next,
it
was
necessary
to
specify
the
boundaries
of
the
gray
region.
When
the
null
hypothesis
(baseline
condition)
assumes
that
the
waste
is
hazardous
(as
in
this
example),
one
limit
of
the
gray
region
is
bounded
by
the
Action
Level
and
the
other
limit
is
set
at
a
point
where
it
is
desirable
to
control
the
Type
II
(false
acceptance)
error.
The
project
team
set
one
bound
of
the
gray
region
at
0.90
(the
Action
Level).
Since
a
"no
exceedance"
criterion
is
included
in
the
decision
rule,
the
other
bound
of
the
gray
region
is
effectively
set
at
1.

°
The
DQO
planning
team
then set the acceptable probability of making a Type I (false rejection) error at 10 percent (α = 0.10). In other words, they were willing to
accept
a
10
percent
chance
of
concluding
the
waste
is
nonhazardous
when
at
least
a
portion
of
the
waste
is
hazardous.
The
use
of
the
exceedance
rule
method
does
not
require
specification
of
the
Type
II
(false
acceptance)
error
rate.

°
The
information
collected
in
Step
6
of
the
DQO
Process
is
summarized
below.

Table I-8. Initial Outputs of Step 6 of the DQO Process - Example 2

Needed Parameter                                                        Output
Action Level                                                            0.90
Gray Region                                                             0.90 to 1.0 (Δ = 0.10)
Null Hypothesis (Ho)                                                    P < 0.90
False Rejection Decision Error Limit (probability of a Type I error)    α = 0.10
False Acceptance Decision Error Limit (probability of a Type II error)  Not specified
DQO
Step
7:
Optimizing
the
Data
Collection
Design
°
Review
outputs
from
the
first
six
steps
of
the
DQO
Process.
The
planning
team
reviewed
the
outputs
of
the
first
six
steps
of
the
DQO
Process.

• Consider various data collection designs. The DQO planning team considered two probabilistic sampling designs: simple random and systematic (random within time intervals). Both the simple random and the systematic design would allow the facility owner to estimate whether a high percentage of the waste complies with the standard. The team also considered using an authoritative "biased" sampling design to estimate the high-end or "worst case" waste characteristics.

Two analytical plans were then considered: one in which the full TCLP would be performed on each sample, and one in which TCLP concentrations could be estimated from total concentrations by comparing each total sample analysis result to 20 times the TC regulatory limit (to account for the 20:1 dilution used in the TCLP).
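The 20× screening comparison can be sketched as follows. This is a minimal illustration, not a method from the guidance: the function name is hypothetical, and the example values use the cadmium TC limit of 1.0 mg/L from this example.

```python
# Screen a total-concentration result (mg/kg) against a TC limit (mg/L)
# using the maximum theoretical TCLP concentration. Because the TCLP uses
# a 20:1 liquid-to-solid ratio, the extract concentration cannot exceed
# the total concentration divided by 20.

def needs_full_tclp(total_mg_per_kg: float, tc_limit_mg_per_l: float) -> bool:
    """Return True if the maximum theoretical TCLP result could reach the
    TC regulatory limit, so the full TCLP must be performed."""
    max_theoretical_tclp = total_mg_per_kg / 20.0  # mg/L
    return max_theoretical_tclp >= tc_limit_mg_per_l

# Hypothetical cadmium totals (TC limit for Cd = 1.0 mg/L):
print(needs_full_tclp(29, 1.0))  # 29/20 = 1.45 mg/L -> True
print(needs_full_tclp(6, 1.0))   # 6/20 = 0.30 mg/L -> False
```

A sample whose total result divided by 20 falls below the TC limit cannot fail the TCLP, so the cheaper total analysis can rule out the full extraction for most samples.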

The laboratory requested a sample mass of at least 300 grams (per sample) to allow the laboratory to perform the preliminary analyses required by the TCLP and to provide sufficient mass to perform the full TCLP (if required).

The practical considerations were then evaluated for each alternative design, including access to sampling locations, worker safety, equipment selection/use, experience needed, special analytical needs, and scheduling.

• Select the optimal number of samples. Since the decision rule specified no exceedance of the standard in any sample, the number of samples was determined from Table G-3a in Appendix G. The table is based on the formula n = log(α) / log(p). For a desired α = 0.10 and p = 0.90, the number of samples (n) for a simple random or systematic sampling design was n = log(0.10) / log(0.90) ≈ 22.
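The sample-size calculation from Table G-3a can be reproduced directly. This is a sketch; `exceedance_rule_n` is a hypothetical name, not a function defined in the guidance.

```python
import math

# Exceedance-rule sample size: the smallest n such that p**n <= alpha,
# i.e., n = log(alpha) / log(p), rounded up to a whole sample.
def exceedance_rule_n(p: float, alpha: float) -> int:
    return math.ceil(math.log(alpha) / math.log(p))

print(exceedance_rule_n(0.90, 0.10))  # -> 22
```

The rationale: if the true proportion of compliant waste were only p = 0.90, the chance that all n samples still come out clean is p**n; requiring p**n ≤ α = 0.10 gives n = 22.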
The team also considered how many samples might be required if a nonprobabilistic authoritative sampling design were used. Some members of the planning team thought that significantly fewer samples (e.g., four) could be used to make a hazardous waste determination, and they pointed out that the RCRA regulations do not require statistical sampling for waste classification. On the other hand, other members of the planning team argued against the authoritative design. They argued that there was insufficient knowledge of the waste to implement authoritative sampling and noted that a few samples taken in a nonprobabilistic manner would limit their ability to quantify any possible decision errors.

• Select a resource-effective design. The planning team evaluated the sampling and analytical design options and costs. The following table summarizes the estimated costs for the four sampling designs evaluated.
Table I-9. Estimated Costs for Implementing Candidate Sampling Designs

                                          Simple Random or      Simple Random or      Authoritative (Biased)  Authoritative (Biased)
                                          Systematic Sampling   Systematic Sampling   Sampling                Sampling
                                          (total metals only)   (TCLP metals)         (total metals only)     (TCLP metals)
Sample collection cost (per sample)       $50                   $50                   $50                     $50
Analysis cost:
• SW-846 Methods 3050B/6010B
  (total Cd and Cr) (per sample)          $40                   --                    $40                     --
• SW-846 TCLP Method 1311; extract
  analyzed by SW-846 Methods
  3010A/6010B (per sample)                --                    $220                  --                      $220
Number of samples                         22                    22                    4                       4
Total Estimated Cost                      $1,980                $5,940                $360                    $1,080
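Each total in Table I-9 is simply the per-sample collection cost plus the per-sample analysis cost, multiplied by the number of samples. A small sketch reproducing the totals (the short design labels are shorthand introduced here, not names from the document):

```python
# (collection + analysis cost per sample, number of samples) for each design
designs = {
    "random/systematic, total metals": (50 + 40, 22),
    "random/systematic, TCLP metals":  (50 + 220, 22),
    "authoritative, total metals":     (50 + 40, 4),
    "authoritative, TCLP metals":      (50 + 220, 4),
}

for name, (cost_per_sample, n_samples) in designs.items():
    print(f"{name}: ${cost_per_sample * n_samples:,}")
# -> $1,980 / $5,940 / $360 / $1,080
```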
While the authoritative design with total metals analysis offered the least cost compared to the probabilistic designs, the team decided that they did not yet have sufficient knowledge of the waste, its leaching characteristics, or the process to use an authoritative sampling approach with total metals analysis only. Furthermore, the team needed to quantify the probability of making a decision error. The planning team selected the systematic design with total metals analysis for Cd and Cr, with the condition that if any total sample analysis result indicated the maximum theoretical TCLP result could exceed the TC limit, then the TCLP would be performed for that sample. This approach was selected because it was easy to implement, it would provide adequate waste knowledge for future waste management decisions (assuming no change in the waste generation process), and it would satisfy other cost and performance objectives specified by the planning team.

• Prepare a QAPP/SAP. The operational details of the sampling and analytical activities are documented in a Quality Assurance Project Plan and Sampling and Analysis Plan (QAPP/SAP).

Implementation Phase

The QAPP/SAP was implemented in accordance with the schedule and the facility's safety program. Based on the rate of waste generation, it was estimated that the roll-off box would be filled in about 30 work days, assuming one "batch" of waste was placed in the roll-off box each day. It was decided to obtain one random sample from each batch as the waste was discharged from the reclamation unit to the roll-off container (i.e., at the point of waste generation). See Figure I-5.
[Figure I-5. Systematic sampling design with random sampling times selected within each batch (not to scale). Waste from the blast booth passes through the recovery/reclamation system; reclaimed blast media are returned to use, and the residual waste is discharged to the roll-off box at the point of waste generation, which is also the sampling point. Random samples are taken within each batch (Batch 1, Batch 2, etc.). If hazardous, the waste is accumulated less than 90 days prior to shipment off site per 40 CFR Part 262.34(a).]
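The random-within-batch draw described above can be sketched as follows. This is an illustration only: the 8-hour batch duration and the fixed seed are assumptions introduced here, not details from the example.

```python
import random

# Systematic (random-within-batch) design: one batch of waste is
# discharged each work day, and one sampling time is drawn at random
# within each batch.
random.seed(42)  # fixed seed so the draw is reproducible

n_batches = 30            # roll-off box filled in about 30 work days
minutes_per_batch = 8 * 60  # assumed 8-hour work day, for illustration

# One random sampling time (minutes into the batch) per batch
sampling_times = [random.randrange(minutes_per_batch) for _ in range(n_batches)]
print(len(sampling_times))  # -> 30, one sampling time per batch
```

Randomizing the time within each batch preserves the even coverage of a systematic design while protecting against any periodicity in the waste-generation process.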
The QAPP/SAP established the following DQOs and performance goals for the equipment. The sampling device must meet the following criteria:

• Be able to obtain a minimum mass of 300 grams for each sample
• Be constructed of materials that will not alter analyte concentrations due to loss or gain of analytes via sorption, desorption, degradation, or corrosion
• Be easy to use, safe, and low cost
• Be capable of obtaining increments of the waste at the discharge drop without introducing sampling bias.

The following four steps were taken to select the sampling device (from Section 7.1):

Step 1 - Identify the Medium To Be Sampled

Based on a prior inspection, it was known that the waste is an unconsolidated, dry, granular solid. Using Table 8 in Section 7.1, we find the media descriptor that most closely matches the waste in the first column of the table: "Other Solids - Unconsolidated."

Step 2 - Select the Sample Location

The second column of Table 8 provides a list of common sampling locations for unconsolidated solids. The discharge drop opening is four inches wide, and the waste is released downward into the collection box. "Pipe or Conveyor," found in the table, is the closest match to the configuration of the waste discharge point.

Step 3 - Identify Candidate Sampling Devices

The third column of Table 8 provides a list of candidate sampling devices for sampling solids from a pipe or conveyor. For this waste stream, the list of devices for sampling a pipe or conveyor includes bucket, dipper, pan, sample container, miniature core sampler, scoop/trowel/shovel, and trier. The planning team immediately eliminated the miniature core sampler, scoop/trowel/shovel, and trier because they are not suitable for obtaining samples from a falling stream or vertical discharge.

Step 4 - Select Devices

From the list of candidate sampling devices, one device was selected for use in the field from Table 9 in Section 7.1. Selection of the equipment was made after consideration of the DQOs for the sample support (i.e., required volume, width, shape, and orientation), the performance goals established for the sampling device, ease of use and decontamination, worker safety issues, cost, and any practical considerations. Table I-10 demonstrates how the DQOs and performance goals were used to narrow the candidate devices down to just one or two.

Table I-10. Using DQOs and Performance Goals To Select a Final Sampling Device

                    Data Quality Objectives and Performance Goals
Candidate          Required     Orientation and      Sample     Operational         Desired Material
Devices            Width        Shape                Volume     Considerations      of Construction
                   (4 inches)   (cross-section of    (>300 g)   (device is          (polyethylene
                                entire stream)                  portable, safe,     or PTFE)
                                                                and low cost?)
Bucket             Y            Y                    Y          Y                   Y
Dipper             N            Y                    Y          Y                   Y
Pan                Y            Y                    Y          Y                   Y
Sample container   N            N                    Y          Y                   Y

Key: Y = The device is capable of achieving the specified DQO or performance goal. N = The device is not capable of achieving the specified DQO or performance goal.
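The narrowing shown in Table I-10 amounts to retaining only devices that meet every criterion. A minimal sketch (the boolean encoding of the table's Y/N marks is introduced here for illustration):

```python
# Y/N marks from Table I-10, in column order: required width, orientation
# and shape, sample volume, operational considerations, material.
devices = {
    "bucket":           [True, True, True, True, True],
    "dipper":           [False, True, True, True, True],
    "pan":              [True, True, True, True, True],
    "sample container": [False, False, True, True, True],
}

# A device survives only if it achieves every DQO and performance goal.
suitable = [name for name, marks in devices.items() if all(marks)]
print(suitable)  # -> ['bucket', 'pan']
```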

The sampling mode was "one-dimensional"; that is, the material is relatively linear in time and space. The ideal sampling device would obtain a sample of constant thickness and must be capable of obtaining the entire width of the stream for a fraction of the time (see the discussion at Section 6.3.2.1). Either a bucket or a pan wide enough (preferably 3 times the width of the stream) to obtain all of the flow for a fraction of the time was identified as a suitable device, because both are capable of achieving all the performance goals.

A flat, 12-inch-wide polyethylene pan with vertical sides was used to collect each primary field sample. Each primary field sample was approximately 2 kilograms; therefore, the field team used the "fractional shoveling" technique (see Section 7.3.2) to reduce the sample mass to a subsample of approximately 300 grams. The field samples (each in a 32-oz jar) and associated field QC samples were submitted to the laboratory in accordance with the sample handling and shipping instructions specified in the QAPP/SAP.

A total of 30 samples were obtained by the time the roll-off box was filled, so it was necessary to randomly select 22 samples from the set of 30 for laboratory analysis.
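One way to make such a selection is simple random sampling without replacement, so every field sample has an equal chance of being analyzed. A sketch (the fixed seed is illustrative, used only to make the draw reproducible):

```python
import random

# Randomly select 22 of the 30 field samples (numbered 1-30) for
# laboratory analysis, without replacement.
random.seed(7)  # illustrative fixed seed for a reproducible selection
selected = sorted(random.sample(range(1, 31), k=22))

print(len(selected))       # -> 22
print(len(set(selected)))  # -> 22 (no sample chosen twice)
```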

All 22 samples were first analyzed for total cadmium and chromium to determine whether the maximum theoretical TCLP concentration in any one sample could exceed the applicable TC limit. Samples whose maximum theoretical TCLP value exceeded the applicable TC limit were then analyzed using the full TCLP.

For the TCLP samples, no particle-size reduction was required for the sample extraction because the maximum particle size in the waste passed through a 9.5-mm sieve (the maximum particle size allowed for the TCLP). On a small subsample of the waste, however, particle-size reduction to 1 mm was required to determine the TCLP extract type (I or II). A 100-gram subsample was taken from each field sample for TCLP analysis.

Assessment Phase

Data Verification and Validation

Sampling and analytical records were reviewed to check compliance with the QAPP/SAP. The data collected during the study met the DQOs. Sampling and analytical error were minimized through the use of a statistical sampling design, correct field sampling and subsampling procedures, and adherence to the requirements of the analytical methods. The material that was sampled did not present any special problems concerning access to sampling locations, equipment usage, particle-size distribution, or matrix interferences. Quantitation limits achieved for total cadmium and chromium were 5 mg/kg and 10 mg/kg, respectively. Quantitation limits achieved for cadmium and chromium in the TCLP extract were 0.10 mg/L and 1.0 mg/L, respectively. The analytical package was validated, and the data generated were judged acceptable for their intended purpose.

Data Quality Assessment

DQA was performed using the approach outlined in Section 9.8.2 and EPA QA/G-9 (USEPA 2000d):

1. Review DQOs and sampling design. The DQO planning team reviewed the original objectives: "If the upper 90th percentile TCLP concentration for cadmium or chromium in the waste and all sample analysis results are less than their respective action levels of 1.0 and 5.0 mg/L TCLP, then the waste can be classified as nonhazardous waste under RCRA; otherwise, the waste will be considered a hazardous waste."

2. Prepare the data for statistical analysis. The summary of the verified and validated data was received in hard-copy format and summarized in a table. The table was checked by a second person for accuracy. The results for the data collection effort are listed in Table I-11.
Table I-11. Total and TCLP Sample Analysis Results

            Cadmium                                    Chromium
Sample   Total     Total/20                        Total     Total/20
No.      (mg/kg)   (TC limit = 1 mg/L)             (mg/kg)   (TC limit = 5 mg/L)
1        <5        <0.25                           11        0.55
2        6         0.3                             <10       <0.5
3        29        1.45 (full TCLP = 0.72)         <10       <0.5
4        <5        <0.25                           <10       <0.5
5        <5        <0.25                           42        2.1
6        7         0.35                            <10       <0.5
7        7         0.35                            <10       <0.5
8        13        0.65                            26        1.3
9        <5        <0.25                           19        0.95
10       <5        <0.25                           <10       <0.5
11       36        1.8 (full TCLP = 0.8)           <10       <0.5
12       <5        <0.25                           <10       <0.5
13       <5        <0.25                           <10       <0.5
14       <5        <0.25                           12        0.6
15       <5        <0.25                           <10       <0.5
16       9         0.45                            <10       <0.5
17       <5        <0.25                           <10       <0.5
18       <5        <0.25                           <10       <0.5
19       <5        <0.25                           31        1.55
20       20        1 (full TCLP = <0.10)           <10       <0.5
21       <5        <0.25                           <10       <0.5
22       <5        <0.25                           <10       <0.5
3. Conduct preliminary analysis of data and check distributional assumptions. To use the nonparametric "exceedance rule," no distributional assumptions are required. The only requirements are a random sample and a quantitation limit less than the applicable standard. These requirements were met.

4. Select and perform the statistical test. The maximum TCLP sample analysis results for cadmium and chromium were compared to their respective TC regulatory limits. While several of the total results indicated that the maximum theoretical TCLP result could exceed the regulatory limit, subsequent analysis of the TCLP extracts from these samples indicated the TCLP concentrations were below the regulatory limits.
¹ Note that if fewer than 22 samples were analyzed (for example, due to a lost sample) and all sample analysis results indicated concentrations less than the applicable standard, then one could still conclude that 90 percent of all possible samples are less than the standard, but with a lower level of confidence. See Section 5.5.2, Equation 17.
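The confidence statement behind this footnote can be checked numerically: when all n samples fall below the standard, the confidence that at least a fraction p of all possible samples are below it is 1 − pⁿ. A sketch (`exceedance_confidence` is a hypothetical name introduced here):

```python
# Confidence that at least a fraction p of all possible samples are below
# the standard, given that all n analyzed samples are below it:
# confidence = 1 - p**n (cf. Section 5.5.2, Equation 17).
def exceedance_confidence(p: float, n: int) -> float:
    return 1.0 - p ** n

print(round(exceedance_confidence(0.90, 22), 3))  # -> 0.902 (at least 90%)
print(round(exceedance_confidence(0.90, 18), 3))  # fewer samples, lower confidence
```

With the full 22 samples the confidence just clears the 90-percent goal; losing a few samples lowers it, which is exactly the trade-off the footnote describes.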
5. Draw conclusions and report results. All 22 sample analysis results were less than the applicable TC limits; therefore, the owner concluded with at least 90-percent confidence that at least 90 percent of all possible samples of the waste would be below the TC regulatory levels. Based on the decision rule established for the study, the owner decided to manage the waste as a nonhazardous waste.¹ A summary report including a description of all planning, implementation, and assessment activities was placed in the operating record.
Contact ASTM

For more information on ASTM or how to purchase their publications, including the standards referenced by this appendix, contact them at: ASTM, 100 Barr Harbor Drive, West Conshohocken, PA 19428-2959; telephone: 610-832-9585; World Wide Web: http://www.astm.org.
APPENDIX J

SUMMARIES OF ASTM STANDARDS

ASTM (the American Society for Testing and Materials) is one of the entities that can provide additional useful information on sampling. This appendix references many of the standards published by ASTM that are related to sampling.

ASTM is a not-for-profit organization that provides a forum for writing standards for materials, products, systems, and services. The Society develops and publishes standard test methods, specifications, practices, guides, classifications, and terminology.

Each ASTM standard is developed within the consensus principles of the Society and meets the approved requirements of its procedures. The voluntary, full-consensus approach brings together people with diverse backgrounds and knowledge. The standards undergo intense round-robin testing. Strict balloting and due-process procedures guarantee accurate, up-to-date information.

To help you determine which ASTM standards may be most useful, this appendix includes text found in the scope of each standard. The standards, listed in alphanumerical order, each deal in some way with sample collection. ASTM plans to publish these standards together in one volume on sampling.

D 140 Standard Practice for Sampling Bituminous Materials

This practice applies to the sampling of bituminous materials at points of manufacture, storage, or delivery.

D 346 Standard Practice for Collection and Preparation of Coke Samples for Laboratory Analysis

This practice covers procedures for the collection and reduction of samples of coke to be used for physical tests, chemical analyses, and the determination of total moisture.

D 420 Guide to Site Characterization for Engineering, Design, and Construction Purposes

This guide refers to ASTM methods by which soil, rock, and ground-water conditions may be determined. The objective of the investigation should be to identify and locate, both horizontally and vertically, significant soil and rock types and ground-water conditions present within a given site area and to establish the characteristics of the subsurface materials by sampling or in situ testing, or both.
D 1452 Standard Practice for Soil Investigation and Sampling by Auger Borings

This practice covers equipment and procedures for the use of earth augers in shallow geotechnical exploration. It does not apply to sectional continuous flight augers. This practice applies to any purpose for which disturbed samples can be used. Augers are valuable in connection with ground-water level determinations, to help indicate changes in strata, and in the advancement of a hole for spoon and tube sampling.

D 1586 Standard Test Method for Penetration Test and Split-Barrel Sampling of Soils

This test method describes the procedure, generally known as the Standard Penetration Test, for driving a split-barrel sampler. The procedure is used to obtain a representative soil sample and to measure the resistance of the soil to penetration of the sampler.

D 1587 Standard Practice for Thin-Walled Tube Geotechnical Sampling of Soils

This practice covers a procedure for using a thin-walled metal tube to recover relatively undisturbed soil samples suitable for laboratory tests of structural properties. Thin-walled tubes used in piston, plug, or rotary-type samplers, such as the Denison or Pitcher sampler, should comply with the portions of this practice that describe the thin-walled tubes. This practice is used when it is necessary to obtain a relatively undisturbed sample. It does not apply to liners used within the above samplers.

D 2113 Standard Practice for Diamond Core Drilling for Site Investigation

This practice describes equipment and procedures for diamond core drilling to secure core samples of rock and some soils that are too hard to sample by soil-sampling methods. This method is described in the context of obtaining data for foundation design and geotechnical engineering purposes rather than for mineral and mining exploration.

D 2234 Standard Practice for Collection of a Gross Sample of Coal

This practice covers procedures for the collection of a gross sample of coal under various conditions of sampling. The practice describes general and special-purpose sampling procedures for coals by size and condition of preparation (e.g., mechanically cleaned coal or raw coal) and by sampling characteristics. The sample is to be crushed and further prepared for analysis in accordance with ASTM Method D 2013. This practice also gives procedures for dividing large samples before any crushing.

D 3213 Standard Practices for Handling, Storing, and Preparing Soft Undisturbed Marine Soil

These practices cover methods for project/cruise reporting and for the handling, transporting, and storing of soft, cohesive, undisturbed marine soil. The practices also cover procedures for preparing soil specimens for triaxial strength and consolidation testing. These practices may include the handling and transporting of sediment specimens contaminated with hazardous materials and samples subject to quarantine regulations.
D 3326 Standard Practice for Preparation of Samples for Identification of Waterborne Oils

This practice covers the preparation for analysis of waterborne oils recovered from water. The identification is based on the comparison of physical and chemical characteristics of the waterborne oils with oils from suspect sources. These oils may be of petroleum or vegetable/animal origin, or both. The practice covers the following seven procedures (A through G): Procedure A, for samples of more than 50-mL volume containing significant quantities of hydrocarbons with boiling points above 280°C; Procedure B, for samples containing significant quantities of hydrocarbons with boiling points above 280°C; Procedure C, for waterborne oils containing significant amounts of components boiling below 280°C and for mixtures of these and higher-boiling components; Procedure D, for samples containing both petroleum and vegetable/animal-derived oils; Procedure E, for samples of light crudes and medium distillate fuels; Procedure F, for thin films of oil-on-water; and Procedure G, for oil-soaked samples.

D 3370 Standard Practices for Sampling Water from Closed Conduits

These practices cover the equipment and methods for sampling water from closed conduits (e.g., process streams) for chemical, physical, and microbiological analyses. They provide practices for grab sampling, composite sampling, and continual sampling of closed conduits.

D 3550 Standard Practice for Ring-Lined Barrel Sampling of Soils

This practice covers a procedure for using a ring-lined barrel sampler to obtain representative samples of soil for identification purposes and other laboratory tests. In cases in which it has been established that the quality of the sample is adequate, this practice provides shear and consolidation specimens that can be used directly in the test apparatus without prior trimming. Some types of soils may gain or lose significant shear strength or compressibility, or both, as a result of sampling. In cases like these, suitable comparison tests should be made to evaluate the effect of sample disturbance on shear strength and compressibility. This practice is not intended to be used as a penetration test; however, the force required to achieve penetration, or a blow count when driving is necessary, is recommended as supplemental information.

D 3665 Standard Practice for Random Sampling of Construction Materials

This practice covers the determination of random locations (or timing) at which samples of construction materials can be taken. For the exact physical procedures for securing the sample, such as a description of the sampling tool, the number of increments needed for a sample, or the size of the sample, reference should be made to the appropriate standard method.

D 3975 Standard Practice for Development and Use (Preparation) of Samples for Collaborative Testing of Methods for Analysis of Sediments

This practice establishes uniform general procedures for the development, preparation, and use of samples in the collaborative testing of methods for chemical analysis of sediments and similar materials. The principles of this practice are applicable to aqueous samples with suitable technical modifications.
D 3976 Standard Practice for Preparation of Sediment Samples for Chemical Analysis

This practice describes standard procedures for preparing test samples (including the removal of occluded water and moisture) from field samples collected from locations such as streams, rivers, ponds, lakes, and oceans. These procedures are applicable to the determination of volatile, semivolatile, and nonvolatile constituents of sediments.

D 3694 Standard Practices for Preparation of Sample Containers and for Preservation of Organic Constituents

These practices cover the various means of (1) preparing sample containers used for collection of waters to be analyzed for organic constituents and (2) preserving such samples from the time of sample collection until the time of analysis. The sample preservation practice depends on the specific analysis to be conducted. Preservation practices are listed with the corresponding applicable general and specific constituent test method. The preservation method for waterborne oils is given in Practice D 3325. Use of the information given will make it possible to choose the minimum number of sample preservation practices necessary to ensure the integrity of a sample designated for multiple analyses.

D 4136 Standard Practice for Sampling Phytoplankton with Water-Sampling Bottles

This practice covers the procedures for obtaining quantitative samples of a phytoplankton community by the use of water-sampling bottles.

D 4220 Standard Practices for Preserving and Transporting Soil Samples

These practices cover procedures for preserving soil samples immediately after they are obtained in the field and accompanying procedures for transporting and handling the samples. These practices are not intended to address requirements applicable to transporting soil samples known or suspected to contain hazardous materials.

D 4342 Standard Practice for Collecting of Benthic Macroinvertebrates with Ponar Grab Sampler

This practice covers the procedures for obtaining qualitative or quantitative samples of macroinvertebrates inhabiting a wide range of bottom substrate types (e.g., coarse sand, fine gravel, clay, mud, marl, and similar substrates). The Ponar grab sampler is used in freshwater lakes, rivers, estuaries, reservoirs, oceans, and similar habitats.

D 4343 Standard Practice for Collecting Benthic Macroinvertebrates with Ekman Grab Sampler

This practice covers the procedures for obtaining qualitative or quantitative samples of macroinvertebrates inhabiting soft sediments. The Ekman grab sampler is used in freshwater lakes, reservoirs, and, usually, small bodies of water.
D 4387 Standard Guide for Selecting Grab Sampling Devices for Collecting Benthic Macroinvertebrates

This guide covers the selection of grab sampling devices for collecting benthic macroinvertebrates. Qualitative and quantitative samples of macroinvertebrates in sediments or substrates are usually taken by grab samplers. The guide discusses the advantages and limitations of the Ponar, Peterson, Ekman, and other grab samplers.

D 4411 Standard Guide for Sampling Fluvial Sediment in Motion

This guide covers the equipment and basic procedures for sampling to determine the discharge of sediment transported by moving liquids. Equipment and procedures were originally developed to sample mineral sediments transported by rivers, but they are also applicable to sampling a variety of sediments transported in open channels or closed conduits. The procedures do not apply to sediments transported by flotation. This guide does not pertain directly to sampling to determine nondischarge-weighted concentrations, which in special instances are of interest. However, much of the descriptive information on sampler requirements and sediment transport phenomena is applicable in sampling for these concentrations, and the guide briefly specifies suitable equipment.

D 4448 Standard Guide for Sampling Groundwater Monitoring Wells

This guide covers procedures for obtaining valid, representative samples from ground-water monitoring wells. The scope is limited to sampling and "in the field" preservation and does not include well location, depth, well development, design and construction, screening, or analytical procedures. This guide provides a review of many of the most commonly used methods for sampling ground-water quality monitoring wells and is not intended to serve as a ground-water monitoring plan for any specific application. Because of the large and ever-increasing number of options available, no single guide can be viewed as comprehensive. The practitioner must make every effort to ensure that the methods used, whether or not they are addressed in this guide, are adequate to satisfy the monitoring objectives at each site.

D 4489 Standard Practices for Sampling of Waterborne Oils

These practices describe the procedures to be used in collecting samples of waterborne oils, oil found on adjoining shorelines, or oil-soaked debris, for comparison of oils by spectroscopic and chromatographic techniques and for elemental analyses. Two practices are described. Practice A involves "grab sampling" macro oil samples. Practice B involves sampling most types of waterborne oils and is particularly applicable in sampling thin oil films or slicks. Practice selection will be dictated by the physical characteristics and the location of the spilled oil. Specifically, the two practices are (1) Practice A, for grab sampling thick layers of oil, viscous oils or oil-soaked debris, oil globules, tar balls, or stranded oil, and (2) Practice B, for TFE-fluorocarbon polymer strip samplers. Each of the two practices collects oil samples with a minimum of water, thereby reducing the possibility of chemical, physical, or biological alteration by prolonged contact with water between the time of collection and analysis.
D 4547 Standard Guide for Sampling Waste and Soils for Volatile Organic Compounds

This guide describes recommended procedures for the collection, handling, and preparation of solid waste, soil, and sediment subsamples for subsequent determination of volatile organic compounds (VOCs). This class of compounds includes low-molecular-weight aromatics, hydrocarbons, halogenated hydrocarbons, ketones, acetates, nitriles, acrylates, ethers, and sulfides with boiling points below 200°C that are insoluble or slightly soluble in water. Methods of subsample collection, handling, and preparation for analysis are described. This guide does not cover the details of sampling design, laboratory preparation of containers, or the analysis of the subsamples.

D 4687 Standard Guide for General Planning of Waste Sampling

This guide provides information for formulating and planning the many aspects of waste sampling that are common to most waste-sampling situations. This guide addresses the following aspects of sampling: sampling plans, safety plans, quality assurance considerations, general sampling considerations, preservation and containerization, cleaning equipment, labeling and shipping procedures, and chain-of-custody procedures. This guide does not provide comprehensive sampling procedures for these aspects, nor does it serve as a guide to any specific application.

D 4696 Standard Guide for Pore-Liquid Sampling from the Vadose Zone

This guide discusses equipment and procedures used for sampling pore-liquid from the vadose zone (unsaturated zone). The guide is limited to in-situ techniques and does not include soil core collection and extraction methods for obtaining samples. The term "pore-liquid" is applicable to any liquid, from aqueous pore-liquid to oil; however, all of the samplers described in this guide are designed to sample aqueous pore-liquids only. The abilities of these samplers to collect other pore-liquids may be quite different from those described. Some of the samplers described in the guide are not currently commercially available. These samplers are presented because they may have been available in the past and may be encountered at sites with established vadose-zone monitoring programs. In addition, some of these designs are particularly suited to specific situations. If needed, these samplers could be fabricated.

D 4700 Standard Guide for Soil Sampling from the Vadose Zone

This guide addresses procedures that may be used for obtaining soil samples from the vadose zone (unsaturated zone). Samples can be collected for a variety of reasons, including the following:

• Stratigraphic description
• Hydraulic conductivity testing
• Moisture content measurement
• Moisture release curve construction
• Geotechnical testing
• Soil gas analyses
• Microorganism extraction
• Pore-liquid and soil chemical analyses.
This guide focuses on methods that provide soil samples for chemical analyses of the soil or contained liquids or contaminants. Comments on how methods may be modified for other objectives, however, are also included. This guide does not describe sampling methods for lithified deposits and rocks (e.g., sandstone, shale, tuff, granite).

D 4823 Standard Guide for Core Sampling Submerged, Unconsolidated Sediments

This guide covers core-sampling terminology, advantages and disadvantages of various core samplers, core distortions that may occur during sampling, techniques for detecting and minimizing core distortions, and methods for dissecting and preserving sediment cores. In this guide, sampling procedures and equipment are divided into the following categories (based on water depth): sampling in depths shallower than 0.5 m, sampling in depths between 0.5 m and 10 m, and sampling in depths exceeding 10 m. Each category is divided into two sections: (1) equipment for collecting short cores and (2) equipment for collecting long cores. This guide also emphasizes general principles. Only in a few instances are step-by-step instructions given. Because core sampling is a field-based operation, methods and equipment usually must be modified to suit local conditions. Drawings of samplers are included to show sizes and proportions. These samplers are offered primarily as examples (or generic representations) of equipment that can be purchased commercially or built from plans in technical journals. This guide is a brief summary of published scientific articles and engineering reports, and the references are listed. These documents provide operational details that are not given in the guide but are nevertheless essential to the successful planning and completion of core sampling projects.
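The water-depth categories above can be sketched as a simple selection rule. This is an illustrative sketch only: the depth breakpoints come from the guide's scope, but the function itself (and the assignment of the exact 0.5 m and 10 m boundary values to the middle category) is not part of ASTM D 4823.

```python
def depth_category(depth_m: float) -> str:
    """Return the water-depth category used to organize sampling
    equipment in this guide (boundary values are assigned to the
    middle category here as an assumption)."""
    if depth_m < 0.5:
        return "shallower than 0.5 m"
    if depth_m <= 10:
        return "between 0.5 m and 10 m"
    return "exceeding 10 m"
```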

D 4840 Standard Guide for Sampling Chain-of-Custody Procedures

This guide contains a comprehensive discussion of potential requirements for a sample chain-of-custody program and describes the procedures involved in sample chain-of-custody. The purpose of these procedures is to provide accountability for and documentation of sample integrity from the time of sample collection until sample disposal. These procedures are intended to document sample possession during each stage of a sample's life cycle, that is, during collection, shipment, storage, and the process of analysis. Sample chain of custody is just one aspect of the larger issue of data defensibility. A sufficient chain-of-custody process (i.e., one that provides sufficient evidence of sample integrity in a legal or regulatory setting) is situationally dependent. The procedures presented in this guide are generally considered sufficient to assure legal defensibility of sample integrity. In a given situation, less stringent measures may be adequate. It is the responsibility of the users of this guide to determine their exact needs. Legal counsel may be needed to make this determination.

D 4854 Standard Guide for Estimating the Magnitude of Variability from Expected Sources in Sampling Plans

The guide explains how to estimate the contributions of the variability of lot sampling units, laboratory sampling units, and specimens to the variation of the test result of a sampling plan. The guide explains how to combine the estimates of the variability from the three sources to obtain an estimate of the variability of the sampling plan results. The guide is applicable to all sampling plans that produce variables data. It is not applicable to plans that produce attribute data, since such plans do not take specimens in stages, but require that specimens be taken at random from all of the individual items in the lot.
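The idea of combining the three variance estimates can be illustrated with a generic nested (hierarchical) variance model. This is a sketch only, with variable names chosen here for illustration; ASTM D 4854 defines its own symbols, formulas, and estimation procedure.

```python
# Generic nested-variance sketch (not the exact formulas of D 4854):
#   var_lot, var_lab, var_spec -- variance contributed by lot sampling
#     units, laboratory sampling units, and specimens
#   n_lot, n_lab, n_spec -- number of units taken at each stage
def sampling_plan_variance(var_lot, var_lab, var_spec,
                           n_lot, n_lab, n_spec):
    """Variance of the mean test result for a nested sampling plan:
    each deeper stage's variance is averaged over all units above it."""
    return (var_lot / n_lot
            + var_lab / (n_lot * n_lab)
            + var_spec / (n_lot * n_lab * n_spec))

# Example: with 2 units at each stage, the lot term dominates,
# so adding lot sampling units reduces total variance fastest.
v = sampling_plan_variance(4.0, 1.0, 0.25, n_lot=2, n_lab=2, n_spec=2)
```

Under this model, the largest variance component tells you where extra sampling effort buys the most precision, which is the practical use of the estimates the guide describes.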
D 4916 Standard Practice for Mechanical Auger Sampling

This practice describes procedures for the collection of an increment, partial sample, or gross sample of material using mechanical augers. Reduction and division of the material by mechanical equipment at the auger also is covered.

D 5013 Standard Practices for Sampling Wastes from Pipes and Other Point Discharges

These practices provide guidance for obtaining samples of waste at discharge points from pipes, sluiceways, conduits, and conveyor belts. The following are included: Practice A – Liquid or Slurry Discharges, and Practice B – Solid or Semisolid Discharges. These practices are intended for situations in which there are no other applicable ASTM sampling methods for the specific industry. These practices do not address flow- and time-proportional samplers and other automatic sampling devices. Samples are taken from a flowing waste stream or moving waste mass and, therefore, are descriptive only within a certain period. The length of the period for which a sample is descriptive will depend on the sampling frequency and compositing scheme.

D 5088 Standard Practice for Decontamination of Field Equipment Used at Nonradioactive Waste Sites

This practice covers the decontamination of field equipment used in the sampling of soils, soil gas, sludges, surface water, and ground water at waste sites that are to undergo both physical and chemical analyses. This practice is applicable only at sites at which chemical (organic and inorganic) wastes are a concern and is not intended for use at radioactive or mixed (chemical and radioactive) waste sites. Procedures are included for the decontamination of equipment that comes into contact with the sample matrix (sample contacting equipment) and for ancillary equipment that has not contacted the portion of sample to be analyzed (nonsample contacting equipment). This practice is based on recognized methods by which equipment may be decontaminated. When collecting environmental matrix samples, one should become familiar with the site-specific conditions. Based on these conditions and the purpose of the sampling effort, the most suitable method of decontamination can be selected to maximize the integrity of analytical and physical testing results. This practice is applicable to most conventional sampling equipment constructed of metallic and synthetic materials. The manufacturer of a specific sampling apparatus should be contacted if there is concern regarding the reactivity of a decontamination rinsing agent with the equipment.

D 5092 Standard Practice for Design and Installation of Ground Water Monitoring Wells in Aquifers

This practice addresses the selection and characterization (by defining soil, rock types, and hydraulic gradients) of the target monitoring zone as an integral component of monitoring well design and installation. The development of a conceptual hydrogeologic model for the intended monitoring zone(s) is recommended prior to the design and installation of a monitoring well. The guidelines are based on recognized methods by which monitoring wells may be designed and installed for the purpose of detecting the presence or absence of a contaminant, and collecting representative ground water quality data. The design standards and installation procedures in the practice are applicable to both detection and assessment monitoring programs for facilities. The recommended monitoring well design, as presented in this practice, is based on the assumption that the objective of the program is to obtain representative groundwater information and water quality samples from aquifers. Monitoring wells constructed following this practice should produce relatively turbidity-free samples for granular aquifer materials ranging from gravels to silty sand and sufficiently permeable consolidated and fractured strata. Strata having grain sizes smaller than the recommended design for the smallest diameter filter pack materials should be monitored by alternative monitoring well designs not addressed by this practice.

D 5283 Standard Practice for Generation of Environmental Data Related to Waste Management Activities: Quality Assurance and Quality Control Planning and Implementation

This practice addresses the planning and implementation of the sampling and analysis aspects of environmental data generation activities. It defines the criteria that must be considered to assure the quality of the field and analytical aspects of environmental data generation activities. Environmental data include, but are not limited to, the results from analyses of samples of air, soil, water, biota, waste, or any combinations thereof. DQOs should be adopted prior to application of this practice. Data generated in accordance with this practice are subject to a final assessment to determine whether the DQOs were met. For example, many screening activities do not require all of the mandatory quality assurance and quality control steps found in this practice to generate data adequate to meet the project DQOs. The extent to which all of the requirements must be met remains a matter of technical judgment as it relates to the established DQOs. This practice presents extensive management requirements designed to ensure high-quality environmental data.

D 5314 Standard Guide for Soil Gas Monitoring in the Vadose Zone

This guide covers information pertaining to a broad spectrum of practices and applications of soil atmosphere sampling, including sample recovery and handling, sample analysis, data interpretation, and data reporting. This guide can increase the awareness of soil gas monitoring practitioners concerning important aspects of the behavior of the soil-water-gas contaminant system in which this monitoring is performed, as well as inform them of the variety of available techniques for each aspect of the practice. Appropriate applications of soil gas monitoring are identified, as are the purposes of the various applications. Emphasis is placed on soil gas contaminant determinations in certain application examples. This guide suggests a variety of approaches useful in monitoring vadose zone contaminants, with instructions that offer direction to those who generate and use soil gas data. This guide does not recommend a standard practice to follow in all cases, nor does it recommend definite courses of action. The success of any one soil gas monitoring methodology is strongly dependent upon the environment in which it is applied.

D 5358 Standard Practice for Sampling with a Dipper or Pond Sampler

This practice describes the procedure and equipment for taking surface samples of water or other liquids using a dipper. A pond sampler or dipper with an extension handle allows the operator to sample streams, ponds, waste pits, and lagoons as far as 15 feet from the bank or other secure footing. The dipper is useful in filling a sample bottle without contaminating the outside of the bottle.
D 5387 Standard Guide for Elements of a Complete Data Set for Non-Cohesive Sediments

This guide covers criteria for a complete sediment data set, and it provides guidelines for the collection of non-cohesive sediment alluvial data. This guide describes what parameters should be measured and stored to obtain a complete sediment and hydraulic data set that could be used to compute sediment transport using any prominently known sediment-transport equations.

D 5451 Standard Practice for Sampling Using a Trier Sampler

This practice covers sampling using a trier. A trier resembles an elongated scoop, and is used to collect samples of granular or powdered materials that are moist or sticky and have a particle diameter less than one-half the diameter of the trier. The trier can be used as a vertical coring device only when it is certain that a relatively complete and cylindrical sample can be extracted.

D 5495 Standard Practice for Sampling with a Composite Liquid Waste Sampler (COLIWASA)

This practice describes the procedure for sampling liquids with the composite liquid waste sampler (COLIWASA). The COLIWASA is an appropriate device for obtaining a representative sample from stratified or unstratified liquids. Its most common use is for sampling containerized liquids, such as tanks, barrels, and drums. It may also be used for pools and other open bodies of stagnant liquid. (A limitation of the COLIWASA is that the stopper mechanism may not allow collection of approximately the bottom inch of material, depending on construction of the stopper.) The COLIWASA should not be used to sample flowing or moving liquids.

D 5608 Standard Practice for Decontamination of Field Equipment Used at Low Level Radioactive Waste Sites

This practice covers the decontamination of field equipment used in the sampling of soils, soil gas, sludges, surface water, and ground water at waste sites known or suspected of containing low-level radioactive wastes. This practice is applicable at sites where low-level radioactive wastes are known or suspected to exist. By itself or in conjunction with Practice D 5088, this practice may also be applicable for the decontamination of equipment used in the vicinity of known or suspected transuranic or mixed wastes. Procedures are contained in this practice for the decontamination of equipment that comes into contact with the sample matrix (sample contacting equipment), and for ancillary equipment that has not contacted the sample, but may have become contaminated during use (noncontacting equipment). This practice is applicable to most conventional sampling equipment constructed of metallic and hard and smooth synthetic materials. Materials with rough or porous surfaces, or having a high sorption rate, should not be used in radioactive-waste sampling due to the difficulties with decontamination. In those cases in which sampling will be periodically performed, such as sampling of wells, consideration should be given to the use of dedicated sampling equipment if legitimate concerns exist for the production of undesirable or unmanageable waste byproducts, or both, during the decontamination of tools and equipment. This practice does not address regulatory requirements for personnel protection or decontamination, or for the handling, labeling, shipping, or storing of wastes or samples. Specific radiological release requirements and limits must be determined by users in accordance with local, State, and Federal regulations.
D 5633 Standard Practice for Sampling with a Scoop

This procedure covers the method and equipment used to collect surface and near-surface samples of soils and physically similar materials using a scoop. This practice is applicable to rapid screening programs, pilot studies, and other semi-quantitative investigations. The practice describes how a shovel is used to remove the top layers of soil to the appropriate sample depth and either a disposable scoop or a reusable scoop is used to collect and place the sample in the sample container.

D 5658 Standard Practice for Sampling Unconsolidated Waste from Trucks

This practice covers several methods for collecting waste samples from trucks. These methods are adapted specifically for sampling unconsolidated solid wastes in bulk loads using several types of sampling equipment.

D 5679 Standard Practice for Sampling Consolidated Solids in Drums or Similar Containers

This practice covers typical equipment and methods for collecting samples of consolidated solids in drums or similar containers. These methods are adapted specifically for sampling drums having a volume of 110 U.S. gallons (416 L) or less, and are applicable to a hazardous material, product, or waste.

D 5680 Standard Practice for Sampling Unconsolidated Solids in Drums or Similar Containers

This practice covers typical equipment and methods for collecting samples of unconsolidated solids in drums or similar containers. These methods are adapted specifically for sampling drums having a volume of 110 U.S. gallons (416 L) or less, and are applicable to a hazardous material, product, or waste.

D 5730 Standard Guide for Site Characterization for Environmental Purposes with Emphasis on Soil, Rock, the Vadose Zone and Ground Water

This guide covers a general approach to planning field investigations that is useful for any type of environmental investigation with a primary focus on the subsurface and major factors affecting the surface and subsurface environment. Generally, such investigations should identify and locate, both horizontally and vertically, significant soil and rock masses and groundwater conditions present within a given site area and establish the characteristics of the subsurface materials by sampling or in situ testing, or both. The extent of characterization and specific methods used will be determined by the environmental objectives and data quality requirements of the investigation. This guide focuses on field methods for determining site characteristics and collection of samples for further physical and chemical characterization. It does not address special considerations required for characterization of karst and fractured rock terrain.
D 5743 Standard Practice for Sampling Single or Multilayered Liquids, with or without Solids, in Drums or Similar Containers

This practice covers typical equipment and methods for collecting samples of single or multilayered liquids, with or without solids, in drums or similar containers. These methods are adapted specifically for sampling drums having a volume of 110 gallons (416 L) or less, and are applicable to a hazardous material, product, or waste.

D 5792 Standard Practice for Generation of Environmental Data Related to Waste Management Activities: Development of Data Quality Objectives

This practice covers the development of data quality objectives (DQOs) for the acquisition of environmental data. Optimization of sampling and analysis design is a part of the DQO Process. This practice describes the DQO Process in detail. The various strategies for design optimization are too numerous to include in this practice. Many other documents outline alternatives for optimizing sampling and analysis design; therefore, only an overview of design optimization is included. Some design aspects are included in the examples for illustration purposes.

D 5903 Standard Guide for Planning and Preparing for a Groundwater Sampling Event

This guide covers planning and preparing for a ground-water sampling event. It includes technical and administrative considerations and procedures. Example checklists are also provided as appendices. This guide may not cover every consideration and procedure that is necessary before all ground-water sampling projects. This guide focuses on sampling of ground water from monitoring wells; however, most of the guidance herein can apply to the sampling of springs as well.

D 5911 Standard Practice for Minimum Set of Data Elements to Identify a Soil Sampling Site

This practice covers what information should be obtained to uniquely identify any soil sampling or examination site where an absolute and recoverable location is necessary for quality control of the study, such as for a waste disposal project. The minimum set of data elements was developed considering the needs for informational data bases, such as geographic information systems. Other distinguishing details, such as individual site characteristics, help in singularly cataloging the site. For studies that are not environmentally regulated, such as for an agricultural or preconstruction survey, the data specifications established by a client and the project manager may be different from that of the minimum set. As used in this practice, a soil sampling site is meant to be a single point, not a geographic area or property, located by an X, Y, and Z coordinate position at land surface or a fixed datum. All soil data collected for the site are directly related to the coordinate position, e.g., a sample is collected from a certain number of feet (or meters) or sampled from a certain interval to feet (or meters) below the X, Y, and Z coordinate position. A soil sampling site can include a test well, augered or bored hole, excavation, grab sample, test pit, sidewall sample, stream bed, or any other site where samples of the soil can be collected or examined for the purpose intended. Samples of soil (sediment) filtered from the water of streams, rivers, or lakes are not in the scope of this practice.
D 5956 Standard Guide for Sampling Strategies for Heterogeneous Wastes

This guide is a practical, nonmathematical discussion of heterogeneous waste sampling strategies. This guide is consistent with particulate material sampling theory, as well as inferential statistics, and may serve as an introduction to the statistical treatment of sampling issues. This guide does not provide comprehensive sampling procedures, nor does it serve as a guide to any specific application.

D 6001 Standard Guide for Direct-Push Water Sampling for Geoenvironmental Investigations

This guide reviews methods for sampling ground water at discrete points or in increments by insertion of sampling devices by static force or impact without drilling and removal of cuttings. By directly pushing the sampler, the soil is displaced and helps to form an annular seal above the sampling zone. Direct-push water sampling can be one-time or multiple-sampling events. Methods for obtaining water samples for water quality analysis and detection of contaminants are presented. Field test methods described in this guide include installation of temporary well points and insertion of water samplers using a variety of insertion methods. The insertion methods include (1) soil probing using combinations of impact, percussion, or vibratory driving with or without additions of smooth static force; and (2) smooth static force from the surface using hydraulic penetrometer or drilling equipment and incremental drilling combined with direct-push water sampling events. Methods for borehole abandonment by grouting are also addressed.

D 6008 Standard Practice for Conducting Environmental Baseline Surveys

The purpose of this practice is to define good commercial and customary practice in the United States for conducting an environmental baseline survey (EBS). Such surveys are conducted to determine certain elements of the environmental condition of Federal real property, including excess and surplus property at closing and realigning military installations. This effort is conducted to fulfill certain requirements of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) section 120(h), as amended by the Community Environmental Response Facilitation Act of 1992 (CERFA). As such, this practice is intended to help a user to gather and analyze data and information in order to classify property into seven environmental condition of property area types (in accordance with the Standard Classification of Environmental Condition of Property Area Types). Once documented, the EBS is used to support Findings of Suitability to Lease, or uncontaminated property determinations, or a combination thereof, pursuant to the requirements of CERFA. Users of this practice should note that it does not address (except where explicitly noted) requirements of CERFA. The practice also does not address (except where explicitly noted) requirements for appropriate and timely regulatory consultation or concurrence, or both, during the conduct of the EBS or during the identification and use of the standard environmental condition of property area types.

D 6009 Standard Guide for Sampling Waste Piles

This guide provides guidance for obtaining representative samples from waste piles. Guidance is provided for site evaluation, sampling design, selection of equipment, and data interpretation. Waste piles include areas used primarily for waste storage or disposal, including above-grade dry land disposal units. This guide can be applied to sampling municipal waste piles, and it addresses how the choice of sampling design and sampling methods depends on specific features of the pile.

D 6044 Standard Guide for Representative Sampling for Management of Waste and Contaminated Media

This guide covers the definition of representativeness in environmental sampling, identifies sources that can affect representativeness (especially bias), and describes the attributes that a representative sample or a representative set of samples should possess. For convenience, the term "representative sample" is used in this guide to denote both a representative sample and a representative set of samples, unless otherwise qualified in the text. This guide outlines a process by which a representative sample may be obtained from a population, and it describes the attributes of a representative sample and presents a general methodology for obtaining representative samples. It does not, however, provide specific or comprehensive sampling procedures. It is the user's responsibility to ensure that proper and adequate procedures are used.

D 6051 Standard Guide for Composite Sampling and Field Subsampling for Environmental Waste Management Activities

This guide discusses the advantages and appropriate use of composite sampling, field procedures and techniques to mix the composite sample, and procedures to collect an unbiased and precise subsample from a larger sample. Compositing and subsampling are key links in the chain of sampling and analytical events that must be performed in compliance with project objectives and instructions to ensure that the resulting data are representative. This guide discusses the advantages and limitations of using composite samples in designing sampling plans for characterization of wastes (mainly solid) and potentially contaminated media. This guide assumes that an appropriate sampling device is selected to collect an unbiased sample. It does not address where samples should be collected (which depends on the objectives), selection of sampling equipment, bias introduced by selection of inappropriate sampling equipment, sample collection procedures or collection of a representative specimen from a sample, statistical interpretation of resultant data, or devices designed to dynamically sample process waste streams. It also does not provide sufficient information to statistically design an optimized sampling plan, to determine the number of samples to collect, or to calculate the optimum number of samples to composite to achieve specified data quality objectives. The mixing and subsampling described in this guide are expected to cause significant losses of volatile constituents. Specialized procedures should be used for compositing samples for determination of volatiles.
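One advantage of compositing mentioned above, reduced between-sample variability, can be demonstrated with a small simulation. This is an illustrative sketch only, not a procedure from ASTM D 6051; the population parameters and increment counts are arbitrary choices for the demonstration.

```python
# Sketch: analyzing composites of k increments spreads out less than
# analyzing single increments, roughly by a factor of sqrt(k).
import random
import statistics

random.seed(1)
# Hypothetical well-mixed population of concentration values.
population = [random.gauss(50, 10) for _ in range(10_000)]

def composite_mean(pop, k):
    """Mean of one composite formed from k randomly drawn increments."""
    return statistics.mean(random.sample(pop, k))

# Spread of 500 single-increment results vs. 500 eight-increment composites.
singles = [composite_mean(population, 1) for _ in range(500)]
composites = [composite_mean(population, 8) for _ in range(500)]
assert statistics.stdev(composites) < statistics.stdev(singles)
```

This is also why compositing cannot substitute for the volatiles-specific procedures noted above: the physical mixing that produces the statistical benefit is what drives off volatile constituents.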

D 6063 Standard Guide for Sampling of Drums and Similar Containers by Field Personnel

This guide covers information, including flow charts, for field personnel to follow in order to collect samples from drums and similar containers. The purpose of this guide is to help field personnel in planning and obtaining samples from drums and similar containers, using equipment and techniques that will ensure that the objectives of the sampling activity will be met. It can also be used as a training tool.
D 6169 Standard Guide for Selection of Soil and Rock Sampling Devices Used With Drill Rigs for Environmental Investigations

This guide covers the selection of soil and rock sampling devices used with drill rigs for the purpose of characterizing in situ physical and hydraulic properties, chemical characteristics, subsurface lithology, stratigraphy, and structure, and hydrogeologic units in environmental investigations.

D 6232 Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities

This guide covers criteria that should be considered when selecting sampling equipment for collecting environmental and waste samples for waste management activities. This guide includes a list of equipment that is used and is readily available. Many specialized sampling devices are not specifically included in this guide; however, the factors that should be weighed when choosing any piece of equipment are covered and remain the same for the selection of any piece of equipment. Sampling equipment described in this guide includes automatic samplers, pumps, bailers, tubes, scoops, spoons, shovels, dredges, and coring and augering devices. The selection of sampling locations is outside the scope of this guide.

D 6233 Standard Guide for Data Assessment for Environmental Waste Management Activities

This guide covers a practical strategy for examining an environmental project data collection effort and the resulting data to determine conformance with the project plan and impact on data usability. This guide also leads the user through a logical sequence to determine which statistical protocols should be applied to the data.

D 6250 Standard Practice for Derivation of Decision Point and Confidence Limit for Statistical Testing of Mean Concentration in Waste Management Decisions

This practice covers a logical basis for the derivation of a decision point and confidence limit when the mean concentration is used for making environmental waste management decisions. The determination of a decision point or confidence limit should be made in the context of the defined problem. The main focus of this practice is on the determination of a decision point. In environmental management decisions, the derivation of a decision point allows a direct comparison of a sample mean against this decision point. Similar decisions can be made by comparing a confidence limit against a concentration limit. This practice focuses on making environmental decisions using this kind of statistical comparison. Other factors, such as any qualitative information that also may be important to decision making, are not considered in the practice. This standard derives the decision point and confidence limit in the framework of a statistical test of hypothesis under three different presumptions. The relationship between decision point and confidence limit also is described.
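The relationship between a decision point and a confidence limit can be sketched for one common case: testing a sample mean against a regulatory limit under the presumption that the waste exceeds the limit unless the data show otherwise. This is an illustrative sketch only, not the derivation in ASTM D 6250; it uses a normal quantile (stdlib `statistics.NormalDist`) where a real small-sample application would use the t-distribution, and the regulatory limit and data are hypothetical.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def decision_point(reg_limit, s, n, alpha=0.05):
    """Decision point: the sample mean must fall below this value
    before concluding the true mean is below the regulatory limit."""
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    return reg_limit - z * s / sqrt(n)

def ucl(xbar, s, n, alpha=0.05):
    """One-sided upper confidence limit on the mean."""
    z = NormalDist().inv_cdf(1 - alpha)
    return xbar + z * s / sqrt(n)

# Hypothetical concentration data and a hypothetical limit of 10.0:
data = [4.2, 5.1, 3.8, 4.7, 4.4]
xbar, s, n = mean(data), stdev(data), len(data)

# The two comparisons the practice describes are algebraically the same:
# mean < decision point  <=>  UCL < regulatory limit.
assert (xbar < decision_point(10.0, s, n)) == (ucl(xbar, s, n) < 10.0)
```

The algebra behind the equivalence is simply that both comparisons move the same margin, z·s/√n, to opposite sides of the inequality.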

D 6282 Standard Guide for Direct Push Soil Sampling for Environmental Site Characterizations

This guide addresses direct push soil samplers, which may be driven into the ground from the surface or through pre-bored holes. The samplers can be continuous or discrete interval units. The samplers are advanced to the depth of interest by a combination of static push, or impacts from hammers, or vibratory methods, or a combination thereof. Field methods described in this guide include the use of discrete and continuous sampling tools, split and solid barrel samplers, and thin-walled tubes with or without fixed piston style apparatus. Insertion methods described include static push, impact, percussion, other vibratory/sonic driving, and combinations of these methods using direct push equipment adapted to drilling rigs, cone penetrometer units, and specially designed percussion/direct push combination machines. Hammers described by this guide for providing force for insertion include drop style, hydraulically activated, air activated, and mechanical lift devices. The guide does not cover open chambered samplers operated by hand such as augers, agricultural samplers operated at shallow depths, or side wall samplers.

D 6286 Standard Guide for Selection of Drilling Methods for Environmental Site Characterization

This guide provides descriptions of various drilling methods for environmental site characterization, along with the advantages and disadvantages associated with each method. This guide is intended to aid in the selection of drilling method(s) for environmental soil and rock borings and the installation of monitoring wells and other water-quality monitoring devices. This guide does not address methods of well construction, well development, or well completion.

D 6311 Standard Guide for Generation of Environmental Data Related to Waste Management Activities: Selection and Optimization of Sampling Design

This guide provides practical information on the selection and optimization of sample designs in waste management sampling activities, within the context of the requirements established by the data quality objectives or other planning process. Specifically, this document provides (1) guidance for the selection of sampling designs; (2) techniques to optimize candidate designs; and (3) descriptions of the variables that need to be balanced in choosing the final optimized design.

D 6323 Standard Guide for Laboratory Subsampling of Media Related to Waste Management Activities

This guide covers common techniques for obtaining representative subsamples from a sample received at a laboratory for analysis. These samples may include solids, sludges, liquids, or multilayered liquids (with or without solids). The procedures and techniques discussed in this guide depend upon the sample matrix, the type of sample preparation and analysis performed, the characteristic(s) of interest, and the project-specific instructions or data quality objectives. This guide includes several sample homogenization techniques, including mixing and grinding, as well as information on how to obtain a specimen or split laboratory samples. This guide does not apply to air or gas sampling.

D 6418 Standard Practice for Using the Disposable EnCore™ Sampler for Sampling and Storing Soil for Volatile Organic Analysis

This practice provides a procedure for using the disposable EnCore™ sampler to collect and store a soil sample of approximately 5 grams or 25 grams for volatile organic analysis. The EnCore™ sampler is designed to collect and hold a soil sample during shipment to the laboratory. It consists of a coring body/storage chamber, an O-ring-sealed plunger, and an O-ring-sealed cap. In performing the practice, the integrity of the soil sample structure is maintained and there is very limited exposure of the sample to the atmosphere. Laboratory subsampling is not required; the sample is expelled directly from the sampler body into the appropriate container for analysis.

D 6538 Standard Guide for Sampling Wastewater With Automatic Samplers

This guide covers the selection and use of automatic wastewater samplers, including procedures for their use in obtaining representative samples. Automatic wastewater samplers are intended for the unattended collection of samples that are representative of the parameters of interest in the wastewater body. While this guide primarily addresses the sampling of wastewater, the same automatic samplers may be used to sample process streams and natural water bodies.

D 6582 Standard Guide for Ranked Set Sampling: Efficient Estimation of a Mean Concentration in Environmental Sampling

This guide describes ranked set sampling, discusses its relative advantages over simple random sampling, and provides examples of potential applications in environmental sampling. Ranked set sampling is useful and cost-effective when there is an auxiliary variable that can be measured inexpensively relative to the primary variable and that is correlated with the primary variable. The resultant estimation of the mean concentration is unbiased, more precise than simple random sampling, and more representative of the population under a wide variety of conditions.
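The mechanics of a balanced ranked set sample can be sketched in a few lines of code. The following Python sketch is illustrative only and is not part of D 6582: the population list, the `set_size` and `cycles` parameters, and the use of the measured value itself as its own auxiliary ranking variable are all assumptions made for this example.

```python
import random

def ranked_set_sample_mean(population, set_size, cycles, auxiliary,
                           rng=random.Random(0)):
    """Balanced ranked set sampling: in each cycle, draw set_size random
    sets of set_size units, rank each set using only the cheap auxiliary
    variable, and measure the i-th ranked unit of the i-th set with the
    expensive primary method. Returns the mean of the measured units."""
    measured = []
    for _ in range(cycles):
        for i in range(set_size):
            candidates = rng.sample(population, set_size)
            candidates.sort(key=auxiliary)   # rank on the auxiliary variable only
            measured.append(candidates[i])   # "measure" only the i-th ranked unit
    return sum(measured) / len(measured)

# Hypothetical population; here the auxiliary variable is the value itself,
# i.e., ranking is assumed to be perfect.
population = [float(x) for x in range(1, 101)]
estimate = ranked_set_sample_mean(population, set_size=3, cycles=10,
                                  auxiliary=lambda x: x)
```

With perfect ranking, the estimator remains unbiased while its variance falls below that of a simple random sample of the same size; in practice the gain depends on how well the auxiliary variable tracks the primary one.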

D 6771 Standard Practice for Low-Flow Purging and Sampling for Wells and Devices Used for Ground-Water Quality Investigations

This practice covers the method, known as low-flow purging and sampling, for purging and sampling wells and devices used for ground-water quality investigations and monitoring programs. The method is also known by the terms minimal drawdown purging or low-stress purging. The method could be used for other types of ground-water sampling programs, but these uses are not specifically addressed in this practice. This practice applies only to wells sampled at the wellhead. This practice does not address sampling of wells containing either light or dense non-aqueous-phase liquids (LNAPLs or DNAPLs).

E 122 Standard Practice for Choice of Sample Size to Estimate the Average for a Characteristic of a Lot or Process

This practice covers methods for calculating the sample size (the number of units to include in a random sample from a lot of material) needed to estimate, with a prescribed precision, an average of some characteristic for that lot or process. The characteristic may be either a numerical value of some property or the fraction of nonconforming units with respect to an attribute. If sampling from a process, the process must be in a state of statistical control for the results to have predictive value.
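For the simple normal-theory case, the kind of calculation E 122 addresses can be illustrated with a short function. This is a generic textbook formula, not a transcription of the practice itself; the 1.96 multiplier (roughly 95 percent two-sided confidence) and the example values are assumptions made for illustration.

```python
import math

def sample_size_for_mean(sigma, margin, z=1.96):
    """Number of units n so that the sample mean estimates the true lot
    average within +/- margin, given an advance estimate sigma of the
    unit-to-unit standard deviation: n = (z * sigma / margin)**2,
    rounded up. z = 1.96 corresponds to about 95 % two-sided confidence."""
    return math.ceil((z * sigma / margin) ** 2)

# Example: advance sd estimate of 10 ppm; want the mean within +/- 2 ppm.
n = sample_size_for_mean(sigma=10.0, margin=2.0)
# (1.96 * 10 / 2)**2 = 96.04, so n = 97 units
```

Note that the result is only as good as the advance estimate of sigma; if sigma is uncertain, a larger multiplier (the practice's conventions, or z = 3) gives a more conservative sample size.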

E 178 Standard Practice for Dealing with Outlying Observations

This practice covers outlying observations in samples and how to test their statistical significance. An outlying observation, or "outlier," is an observation that appears to deviate markedly from other members of the sample in which it occurs. An outlying observation may be merely an extreme manifestation of the random variability inherent in the data. If this is true, the value should be retained and processed in the same manner as the other observations in the sample. On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value. In such cases, it may be desirable to institute an investigation to ascertain the reason for the aberrant value. The observation may even be rejected as a result of the investigation, though not necessarily so. In any case, in subsequent data analysis the outlier or outliers probably will be recognized as being from a different population than that of the other sample values. The procedures covered herein apply primarily to the simplest kind of experimental data; that is, replicate measurements of some property of a given material, or observations in a supposedly single random sample. Nevertheless, the tests suggested cover a wide enough range of cases in practice to have broad utility.
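One familiar single-outlier criterion of the type E 178 tabulates is the Grubbs-style statistic: the largest deviation from the sample mean, expressed in units of the sample standard deviation. The sketch below is illustrative only; the data values are hypothetical, and the critical values against which the statistic would be compared come from tables in the practice and are not reproduced here.

```python
import statistics

def grubbs_statistic(values):
    """Grubbs-type test statistic for a single suspected outlier: the
    largest absolute deviation from the sample mean divided by the sample
    (n-1) standard deviation. The result is compared against tabulated
    critical values (not included here) for the chosen significance level
    and sample size."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation, n-1 in denominator
    extreme = max(values, key=lambda v: abs(v - mean))
    return abs(extreme - mean) / sd, extreme

# Replicate measurements with one suspiciously high value
data = [10.1, 10.2, 9.9, 10.0, 10.3, 14.8]
t, suspect = grubbs_statistic(data)
```

A statistic exceeding the tabulated critical value flags the extreme observation for investigation; as the practice emphasizes, a flagged value is a candidate for scrutiny, not for automatic rejection.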

E 300 Standard Practice for Sampling Industrial Chemicals

This practice covers procedures for sampling several classes of industrial chemicals, as well as recommendations for determining the number and location of such samples to ensure representativeness in accordance with accepted probability sampling principles. Although this practice describes specific procedures for sampling various liquids, solids, and slurries, in bulk or in packages, these recommendations only outline the principles to be observed. They should not take precedence over specific sampling instructions contained in other ASTM product or method standards.

E 1402 Standard Terminology Relating to Sampling

This standard includes those terms related to statistical aspects of sampling. It is applicable to sampling in any matrix and provides definitions, descriptions, discussions, and comparisons of terms.

E 1727 Standard Practice for Field Collection of Soil Samples for Lead Determination by Atomic Spectrometry Techniques

This practice covers the collection of soil samples using coring and scooping methods. Soil samples are collected in a manner that will permit subsequent digestion and determination of lead using laboratory analysis techniques such as Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES), Flame Atomic Absorption Spectrometry (FAAS), and Graphite Furnace Atomic Absorption Spectrometry (GFAAS).

F 301 Standard Practice for Open Bottle Tap Sampling of Liquid Streams

This practice covers a general method for taking samples of liquid streams in such a way that the samples are representative of the liquid in the sampled stream and that the sample acquisition process does not interfere with any operations taking place in the stream. The practice is particularly applicable for sampling the feed and filtrate streams around a filter medium. The practice includes consideration of potential limits in the sample size or sample flow rate relative to the observation capability of the device used to measure particle content in the sample.
REFERENCES

Note: Due to the dynamic nature of the Internet, the location and content of World Wide Web sites given in this document may change over time. If you find a broken link to an EPA document, use the search engine at http://www.epa.gov/ to find the document. Links to web sites outside the U.S. EPA web site are provided for the convenience of the user, and the U.S. EPA does not exercise any editorial control over the information you may find at these external web sites.

Air Force Center for Environmental Excellence (AFCEE). 1995. "Disposal of Construction and Demolition Debris." Pro-Act Fact Sheet TI5040. Brooks Air Force Base, TX.

American Society for Quality (ASQ). 1988. Sampling Procedures and Tables for Inspection of Isolated Lots by Attributes. American National Standard ANSI/ASQC Standard Q3-1988. Milwaukee, WI.

ASQ. 1993. Sampling Procedures and Tables for Inspection By Attributes. American National Standard ANSI/ASQC Z1.4-1993. Milwaukee, WI.

American Society for Testing and Materials (ASTM) D 1452-80. 1980. Standard Practice for Soil Investigation and Sampling by Auger Borings. West Conshohocken, PA. http://www.astm.org/

ASTM D 1586-84. 1984. Standard Test Method for Penetration Test and Split-Barrel Sampling of Soils. West Conshohocken, PA.

ASTM D 1587-94. 1994. Standard Practice for Thin-Walled Tube Geotechnical Sampling of Soils. West Conshohocken, PA.

ASTM D 3665-95. 1995. Standard Practice for Random Sampling of Construction Materials. West Conshohocken, PA.

ASTM D 4220-95. 1995. Standard Practices for Preserving and Transporting Soil Samples. West Conshohocken, PA.

ASTM D 4342-84. 1984. Standard Practice for Collecting of Benthic Macroinvertebrates with Ponar Grab Sampler. West Conshohocken, PA.

ASTM D 4387-97. 1997. Standard Guide for Selecting Grab Sampling Devices for Collecting Benthic Macroinvertebrates. West Conshohocken, PA.

ASTM D 4448-85a. 1985. Standard Guide for Sampling Groundwater Monitoring Wells. West Conshohocken, PA.

ASTM D 4489-95. 1995. Standard Practices for Sampling of Waterborne Oils. West Conshohocken, PA.

ASTM D 4547-98. 1998. Standard Guide for Sampling Waste and Soils for Volatile Organics. West Conshohocken, PA.
ASTM D 4700-91. 1991. Standard Guide for Soil Sampling from the Vadose Zone. West Conshohocken, PA.

ASTM D 4823-95. 1995. Standard Guide for Core Sampling Submerged, Unconsolidated Sediments. West Conshohocken, PA.

ASTM D 4840-95. 1995. Standard Guide for Sampling Chain-of-Custody Procedures. West Conshohocken, PA.

ASTM D 5013-89. 1989. Standard Practices for Sampling Wastes from Pipes and Other Point Discharges. West Conshohocken, PA.

ASTM D 5088-90. 1990. Standard Practice for Decontamination of Field Equipment Used at Nonradioactive Waste Sites. West Conshohocken, PA.

ASTM D 5092-90. 1990. Standard Practice for Design and Installation of Ground Water Monitoring Wells in Aquifers. West Conshohocken, PA.

ASTM D 5283-92. 1992. Standard Practice for Generation of Environmental Data Related to Waste Management Activities: Quality Assurance and Quality Control Planning and Implementation. West Conshohocken, PA.

ASTM D 5314-92. 1992. Standard Guide for Soil Gas Monitoring in the Vadose Zone. West Conshohocken, PA.

ASTM D 5358-93. 1993. Standard Practice for Sampling with a Dipper or Pond Sampler. West Conshohocken, PA.

ASTM D 5387-93. 1993. Standard Guide for Elements of a Complete Data Set for Non-Cohesive Sediments. West Conshohocken, PA.

ASTM D 5451-93. 1993. Standard Practice for Sampling Using a Trier Sampler. West Conshohocken, PA.

ASTM D 5495-94. 1994. Standard Practice for Sampling with a Composite Liquid Waste Sampler (COLIWASA). West Conshohocken, PA.

ASTM D 5633-94. 1994. Standard Practice for Sampling with a Scoop. West Conshohocken, PA.

ASTM D 5658-95. 1995. Standard Practice for Sampling Unconsolidated Waste from Trucks. West Conshohocken, PA.

ASTM D 5679-95a. 1995. Standard Practice for Sampling Consolidated Solids in Drums or Similar Containers. West Conshohocken, PA.

ASTM D 5680-95a. 1995. Standard Practice for Sampling Unconsolidated Solids in Drums or Similar Containers. West Conshohocken, PA.
ASTM D 5730-96. 1996. Standard Guide for Site Characterization for Environmental Purposes with Emphasis on Soil, Rock, the Vadose Zone and Ground Water. West Conshohocken, PA.

ASTM D 5743-97. 1997. Standard Practice for Sampling Single or Multilayered Liquids, With or Without Solids, in Drums or Similar Containers. West Conshohocken, PA.

ASTM D 5792-95. 1995. Standard Practice for Generation of Environmental Data Related to Waste Management Activities: Development of Data Quality Objectives. West Conshohocken, PA.

ASTM D 5956-96. 1996. Standard Guide for Sampling Strategies for Heterogeneous Waste. West Conshohocken, PA.

ASTM D 6009-96. 1996. Standard Guide for Sampling Waste Piles. West Conshohocken, PA.

ASTM D 6044-96. 1996. Standard Guide for Representative Sampling for Management of Waste and Contaminated Media. West Conshohocken, PA.

ASTM D 6051-96. 1996. Standard Guide for Composite Sampling and Field Subsampling for Environmental Waste Management Activities. West Conshohocken, PA.

ASTM D 6063-96. 1996. Standard Guide for Sampling of Drums and Similar Containers by Field Personnel. West Conshohocken, PA.

ASTM D 6169-98. 1998. Standard Guide for Selection of Soil and Rock Sampling Devices Used With Drill Rigs for Environmental Investigations. West Conshohocken, PA.

ASTM D 6232-98. 1998. Standard Guide for Selection of Sampling Equipment for Waste and Contaminated Media Data Collection Activities. West Conshohocken, PA.

ASTM D 6233-98. 1998. Standard Guide for Data Assessment for Environmental Waste Management Activities. West Conshohocken, PA.

ASTM D 6250-98. 1998. Standard Practice for Derivation of Decision Point and Confidence Limit for Statistical Testing of Mean Concentration in Waste Management Decisions. West Conshohocken, PA.

ASTM D 6282-98. 1998. Standard Guide for Direct Push Soil Sampling for Environmental Site Characterizations. West Conshohocken, PA.

ASTM D 6286-98. 1998. Standard Guide for Selection of Drilling Methods for Environmental Site Characterization. West Conshohocken, PA.

ASTM D 6311-98. 1998. Standard Guide for Generation of Environmental Data Related to Waste Management Activities: Selection and Optimization of Sampling Design. West Conshohocken, PA.
ASTM D 6323-98. 1998. Standard Guide for Laboratory Subsampling of Media Related to Waste Management Activities. West Conshohocken, PA.

ASTM D 6418-99. 1999. Standard Practice for Using the Disposable EnCore™ Sampler for Sampling and Storing Soil for Volatile Organic Analysis. West Conshohocken, PA.

ASTM E 1727-95. 1995. Standard Practice for Field Collection of Soil Samples for Lead Determination by Atomic Spectrometry Techniques. West Conshohocken, PA.

Barth, D. S., B. J. Mason, T. H. Starks, and K. W. Brown. 1989. Soil Sampling Quality Assurance User's Guide. 2nd ed. EPA 600/8-89/046. NTIS PB89-189864. Environmental Monitoring Systems Laboratory. Las Vegas, NV.

Blacker, S. and D. Goodman. 1994a. "An Integrated Approach for Efficient Site Cleanup." Environmental Science & Technology 28(11).

Blacker, S. and D. Goodman. 1994b. "Case Study: Application at a Superfund Site." Environmental Science & Technology 28(11).

Cameron, K. 1999. Personal communication between Dr. Kirk Cameron (Statistical Scientist, MacStat Consulting, Ltd.) and Bob Stewart (Science Applications International Corporation), March 9.

Cochran, W. G. 1977. Sampling Techniques. 3rd ed. New York: John Wiley & Sons, Inc.

Cohen, A. C., Jr. 1959. "Simplified Estimator for the Normal Distribution When Samples Are Single Censored or Truncated." Technometrics 1:217-37.

Conover, W. J. 1999. Practical Nonparametric Statistics. 3rd ed. New York: John Wiley & Sons, Inc.

Crockett, A. B., H. D. Craig, T. F. Jenkins, and W. E. Sisk. 1996. Field Sampling and Selecting On-Site Analytical Methods for Explosives in Soil. EPA/540/R-97/501. Office of Research and Development and Office of Solid Waste and Emergency Response. Washington, DC.

Crumbling, D. M. Current Perspectives in Site Remediation and Monitoring: Clarifying DQO Terminology Usage to Support Modernization of Site Cleanup Practice. EPA 542-R-01-014. Office of Solid Waste and Emergency Response, Technology Innovation Office. October.

Department of Defense (DoD). 1996. DoD Preferred Methods for Acceptance of Product. Department of Defense Test Method Standard MIL-STD-1916 (April).

Edland, S. D. and G. van Belle. 1994. "Decreased Sampling Costs and Improved Accuracy with Composite Sampling." In: Environmental Statistics, Assessment and Forecasting. Boca Raton, FL: Lewis Publishers.

Edmondson, B. 1996. "How to Spot a Bogus Poll." American Demographics. October.
Efron, B. 1981. "Nonparametric Estimates of Standard Error: The Jackknife, the Bootstrap, and Other Resampling Plans." Biometrika. Transactions on Reliability 40:547-552.

Exner, J. H., W. D. Keffer, R. O. Gilbert, and R. R. Kinnison. 1985. "A Sampling Strategy for Remedial Action at Hazardous Waste Sites: Clean-up of Soil Contaminated by Tetrachlorodibenzo-p-Dioxin." Hazardous Waste & Hazardous Materials 2(2):503-21.

Fabrizio, M. C., A. M. Frank, and J. F. Savino. 1995. "Procedures for Formation of Composite Samples from Segmented Populations." Environmental Science & Technology 29(5):1137-44.

Federal Remediation Technologies Roundtable (FRTR). 1999. http://www.frtr.gov/

Filliben, J. J. 1975. "The Probability Plot Correlation Coefficient Test for Normality." Technometrics 17:111-17.

Flatman, G. T. and A. A. Yfantis. 1996. "Geostatistical Sampling Designs for Hazardous Waste Sites." In: Principles of Environmental Sampling. 2nd ed. L. H. Keith, ed. Washington, DC: American Chemical Society.

Garner, F. C., M. A. Stapanian, and L. R. Williams. 1988. "Composite Sampling for Environmental Monitoring." In: Principles of Environmental Sampling. L. H. Keith, ed. Washington, DC: American Chemical Society.

Garner, F. C., M. A. Stapanian, E. A. Yfantis, and L. R. Williams. 1989. "Probability Estimation with Sample Compositing Techniques." Journal of Official Statistics 5(4):365-74.

Gerlach, R. W., D. E. Dobb, G. A. Raab, and J. M. Nocerino. 2002. "Gy Sampling Theory in Environmental Studies. 1. Assessing Soil Splitting Protocols." Journal of Chemometrics 16:321-328. John Wiley & Sons, Ltd.

Gilbert, R. O. 1987. Statistical Methods for Environmental Pollution Monitoring. New York: Van Nostrand Reinhold.

Gilliom, R. J. and D. R. Helsel. 1986. "Estimation of Distributional Parameters for Censored Trace Level Water Quality Data: Part I, Estimation Techniques." Water Resources Research 22(2):135-46.

Guttman, I. 1970. Statistical Tolerance Regions: Classical and Bayesian. London: Charles Griffin & Co.

Gy, P. 1982. Sampling of Particulate Materials: Theory and Practice. 2nd ed. New York: Elsevier.

Gy, P. 1998. Sampling for Analytical Purposes. Chichester, England: John Wiley & Sons, Inc.

Hahn, G. J. and W. Q. Meeker. 1991. Statistical Intervals: A Guide for Practitioners. New York: John Wiley & Sons, Inc.
Helsel, D. R. 1990. "Less than Obvious: Statistical Treatment of Data Below the Detection Limit." Environmental Science & Technology 24(12):1766-74.

Ingamells, C. O. and F. F. Pitard. 1986. Applied Geochemical Analysis. Vol. 88. New York: John Wiley.

Ingamells, C. O. 1974. "New Approaches to Geochemical Analysis and Sampling." Talanta 21:141-55.

Ingamells, C. O. and P. Switzer. 1973. "A Proposed Sampling Constant for Use in Geochemical Analysis." Talanta 20:547-68.

Isaaks, E. H. and R. M. Srivastava. 1989. An Introduction to Applied Geostatistics. New York: Oxford University Press.

Jenkins, T. F., C. L. Grant, G. S. Brar, P. G. Thorne, T. A. Ranney, and P. W. Schumacher. 1996. Assessment of Sampling Error Associated with Collection and Analysis of Soil Samples at Explosives-Contaminated Sites. Special Report 96-15. September. U.S. Army Corps of Engineers Cold Regions Research and Engineering Laboratory (USACE CRREL). Hanover, NH. http://www.crrel.usace.army.mil/techpub/CRREL_Reports/reports/SR96_15.pdf

Jenkins, T. F., M. E. Walsh, P. G. Thorne, S. Thiboutot, G. Ampleman, T. A. Ranney, and C. L. Grant. 1997. Assessment of Sampling Error Associated with Collection and Analysis of Soil Samples at a Firing Range Contaminated with HMX. Special Report 97-22. September. USACE CRREL. Hanover, NH. http://www.crrel.usace.army.mil/techpub/CRREL_Reports/reports/SR97_22.pdf

Jessen, R. J. 1978. Statistical Survey Techniques. New York: John Wiley & Sons, Inc.

Journel, A. G. 1988. "Non-parametric Geostatistics for Risk and Additional Sampling Assessment." In: Principles of Environmental Sampling. L. H. Keith, ed. Washington, DC: American Chemical Society.

Keith, L. H., ed. 1996. Principles of Environmental Sampling. 2nd ed. Washington, DC: American Chemical Society.

King, J. A. 1993. "Wastewater Sampling." The National Environmental Journal 3(1).

Koski, W. M., R. Troast, and W. Keffer. 1991. "Contaminated Structures and Debris - Site Remediation." In: Hazardous Materials Control/Superfund '91 - Proceedings of the 12th National Conference. Hazardous Materials Control Research Institute. Greenbelt, MD.

Land, C. E. 1971. "Confidence Intervals for Linear Functions of the Normal Mean and Variance." The Annals of Mathematical Statistics 42:1187-1205.

Land, C. E. 1975. "Tables of Confidence Limits for Linear Functions of the Normal Mean and Variance." In: Selected Tables in Mathematical Statistics. Vol. III. Providence, RI: American Mathematical Society.
Madansky, A. 1988. Prescription for Working Statisticians. New York: Springer-Verlag.

Mason, B. J. 1992. Preparation of Soil Sampling Protocols: Sampling Techniques and Strategies. EPA/600/R-92/128. NTIS PB92-220532. U.S. Environmental Protection Agency, Office of Research and Development. Las Vegas, NV. http://www.epa.gov/swerust1/cat/mason.pdf

McIntyre, G. A. 1952. "A Method for Unbiased Selective Sampling Using Ranked Sets." Australian Journal of Agricultural Research 3:385-390.

Miller, R. 1974. "The Jackknife - A Review." Biometrika 61:1-15.

Miller, R. G., Jr. 1986. Beyond ANOVA, Basics of Applied Statistics. New York: John Wiley & Sons.

Myers, J. C. 1997. Geostatistical Error Management: Quantifying Uncertainty for Environmental Sampling and Mapping. New York: Van Nostrand Reinhold.

Natrella, M. G. 1966. Experimental Statistics. National Bureau of Standards Handbook 91. United States Department of Commerce. Washington, DC.

Neptune, D., E. P. Brantly, M. J. Messner, and D. I. Michael. 1990. "Quantitative Decision Making in Superfund: A Data Quality Objectives Case Study." Hazardous Materials Control 44(2):358-63.

Newman, M. C., K. D. Greene, and P. M. Dixon. 1995. UnCensor Version 4.0. University of Georgia, Savannah River Ecology Laboratory. Aiken, SC. UnCensor is public domain software. http://www.vims.edu/env/research/risk/software/vims_software.htm

Occupational Safety and Health Administration (OSHA). 1985. Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities. Revised 1998. Prepared by the National Institute for Occupational Safety and Health, the Occupational Safety and Health Administration, the U.S. Coast Guard, and the U.S. Environmental Protection Agency. Washington, DC.

Ott, L. 1988. An Introduction to Statistical Methods and Data Analysis. 3rd ed. Boston: PWS-Kent Publishing Co.

Perez, A. and J. Lefante. 1996. "How Much Sample Size Is Required To Estimate the True Arithmetic Mean of a Lognormal Distribution?" In: Proceedings from the 1996 Joint Statistical Meetings in Chicago. American Statistical Association. Alexandria, VA.

Perez, A. and J. Lefante. 1997. "Sample Size Determination and the Effect of Censoring When Estimating the Arithmetic Mean of a Lognormal Distribution." Communications in Statistics - Theory and Methods 26(11):2779-2801.

Pitard, F. F. 1989. Pierre Gy's Sampling Theory and Sampling Practice. Vols. 1 and 2. Boca Raton, FL: CRC Press LLC.
Pitard, F. F. 1993. Pierre Gy's Sampling Theory and Sampling Practice: Heterogeneity, Sampling Correctness, and Statistical Process Control. 2nd ed. Boca Raton, FL: CRC Press LLC.

Porter, P. S., S. T. Rao, J. Y. Ku, R. L. Poirot, and M. Dakins. 1997. "Small Sample Properties of Nonparametric Bootstrap t Confidence Intervals - Technical Paper." Journal of the Air & Waste Management Association 47:1197-1203.

Puls, R. W. and M. J. Barcelona. 1996. Low-Flow (Minimal Drawdown) Ground-Water Sampling Procedures. EPA/540/S-95/504. U.S. Environmental Protection Agency, Office of Research and Development, Office of Solid Waste and Emergency Response. Washington, DC. http://www.epa.gov/r10earth/offices/oea/gwf/lwflw2a.pdf

Ramsey, C. A., M. E. Ketterer, and J. H. Lowry. 1989. "Application of Gy's Sampling Theory to the Sampling of Solid Waste Materials." In: Proceedings of the EPA Fifth Annual Waste Testing and Quality Assurance Symposium, Vol. II. U.S. Environmental Protection Agency. Washington, DC.

Rendu, J-M. 1980. Optimization of Sampling Policies: A Geostatistical Approach. Tokyo: MMIJ-AIME.

Rupp, G. 1990. Debris Sampling at NPL Sites - Draft Interim Report. Prepared for the U.S. Environmental Protection Agency, Exposure Assessment Division, Environmental Monitoring Systems Laboratory, Las Vegas, by the Environmental Research Center, University of Nevada, Las Vegas, NV, under Cooperative Agreement Number 814701.

Ryan, T. A. and B. L. Joiner. 1990. "Normal Probability Plots and Tests for Normality." Minitab Statistical Software: Technical Reports November 1-1 to 1-14.

Schilling, E. G. 1982. Acceptance Sampling in Quality Control. New York: Marcel Dekker.

Schulman, R. S. 1992. Statistics in Plain English with Computer Applications. New York: Van Nostrand Reinhold.

Schumacher, B. A., K. C. Shines, J. V. Burton, and M. L. Papp. 1991. "A Comparison of Soil Sample Homogenization Techniques." In: Hazardous Waste Measurements. M. S. Simmons, ed. Boca Raton, FL: Lewis Publishers.

Shapiro, S. S. and R. S. Francia. 1972. "An Approximate Analysis of Variance Test for Normality." Journal of the American Statistical Association 67(337):215-16.

Shapiro, S. S. and M. B. Wilk. 1965. "An Analysis of Variance Test for Normality (Complete Samples)." Biometrika 52:591-611.

Shefsky, S. 1997. "Sample Handling Strategies for Accurate Lead-In-Soil Measurements in the Field and Laboratory." Presented at the International Symposium of Field Screening Methods for Hazardous Wastes and Toxic Chemicals, Las Vegas, NV.
Singh, A. K., A. Singh, and M. Engelhardt. 1997. The Lognormal Distribution in Environmental Applications. EPA/600/R-97/006. U.S. Environmental Protection Agency, Office of Research and Development. Washington, DC. http://www.epa.gov/nerlesd1/pdf/lognor.pdf

Skalski, J. R. and J. M. Thomas. 1984. Improved Field Sampling Designs and Compositing Schemes for Cost Effective Detection of Migration and Spills at Commercial Low-Level Radioactive or Chemical Waste Sites. PNL-4935. Battelle Pacific Northwest Laboratory. Richland, WA.

United States Department of Energy (USDOE). 1996. "Statistical Methods for the Data Quality Objective Process." In: DQO Statistics Bulletin, Vol. 1. PNL-SA-26377-2. Prepared by Joanne Wendelberger, Los Alamos National Laboratory, for the U.S. Department of Energy. Washington, DC.

United States Environmental Protection Agency (USEPA). 1980. Samplers and Sampling Procedures for Hazardous Waste Streams. EPA-600/2-80-018. Municipal Environmental Research Laboratory. Cincinnati, OH.

USEPA. 1984. Characterization of Hazardous Waste Sites - A Methods Manual. Volume I: Site Investigations. EPA/600/4-84-075. Environmental Monitoring Systems Laboratory, Office of Research and Development. Las Vegas, NV. (Available on CD-ROM. See USEPA 1998c.)

USEPA. 1985. Characterization of Hazardous Waste Sites - A Methods Manual. Volume II: Available Sampling Methods. EPA/600/4-84-076. Environmental Monitoring Systems Laboratory, Office of Research and Development. Las Vegas, NV. (Available on CD-ROM. See USEPA 1998c.)

USEPA. 1986a. Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, Updates I, II, IIA, IIB, III, and IIIA. SW-846. NTIS publication no. PB97-156111 or GPO publication no. 955-001-00000-1. Office of Solid Waste. Washington, DC. http://www.epa.gov/epaoswer/hazwaste/test/sw846.htm

USEPA. 1986b. Permit Guidance Manual on Unsaturated Zone Monitoring for Hazardous Waste Land Treatment Units. EPA/530-SW-86-040. Washington, DC.

USEPA. 1987. RCRA Guidance Manual for Subpart G Closure and Post-Closure Care Standards and Subpart H Cost Estimating Requirements. 530-SW-87-010. NTIS PB87-158978.

USEPA. 1988. Methodology for Developing Best Demonstrated Available Technology (BDAT) Treatment Standards. EPA/530-SW-89-017L. Treatment Technology Section, Office of Solid Waste. Washington, DC.

USEPA. 1989a. Methods for Evaluating the Attainment of Cleanup Standards. Volume 1: Soils and Solid Media. EPA 230/02-89-042. NTIS PB89-234959. Statistical Policy Branch, Office of Policy, Planning, and Evaluation. Washington, DC. http://www.epa.gov/tio/stats/vol1soils.pdf
USEPA. 1989b. Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities (Interim Final Guidance). Office of Solid Waste. NTIS PB89-151047.

USEPA. 1989c. RCRA Facility Investigation Guidance. Vols. 1-4. EPA 530/SW-89-031. OSWER Directive 9502.00-6D. NTIS PB89-200299. Office of Solid Waste. Washington, DC. (Available on CD-ROM. See USEPA 1998g.)

USEPA. 1990. "Corrective Action for Solid Waste Management Units at Hazardous Waste Management Facilities: Proposed Rule." Federal Register (55 FR 30798, July 27, 1990).

USEPA. 1991a. GEO-EAS 1.2.1 User's Guide. EPA/600/8-91/008. Environmental Monitoring Systems Laboratory. Las Vegas, NV.

USEPA. 1991b. Description and Sampling of Contaminated Soils - A Field Pocket Guide. EPA/625/12-91/002. Center for Environmental Research Information. Cincinnati, OH.

USEPA. 1991c. Final Best Demonstrated Available Technology (BDAT) Background Document for Quality Assurance/Quality Control Procedures and Methodology. NTIS PB95-230926. Office of Solid Waste. Washington, DC.

USEPA. 1991d. Site Characterization for Subsurface Remediation. EPA/625/4-91/026. Office of Research and Development. Washington, DC.

USEPA. 1992a. Supplemental Guidance to RAGS: Calculating the Concentration Term. 1(1). OERR Publication 9285.7-08I. NTIS PB92-963373. Office of Emergency and Remedial Response. Cincinnati, OH.

USEPA. 1992b. Statistical Analysis of Ground-Water Monitoring Data at RCRA Facilities: Addendum to Interim Final Guidance (July 1992). Office of Solid Waste. http://www.epa.gov/epaoswer/hazwaste/ca/resource/guidance/sitechar/gwstats/gwstats.htm

USEPA. 1992c. RCRA Ground-Water Monitoring: Draft Technical Guidance. EPA/530/R-93-001. Office of Solid Waste. Washington, DC.

USEPA. 1992d. Specifications and Guidance for Contaminant-Free Sample Containers. Publication 9240.05A. EPA/540/R-93/051.

USEPA. 1992e. Multi-Media Investigation Manual. EPA/330/9-89/003-R. National Enforcement Investigation Center. Denver, CO.

USEPA. 1992f. Management of Investigation-Derived Wastes. Directive 9345.3-03FS. NTIS PB92-963353. Office of Solid Waste and Emergency Response. Washington, DC.

USEPA. 1992g. Guidance for Data Usability in Risk Assessment. Final. 9285.7-09A and B. Office of Emergency and Remedial Response. Washington, DC.

USEPA. 1992h. 40 CFR Parts 268 and 271: Land Disposal Restrictions; No Migration Variances; Proposed Rule. Federal Register: August 11, 1992.
USEPA. 1992i. Methods for Evaluating the Attainment of Cleanup Standards. Volume 2: Ground Water. EPA 230-R-92-14. Office of Policy, Planning, and Evaluation. Washington, DC.

USEPA. 1993a. Data Quality Objectives Process for Superfund. Interim Final Guidance. EPA/540/G-93/071. Office of Solid Waste and Emergency Response. Washington, DC.

USEPA. 1993b. Guidance Specifying Management Measures for Sources of Nonpoint Pollution in Coastal Waters. EPA-840-B-93-001c. Office of Water. Washington, DC.

USEPA. 1993c. Subsurface Characterization and Monitoring Techniques - A Desk Reference Guide. Vols. 1 and 2. EPA/625/R-93/003a and EPA/625/R-93/003b. Office of Research and Development. Washington, DC.

USEPA. 1993d. Petitions to Delist Hazardous Waste - A Guidance Manual. 2nd ed. EPA 530-R-93-007. NTIS PB93-169365. Office of Solid Waste. Washington, DC.

USEPA. 1994a. Waste Analysis at Facilities That Generate, Treat, Store, and Dispose of Hazardous Wastes, a Guidance Manual. OSWER 9938.4-03. Office of Solid Waste and Emergency Response. Washington, DC. http://www.epa.gov/epaoswer/hazwaste/ldr/wap330.pdf

USEPA. 1994b. "Drum Sampling." Environmental Response Team SOP #2009, Revision #0.0. Edison, NJ. http://www.ert.org/

USEPA. 1994c. "Tank Sampling." Environmental Response Team SOP #2010, Revision #0.0. Edison, NJ. http://www.ert.org/

USEPA. 1994d. "Waste Pile Sampling." Environmental Response Team SOP #2017, Revision #0.0. Edison, NJ. http://www.ert.org/

USEPA. 1994e. "Sediment Sampling." Environmental Response Team SOP #2016, Revision #0.0. Edison, NJ. http://www.ert.org/

USEPA. 1994f. "Sampling Equipment Decontamination." Environmental Response Team SOP #2006, Revision #0.0. Edison, NJ. http://www.ert.org/

USEPA. 1995a. Determination of Background Concentrations of Inorganics in Soils and Sediment at Hazardous Waste Sites. EPA/540/S-96/500. Office of Research and Development and Office of Solid Waste and Emergency Response. Washington, DC. http://www.epa.gov/nerlesd1/pdf/engin.pdf
USEPA.
1995b.
EPA
Observational
Economy
Series,
Volume
2:
Ranked
Set
Sampling.
EPA/
230­
R­
95­
006.
Office
of
Policy
Planning
and
Evaluation.
Washington,
DC.

USEPA.
1995c.
EPA
Observational
Economy
Series,
Volume
1:
Composite
Sampling.
EPA230
R­
95­
005.
Office
of
Policy
Planning
and
Evaluation.
Washington,
DC.

USEPA.
1995d.
QA/
QC
Guidance
for
Sampling
and
Analysis
of
Sediments,
Water,
and
Tissues
for
Dredged
Material
Evaluations.
EPA/
823­
B­
95­
001.
USEPA. 1995e. Superfund Program Representative Sampling Guidance, Volume 5: Water and Sediment, Part I – Surface Water and Sediment. Interim Final Guidance. Environmental Response Team. Office of Emergency and Remedial Response and Office of Solid Waste and Emergency Response. Washington, DC.

USEPA. 1995f. Guidance for the Sampling and Analysis of Municipal Waste Combustion Ash for the Toxicity Characteristic. EPA 530-R-95-036. Office of Solid Waste.

USEPA. 1996a. Soil Screening Guidance: User's Guide (9355.4-23). Office of Solid Waste and Emergency Response. Washington, DC. http://www.epa.gov/superfund/resources/soil/index.htm

USEPA. 1996b. Environmental Investigations Standard Operating Procedures and Quality Assurance Manual. Region 4, Science and Ecosystem Support Division. Athens, GA. http://www.epa.gov/region04/sesd/eisopqam/eisopqam.html

USEPA. 1996c. "Soil Gas Sampling." Environmental Response Team SOP #2042, Revision #0.0. Edison, NJ.

USEPA. 1996d. Region 6 RCRA Delisting Program Guidance Manual for the Petitioner. Region 6, RCRA Multimedia Planning and Permitting Division. Dallas, TX. (Updated March 23, 2000.)

USEPA. 1997a. "Geostatistical Sampling and Evaluation Guidance for Soils and Solid Media." Draft. Prepared by Dr. Kirk Cameron, MacStat Consulting, Ltd., and SAIC for the Office of Solid Waste under EPA contract 68-W4-0030. Washington, DC.

USEPA. 1997b. Data Quality Assessment Statistical Toolbox (DataQUEST), EPA QA/G-9D. User's Guide and Software. EPA/600/R-96/085. Office of Research and Development. Las Vegas, NV. http://www.epa.gov/quality/dqa.html

USEPA. 1998a. EPA Guidance for Quality Assurance Project Plans, EPA QA/G-5. EPA/600/R-98/018. Office of Research and Development. Washington, DC. http://www.epa.gov/quality/qs-docs/g5-final.pdf

USEPA. 1998b. Final Technical Support Document for HWC MACT Standards, Volume VI: Development of Comparable Fuels Standards. Office of Solid Waste and Emergency Response. Washington, DC. (May 1998.)

USEPA. 1998c. Site Characterization Library, Volume 1. Release 2. EPA/600/C-98/001. Office of Research and Development, National Exposure Research Laboratory (NERL). Las Vegas, NV.

USEPA. 2000a. Guidance for the Data Quality Objectives Process for Hazardous Waste Site Operations, EPA QA/G-4HW. EPA/600/R-00/007. Quality Staff, Office of Environmental Information, United States Environmental Protection Agency. Washington, DC. January 2000. http://www.epa.gov/quality/qs-docs/g4hw-final.pdf
USEPA. 2000b. Guidance for the Data Quality Objectives Process, EPA QA/G-4. EPA/600/R-96/055. Quality Staff, Office of Environmental Information, United States Environmental Protection Agency. Washington, DC. August 2000. http://www.epa.gov/quality/qs-docs/g4-final.pdf

USEPA. 2000c. Guidance for Choosing a Sampling Design for Environmental Data Collection, EPA QA/G-5S. PEER REVIEW DRAFT. Quality Staff, Office of Environmental Information, United States Environmental Protection Agency. Washington, DC. August 2000.

USEPA. 2000d. Guidance for Data Quality Assessment, EPA QA/G-9 (QA00 Update). Quality Staff, Office of Environmental Information, United States Environmental Protection Agency. Washington, DC. July 2000. http://www.epa.gov/quality1/qs-docs/g9-final.pdf

USEPA. 2001a. Data Quality Objectives Decision Error Feasibility Trials Software (DEFT) User's Guide. EPA/240/B-01/007. (User's guide and software.) Office of Environmental Information. Washington, DC. http://www.epa.gov/quality/qa_docs.html

USEPA. 2001b. EPA Requirements for Quality Assurance Project Plans, EPA QA/R-5. EPA/240/B-01/003. Office of Environmental Information. Washington, DC. http://www.epa.gov/quality/qa_docs.html

USEPA. 2001c. Guidance on Environmental Data Verification and Data Validation, EPA QA/G-8. PEER REVIEW DRAFT. Quality Staff, Office of Environmental Information, United States Environmental Protection Agency. Washington, DC. June 2001.

USEPA. 2001d. Land Disposal Restrictions: Summary of Requirements. EPA530-R-01-007. Office of Solid Waste and Emergency Response and Office of Enforcement and Compliance Assurance. Revised August 2001.

USEPA. 2001e. Guidance on Data Quality Indicators, EPA QA/G-5i. PEER REVIEW DRAFT. Office of Environmental Information. Washington, DC. September 2001.

USEPA. 2001f. EPA Requirements for Quality Management Plans, EPA QA/R-2. EPA/240/B-01/002. Office of Environmental Information. Washington, DC. March 2001. http://www.epa.gov/quality/qa_docs.html

USEPA. 2001g. Contract Laboratory Program (CLP) Guidance for Field Samplers – Draft Final. OSWER 9240.0-35. EPA540-R-00-003. Office of Solid Waste and Emergency Response. June 2001. http://www.epa.gov/oerrpage/superfund/programs/clp/guidance.htm

USEPA. 2002a. Guidance on Demonstrating Compliance With the Land Disposal Restrictions (LDR) Alternative Soil Treatment Standards, Final Guidance. EPA530-R-02-003. Office of Solid Waste. July 2002. http://www.epa.gov/epaoswer/hazwaste/ldr/soil_f4.pdf

USEPA and USDOE. 1992. Characterizing Heterogeneous Wastes: Methods and Recommendations. EPA/600/R-92/033. NTIS PB 92-216894. EPA Office of Research and Development, Las Vegas, NV, and USDOE Office of Technology Development, Washington, DC.
van Ee, J. J., L. J. Blume, and T. H. Starks. 1990. A Rationale for the Assessment of Errors in the Sampling of Soils. EPA 600/4-90/013. Environmental Monitoring Systems Laboratory. Las Vegas, NV.

Visman, J. 1969. "A General Sampling Theory." Materials Research and Standards, MTRSA 9(11): 8-13.

Wald, A. 1973. Sequential Analysis. New York: Dover Publications.

Williams, L. R., R. W. Leggett, M. L. Espegren, and C. A. Little. 1989. "Optimization of Sampling for the Determination of Mean Radium-226 Concentration in Surface Soil." Environmental Monitoring and Assessment 12: 83-96. Dordrecht, the Netherlands: Kluwer Academic Publishers.
INDEX
Note: Bold page numbers indicate where the primary discussion of the subject is given.
Acceptance sampling, 27
Accuracy, 22, 57, 134, 157-158, 160
Action level, 22, 31, 35, 39-41, 45-47, 49, 51, 54, 61-63, 72, 78-79, 81-82, 84, 157, 163, 253, 278-282, 284, 296-297, 302
Additivity of errors in sampling and analysis
  of biases, 89
  of variances, 89
Alpha (α), 42, 83
Alternative hypothesis, 43, 157
Analytical methods, 1, 12, 36, 40, 51, 70, 86-87, 108, 122, 131, 139, 144, 161, 164, 169
Analytical design, 50, 51, 183, 298
Arithmetic mean, 77, 165, 170, 187, 243
ASTM, 2, 16, 17, 35, 60, 63-65, 69, 74, 84, 101, 103, 106, 107, 122, 124-126, 130, 134-137, 157-159, 163-164, 166, 168-169, 175, 191-192, 195-196, 201-240
  how to contact and obtain standards, 103
  summaries of standards, 305-322
Attribute, 27, 39, 311, 321
Auger, bucket, 100, 111-113, 115, 225-226, 287-288
Automatic sampler, 109-110, 159, 202, 319, 321
Auxiliary variable, 54, 60, 321
Background, 15, 24, 28, 33, 37, 41, 42, 44, 181, 183
Bacon bomb sampler, 109, 110, 115, 209
Bailer, 109, 110, 115, 230, 234-235, 319
Beta (β), 42, 162
Bias, 22-24, 41, 49-50, 88-89, 95, 108, 118, 119, 123, 128, 141, 142, 144, 150, 157, 160, 164-165, 167-168, 200, 240, 249, 252, 274, 318
  analytical, 23, 89, 163
  sampling, 23, 89, 93-94, 104, 119, 124, 128, 244, 300
  statistical, 23, 89
Binomial distribution, 18
Bladder pump, 109, 110, 115, 202-203
Bootstrap, 152, 250, 252
Bottles, see containers
Boundaries
  defining, 15, 26, 30, 36-37, 45, 49, 52, 59, 63, 66, 76, 79, 82, 158, 160, 279, 295
  spatial, 14, 23, 32, 36-37, 39, 49, 158
  temporal, 14, 23, 32, 36-38, 49, 158
Box and whisker plot, 147, 148
Bucket, 110-112, 301
Calibration, 23, 86, 124, 140-143, 158
Central limit theorem (CLT), 67, 244
Centrifugal pump, 109, 110, 116, 205
CERCLA, 2, 317
Chain-of-custody, 4, 86, 122, 124, 125-127, 132, 139-141, 143, 146, 158, 180, 310, 311
Cleanup (of a waste site), 8, 13, 28, 32, 33, 37-40, 43-44, 51, 57, 62, 64, 68, 79, 82, 196, 261, 277
Closure, 7, 8, 10, 61, 181, 185
Coefficient of variation (CV), 147, 158, 250, 284
Cohen's Adjustment, 152-153, 241, 257-261
COLIWASA, 100, 108-111, 116, 228-229, 314
Component stratification, 58, 194-196
Comparing
  populations, 24, 28, 150
  to a fixed standard, 24, 25, 27, 65, 71, 150, 152, 153, 155, 241, 242, 247-249, 251, 253-255, 258
Composite sample, 64-73, 80, 108, 115, 140, 158-159, 172, 187, 249, 284, 288-289, 318
Composite sampling, 52, 64-73
  advantages, 65
  approach, 66-67
  limitations, 65-66
  number of samples, 73
  simple random, 67
  systematic, 68-69
Computer codes, see software
Conceptual site model (CSM), 32
Cone and quartering, 134
Confidence interval, 25-27, 61-62, 70, 150, 155, 247-250, 252-254, 259
Confidence level, 47-48, 61, 74, 84, 159
Confidence limits, 25, 69, 155, 159
  for a lognormal mean, 75, 249
  for a normal mean using simple random or systematic sampling, 247-249
  for a normal mean using stratified random sampling, 248
  for a percentile, 253-255
  nonparametric confidence limits, 252
  using composite sampling, 249
Consensus standard, 17, 103, 159
Containers, sample, 23, 62, 84, 96, 104, 122-123, 128, 131-132, 138, 141
Control samples, 74, 96, 124-125, 139, 142, 280
  duplicate, 51, 74, 142, 143, 161, 162
  equipment blank, 51, 74, 96, 125, 142, 162, 286
  field blank, 51, 74, 96, 125, 162
  rinsate, 96, 168, 286
  spikes, 74, 142, 143, 162, 163
  trip blank, 51, 74, 96, 125, 142, 162
Conveyor, 37, 52, 60, 95, 96, 98, 103, 104, 106-107, 111, 112, 312
  belt, 52, 95, 98, 106-107, 312
  screw, 106-107
Coring type sampler, 111-113, 116, 214, 221
Corrosivity, 7, 8, 13, 26, 27, 35, 40, 66, 173, 293
Corrective action (RCRA), 1, 8, 10, 29, 40, 44, 79, 185, 277
Data quality assessment, 1, 2, 4, 139, 145, 160, 241, 275, 289, 302
Data quality objectives, 1, 2, 10, 24, 25, 145, 154, 160
  process, 30-87, 160
  seven steps, 30
Data (also see distributions)
  collection design, 38, 50, 51, 159
  gaps, 50, 143
DataQUEST software, 146-149, 244, 270
Debris, 10, 58, 94, 97, 104, 106, 107, 113, 121, 160, 191-196
  sampling methods, 191-196
Decision error, 31, 38, 41-48, 73, 75, 76, 82, 142, 155, 160
Decision maker, 28, 31, 32, 39-41, 43, 45, 49
Decision unit, 4, 15, 16, 26, 38-39, 41, 47-49, 57, 67, 68, 76, 79, 81, 82, 84, 90, 91, 94, 99, 146, 161, 193, 194, 244
Decision rule, 30, 39-41, 49, 76, 79, 82, 83, 150, 279, 295
Decision support, see decision unit
Decontamination, 23, 51, 100, 117, 118, 122, 124, 125, 128-130, 141, 312, 314
DEFT software, 31, 45, 73, 84, 273, 284
Degrees of freedom (df), 268
  simple random or systematic sampling, 248, 249
  stratified random sampling, 78, 79, 243
Delta (δ), 45
Detection limit, 40, 161, 258
Dilution, 10, 58, 71, 72
Dipper, 106, 109-112, 116, 236-237, 313
Dispersion, 19, 22, 169, 170, 193
Displacement pump, 109, 110, 116, 206-207
Distributions, 14, 16, 17
  binomial, 18
  non-normal, 18, 252
  normal, 17-21, 67, 75, 81, 147, 148, 150, 158, 170, 244
  lognormal, 17-19, 75, 149, 150, 154, 195, 244, 249-250
Distributional assumptions, 87, 145, 148, 244
Distribution heterogeneity, 91
Documentation, 86, 87, 95, 96, 122, 124-126, 139-144, 336
DOT, 131, 133, 174
Drum thief, 108, 230-231
Drums, 15, 37, 39, 72, 73, 95, 99, 100, 103, 104-105, 314, 315, 316
Duplicate, 51, 74, 142, 143, 161, 162
Dynamic work plan, 161
Ease of use, 100
Effluent, 68, 94
Enforcement, 10-12, 27, 43, 63
Errors, 3, 13, 16, 88-101
  analytical, 3, 69, 88, 90
  components of, 88, 89
  contamination, 94, 96
  decision, 31, 38, 41-48, 73, 75, 76, 82, 142, 155, 160
  delimitation, 94-96, 99, 100, 102, 106, 136, 137, 211, 229
  extraction, 94, 95, 99, 100, 102, 136, 137
  fundamental, 69, 91, 92-94, 96-98, 135, 136, 197-200
  preparation, 94, 95, 96
  segregation and grouping, 91
Example calculations
  Cohen's Adjustment, 261
  confidence level when using a simple exceedance rule, 256
  locating a hot spot using composite sampling, 73
  mean, 19
  mean and variance using composite sampling, 71
  number of samples for simple random sampling, 76
  number of samples for stratified random sampling, 79
  number of samples to estimate a percentile, 82
  number of samples using a "no exceedance" rule, 82
  Shapiro-Wilk test, 246-247
  standard deviation, 20
  upper confidence limit for a normal mean, 249
  upper confidence limit for a lognormal mean, 251
  upper confidence limit for a percentile, 255
  variance, 20
Examples of the DQO/DQA processes, 277-304
Exceedance rule method, 27-28, 255-256
Exploratory study, 74
False positive (false rejection), 42, 162
False negative (false acceptance), 42, 162
Familiarization (analytical), 50
Field QC samples, see control samples
Filliben's Statistic, 148, 244
Finite population correction, 77
Flash point, 66
Flowing or moving materials, sampling of, 15, 52, 91, 95, 96, 98, 106, 309, 312, 314
Fragments, 92, 94, 99, 134, 141, 163, 192, 197
Frequency plot, 148
Fundamental error, 69, 91, 92-94, 96-98, 135, 136, 197-200
  controlling, 97
  definition, 163
  derivation, 197-200
  description, 92
Gases, 104, 114, 121, 173, 174
Geometric standard deviation (GSD), 75
Geostatistics and geostatistical methods, 15, 29, 58, 59, 80, 90, 151, 163, 192, 273
Goodness-of-fit, 163, 244
Grab sample, 64, 66, 73, 80, 163, 176
Graded approach, 32, 163
Gravitational segregation, 91
Gray region, 41, 45-47, 49, 75, 76, 79, 81-84, 163, 281, 297
Grid, 56, 57, 59, 68, 80, 159, 274
Ground-water monitoring, 7, 10, 15, 28, 39, 44, 45, 114, 121, 180, 181, 185, 309, 316, 321
Grouping error, 65, 91, 93, 96, 134, 137, 138
Gy's sampling theory, 88-101
Haphazard sampling, 57
Hazardous waste
  determination, 8
  regulations, 6-10, 171-189
Hazardous waste characteristics, 164-165
  corrosivity, 7, 8, 13, 26, 27, 35, 40, 66, 173
  ignitability, 7, 8, 13, 26, 27, 35, 40, 66, 173
  reactivity, 7, 8, 13, 26, 27, 35, 40, 66, 174
  toxicity, 7, 8, 13, 26, 27, 35, 40, 66, 73, 120, 173
Health and safety, 38, 50, 84, 97, 130
Heterogeneity, 4, 26, 52, 53, 66, 68, 69, 88, 90-91, 93, 106, 137, 138, 163, 191-196
  large-scale, 91, 191, 192
  periodic, 91
  short-range, 68, 91, 93, 191
Heterogeneous waste, 4, 57, 58, 94, 107, 191-196
Histogram, 17, 18, 147, 148, 255
Holding time, 66, 74, 122, 123-124, 131, 141, 143, 163
Homogenization, 4, 23, 66, 69, 91, 92, 102, 134, 320
  stationary processes, 134
  dynamic processes, 134
Homogeneity, 164, 192
Homogeneous, 92, 93, 97, 98, 134, 136
Hot spots, 38, 39, 53, 57, 59, 65, 67, 71-73, 164, 274
Hypothesis, 40, 41
  alternative, 43, 157
  null, 41-47, 49, 76, 79, 82, 150, 152-155, 157
Hypothesis testing versus statistical intervals, 25
Ignitability, 7, 8, 13, 26, 27, 35, 40, 66, 173
Increments, 61, 65, 91, 93, 94, 96, 134, 135, 138, 158, 164, 194
Independence or independent samples, 69, 71
International Air Transport Association (IATA), 131, 133
Interpolation, 261
Investigation-derived waste (IDW), 118, 129-130
Jackknife, 152, 250, 252
Judgment sampling, 48, 51, 55, 63-64
Kemmerer depth sampler, 100, 108, 109, 117, 210-211
Labels, sample, 96, 124, 125, 131, 141, 310, 314
Land Disposal Restrictions (LDRs), 7, 8, 9-10, 13, 26, 27, 35, 40, 44, 66, 82, 113, 160, 171, 176, 177
Landfill, 28, 34, 52, 82, 104, 106
Land treatment, 8, 28, 33, 37, 41, 52, 121, 183
Large-scale heterogeneity, 91, 191, 192
Less-than values, see nondetects
Liquid grab sampler, 109-111, 237
Liquids, 90, 98, 100, 109, 110, 120, 136
Logbook, 124, 140, 143, 146
Lognormal distribution, 17-19, 75, 149, 150, 154, 195, 244, 249-250
Maps, 29, 33, 37, 58, 59, 124, 141
Margin of error, 13
Mass of a sample, 4, 23, 36, 92, 96-97, 136, 137, 197-200
Mean, 14, 17, 18-19, 40, 165
Mean square error, 89, 165
Measurement, 15-16
  bias, 23
  random variability, 23-24
Median, 17, 19, 39, 40, 88, 155, 165, 249, 252
Miniature core sampler, 111-113, 117, 222-223
Modified syringe sampler, 111-113, 117, 224
Multi-phase mixtures, 98
Nondetects, 146, 147, 150, 154, 257-258
Nonparametric methods, 18, 83, 150, 153, 165, 252, 255, 256
Nonprobability sampling, 51, 55, 63, 193
Normal distribution, 17-18, 20, 21, 67, 75, 147, 148, 150, 244
Normal probability plot, 18, 147, 148, 290-291
Nuggets, 92
Number of samples
  composite sampling, 80
  mean, normal distribution, using simple random sampling or systematic sampling, 73, 80
  mean, normal distribution, using stratified random sampling, 77
  mean, lognormal distribution, 75
  percentile or proportion, 81
  using an exceedance rule, 83
Optimal design, 50, 78, 96
Outliers, 145, 147, 148-149, 165, 250, 322
OSHA, 130
Packaging and shipping, 131
  sample packaging, 131
  sample shipping, 133
Parameter (statistical), 21, 23, 24, 25, 27, 39-40, 166
Particle size distribution, 16, 94-95
Particle size reduction, 69, 91, 93, 96, 97, 98, 136, 137, 138, 192, 198, 200
Particulate, 90, 95, 97, 134, 137, 317
Pass or fail data, 18, 28, 35, 40, 81, 153
Percentile, 20, 21, 26-27, 39-40, 45, 81, 151, 153, 166, 253
Performance-based measurement system (PBMS), 86
Peristaltic pump, 109-111, 118, 202, 204-205
pH, 66, 173, 174
Photoionization detector, 60
Piles
  elongated, 52, 138
  staging, 37, 120
  waste, 16, 37, 104, 106, 168, 178, 187, 317
Pilot study, 43, 50, 74, 80, 93, 315
Pipes, 37, 52, 60, 94, 95, 98, 104, 105, 106, 109-112, 120, 196, 312
Plunger type sampler, 109-111, 118, 232-234
Point estimate, 21, 27, 252
Point of (waste) generation, 6, 15, 33, 37, 39, 52, 73, 76, 82, 104, 106, 171, 193, 255, 295, 299, 300
Point source discharge, 106, 182, 236, 238
Ponar dredge, 111, 118, 207-209, 308, 309
Populations, 13, 14-15, 16, 17, 24, 28, 194, 250
Pore water, 15, 42, 182
Precision, 11, 14, 22-24, 25, 26, 52, 58, 64, 65, 69, 70, 74, 80, 125, 134, 166, 194
Preliminary study, see pilot study
Preparation error, 94, 95, 96
Preservation, 92, 94, 96, 123-124, 131, 180, 308, 309
Probability plot, 18, 21, 147-149, 245, 255, 257
Process knowledge or knowledge of the waste, 1, 9, 10, 13, 27, 28, 34, 40, 43, 64, 175, 293
Proving the negative, 11-12, 13, 295
Proving the positive, 11-12, 13, 63
Quality assurance project plan (QAPP), 1, 3, 4, 30, 33, 34, 48, 50, 51, 84-87, 139-142, 144, 146, 166
Quality control, 1, 11, 24, 30, 51, 87, 96, 122, 124-125, 167, 313
Quick Safety Rule (Pitard's), 97, 198
Random number, 57
Random variability, 3, 24, 26, 88-89, 322
Randomization, 51
Range, 17, 41, 43, 45, 75, 167
Ranked set sampling, 54
  description, 60
  procedure, 61
RCRA
  summary of regulatory citations, 171-189
Reactivity, 7, 8, 13, 26, 27, 35, 40, 66, 174
Regulatory threshold, 11, 26, 27, 35, 63, 72, 82, 124
Relative standard deviation, 97, 156, 167
Relative variance, 97, 197, 279
Remediation, 31, 33, 37, 44, 167, 179
Repeatability, see precision
Representative sample, 7, 9, 13, 16, 17, 168, 173-175, 178, 179, 180, 191
Riffle splitter, 134-135
Rinsate, 96, 168, 286
Risk assessment, 29, 139
Roll-off bin or container, 15, 37, 39, 52, 82, 95, 96, 99, 104, 106, 113, 255
Rosner's Test, 149
Rotating coring device, 113, 118, 225, 227-228
Sample
  biased, 55, 64
  correct, 96
  discrete, 26, 64, 66, 100
  duplicate, 51, 74, 142, 161
  grab, 64, 66, 73, 80, 163, 176
  individual, 47, 64
  random, 19, 57-60, 67, 77, 79, 80, 243
  representative, 7, 9, 13, 16, 17, 168, 173-175, 178, 179, 180, 191
  split, 72, 95, 123, 125, 135, 168
  statistical, 14, 16, 19, 21, 27, 169
Sample collection design, see sampling design
Sampling design, 51
  authoritative, 62
  biased, 64
  judgmental, 63
  probabilistic, 51
  ranked set, 60-61
  simple random, 57
  stratified, 57-58
  systematic, 59-60
Sampling in space and time, 52
Sampling devices, 109-114
  limitations, 102
  selecting, 95
Scientific method, 160, 168
Scoop, 98, 100, 107, 111-113, 118, 135, 137, 239-240, 315, 319
Sediment, 104, 105, 114, 121, 133
Segregation error, 91
Sequential sampling, 54, 61-62
Settleable solids profiler, 109-111, 118, 233-234
Shapiro-Wilk test, 147, 148, 244-246
Sheet mixing, 134
Shelby tube, 100
Shipping samples, 133
Short-range heterogeneity, 68, 91, 93, 191
Shovel, 99, 100, 111-113, 119, 239-241
Significance level, 47
Simple random sampling, 57
Slurry, 52, 106, 111, 120, 312
Software
  ASSESS, 275
  DataQUEST, 275
  DEFT, 31, 45, 73, 84, 273
  DQOPro, 274
  ELIPGRID-PC, 274
  GeoEAS, 29, 273
  MTCAStat, 275
  UnCensor, 257
  Visual Sample Plan (VSP), 274
Soil
  background concentrations, 28, 33, 37, 41
  volatiles in soil, 101
Soil gas, 104, 114, 121, 310, 312, 313, 314
Solid waste, 1, 8-9, 13, 15, 16, 26, 173, 174, 178
Solid waste management unit (SWMU), 15, 33, 37, 44, 52, 67, 79, 113, 185, 277
Spatial correlation, 29, 68, 80, 163
Spatula, 137, 138, 239
Split barrel sampler, 104, 112, 113, 119, 216-217, 306
Splitting of samples, 135
Standard deviation
  definition, 19-20, 169
  for composite sampling, 70
  for simple random or systematic sampling, 19-20, 242
  for stratified random sampling, 243
Standard error of the mean, 21, 242
  description, 21
  for composite sampling, 71
  for simple random or systematic sampling, 21, 242
  for stratified random sampling, 77, 243
Standard operating procedures (SOPs), 51, 86, 87, 124, 135, 136, 140, 142, 169
Statistical intervals, 25
Statistical methods, 241-261
Statistical tables, 263-272
Statistical software, 273-275
Stratification, 194, 196
  by component, 58
Stratified random sampling, 53, 57-58
Stratum, 57, 58, 59, 77-79, 169, 194, 195, 243
Student's t distribution, 248-250, 263
Subsampling, 135
  liquids, 136
  mixtures of liquids and solids, 136
  soils and solid media, 136
Superfund, 2, 15, 38, 94
Support, 16
  decision, see decision unit
  sample, 94-95
Swing jar sampler, 109-111, 119, 238
Syringe sampler, 109-113, 119, 211-212
Systematic sampling, 53, 59-60
Tank(s), 7, 37, 52, 104, 105, 106, 109-111, 115, 117, 120, 121, 129, 182
Target population, 36, 37, 53, 57, 58
t distribution, see Student's t distribution
Thief, 100, 108-113, 116, 117, 217-219, 230-231
Thin-walled tube, 112, 113, 119, 219-221
Time (sampling over), 52
Tolerance limit, 27
Transformations of data, 150, 249
Trends, 29, 53, 57, 59, 60, 91, 150
Trier, 100, 111-113, 119, 218-219, 314
Trowel, 99, 100, 111-113, 119, 239-240
Two-sample tests, 28, 151
Type I error, 42, 43, 44, 47, 75, 76, 79, 83, 162, 170
Type II error, 42, 43, 44, 47, 75, 76, 78, 83, 155, 162, 170
Universal treatment standards (UTS), 33, 151, 177, 256
Upper confidence limit (UCL), see confidence limit
Used oil, 7, 8, 120, 172, 189
Vadose zone, 107, 114, 121, 170, 217, 221, 226, 310, 313, 315
Valved drum sampler, 109, 110, 119, 231-232
Variance, 19-20, 23
  additivity of variances, 89
  for composite samples, 70
  simple random or systematic sampling, 242
  stratified random sampling, 243
Verification and validation, 2, 87, 139-144
Volatiles, sampling, 101
Volume or mass of a sample, 94, 96-97, 108
Walsh's Test, 149
Waste
  debris, 10, 58, 94, 97, 104, 106, 107, 113, 121, 160, 191-196
  investigation-derived, 118, 129-130
  hazardous, 6-10, 171-189
  heterogeneous, 4, 57, 58, 94, 107, 191-196
  multi-phase, 98
  nonhazardous, 13, 34, 38, 58, 82, 129, 194, 255
  one-dimensional, 52, 56, 95, 96, 98, 102, 138
  three-dimensional, 95, 96, 99
  two-dimensional, 56, 59, 95, 99, 102
Waste analysis plan (WAP), 1, 3, 4, 10, 30, 50, 84, 85, 139
Weighting factor, 58, 77-79, 243
X-ray fluorescence, 60
