DETECTION/
QUANTITATION
WORKSHOP
May
10,
2001
OPENING
REMARKS
MR.
TELLIARD:
Good
morning.
We
would
like
to
get
the
session
started.

I
have
a
number
of
announcements.
Checkout
time
is
at
12:00
noon.

There
are
evaluation
forms
in
the
back
of
your
packets.
We
would
really
appreciate
your
input
on:

1.
What
you
thought
of
this
year's
format
2.
If
you
would
like
workshops,
which
ones
would
you
think
would
be
useful
3.
Any
particular
subject
matter
that
you
think
has
been
lacking
in
coverage
here;
and,
4.
Of
course,
suggestions
for
the
entertainment
program,
all
of
which
could
be
ignored,
but
that
is
all
right.

The
box
for
the
evaluation
forms
is
out
by
the
registration
desk.

I
have
been
informed
that
I
have
been
negligent
again
by
not
making
people
go
to
the
microphones
when
there
are
questions
or
comments.
You
need
to
go
to
the
microphone
in
the
aisles
and
identify
who
you
are
and
your
organization.

This
morning,
we
are
going
to
have
a
session
on
a
subject
which
is
near
and
dear
to
many
of
our
hearts.
Probably
next
to
tuning
a
mass
spectrometer,
this
is
the
most
exciting
subject
I
can
think
of
talking
about.
Also,
fertilization
of
magnolias
would
fall
close
to
a
third,
but
we
are
going
to
go
through
this
today,
mainly
because
there
are
a
number
of
things
afoot,
and
we
need
to
talk
about
it
and
to
try
to
get
some
views
from
people.
What
we
are
trying
to
do
here
is
to
get
an
across-the-board
look
at
what
people
think
about
quantitation
and
detection,
various
approaches,
and
how
they
are
going
to
fit
into
the
overall
game
plan.

Up
front,
I
will
tell
you
that
we
are
familiar
with
NTTAA,
the
National
Technology
Transfer
and Advancement
Act,
which
states
that
the
Agency
should
use
procedures
and
standards
as
presented
by
consensus
organizations.
A
consensus
organization
is
something
like
Standard
Methods,
ASTM,
AOAC,
and
so
forth,
but
it
also
extends
to
other
organizations
of
less
reputable
names
like
the
American
Petroleum
Institute,
the
American
Iron
and
Steel
Institute,
the
Automobile
Manufacturers
and
Swindlers
Organization.

All
of
those
organizations
which
set
out
and
list
standards
can
present
those
standards
to
the
Agency
for
consideration,
one,
or,
two,
the
Agency
is
actually
required
to
go
out
and
look
for
those
standards
before
we
create
one.
So,
with
that
in
mind,
the
one
ruling
figure
is
that
if
the
standard
is
already
in
existence
and
it
meets "the needs of the Agency,"
then
the
Government
is
supposed
to
use
that
standard
and
not
create
another
one.

Why
are
we
doing...
don't
die
now.
There
we
go.
Impetus
for
this
workshop,
like
a
lot
of
things
we
do,
we
got
sued.
There
was
a
challenge
to
Method
1631
which,
of
course,
as
you
know,
is
our
beloved
mercury
method,
and
as
part
of
the
settlement
agreement,
we
reached
an
understanding
that
we
would
look
at
the
whole
question
of
detection
and
quantitation.
The
petitioners
are
listed
up
there
as
well
as
the
interveners.
Don't
ask
me
what
the
difference
is.

The
specific
detection
limit,
MDL,
and
the
minimum
level
in
the
method
really
were
not
addressed.
So,
the
settlement
agreement
remained
the
same
as
far
as
the
method
is
concerned
as
far
as
detection
and
quantitation.

Now,
as
a
result
of
the
settlement
agreement,
there
are
a
couple
of
things
we
have
to
do.
One
is
we
are
going
to
re-propose
some
material
as
it
relates
to
Method
1631
in
September
which
will
cover
certain
phases
of
the
method,
that
is
to
say,
make
some
things
that
were
optional
a
requirement.

And
this
is
a
proposal.
It
is
not
a
final
rule.
Therefore,
we
would
expect
you
to
review
it
and
comment
on
it.
The
agreement
in
the
settlement
agreement
was
that
we
would
propose
these
changes,
not
that
we
would
make
them.
So,
we
look
to
the
audience
to
say,
yeah,
that's
probably
good,
or
no,
that's
not
so
good,
and
we
would
like
to
hear
from
you.

In
addition,
we
agreed
to
look
at
the
requirements
regarding
detection
and
quantitation,
and
this
is
kind
of
the
schedule
that
we
are
working
off
of.
The
reassessment
of
the
existing
detection
and
quantitation
procedures,
which
this
is
part
of
today.
The
Agency
is
going
to
put
together
a
procedure
and
send
it
out
for
peer
review,
and
we
are
scheduled
to
propose
the
approach
or
protocol
in
February,
2003
and
invite
comment
and
alternative
approaches,
suggestions,
and
so
forth,
and
we
are
going
to
give
a
180-day
comment
period.

Generally,
we
give
30,
sometimes
60
days.
So,
this,
with
180
days,
gives
people
a
chance
to
get
visas
and
leave
the
country.
That
is
what
we
were
thinking.

The
final
rule
is
set
for
September
30th,
2004.
We
may
have
to
change
that,
because
some
of
us
may
want
to
leave
about
that
time,
so
that
we
are
not
here
to
get
caught
with
it.

In
the
overview
of
this
thing,
there
are
mostly
three
key
relationships
in
analytical
chemistry,
and
I
will
state
up
front
here
I
am
not
a
statistician.
I
have
been
seen
in
the
company
of
statisticians.
I
have
talked
to
statisticians.
At
times,
I
have
actually
been
seen
discussing
statistical
issues.
But
I
am
not
a
statistician.
My
mother
told
me
when
I
was
young,
"
Son,
you
don't
want
to
be
a
statistician,"
and
she
was
right.

One
of
the
areas
that
we
are
looking
at
is
the
response
versus
the
concentration,
which
is
nearly
always
linear,
the
standard
deviation
versus
concentration
or
our
affectionate
hockey
puck...
I
am
sorry...
hockey
stick...
hockey
puck
is
the
product
of
the
hockey
stick...
the
relative
standard
deviation
versus
the
concentration
or
the
ski
slope.
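A compact way to write the three relationships just described (the straight-line form of the standard deviation above its flat region is only an illustrative assumption, not a claim about any particular method):

```latex
\begin{align*}
  \text{response vs. concentration:}\quad & y(c) \approx a + b\,c
      \quad\text{(nearly always linear)}\\
  \text{``hockey stick'':}\quad & \sigma(c) \approx \sigma_0 \ \text{for small } c,
      \ \text{then rising roughly linearly with } c\\
  \text{``ski slope'':}\quad & \mathrm{RSD}(c) = \sigma(c)/c
      \quad\text{(large near zero, flattening as } c \text{ increases)}
\end{align*}
```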

Now,
if
you
look
at
the
graphs,
this
is
the
linear
graph.
This
is
why
to
have
overheads
would
be
really
great,
because
if
you
take
all
three
of
these
and
lay
them
over
and
put
them
up,
they
really
make
a
nice
picture.

This
is
our
favorite
hockey
puck...
I
am
sorry...
hockey
stick,
and,
again,
the
concentration
versus
the
response
factor.
The
next
one
is
the
ski
slope.
And
all
of
this
kind
of
keys
around
the
fact
that
all
the
current
detection
concepts,
regardless
of
philosophy,
which
I
think
is
stressing
a
statement,
are
based
on
the
assumption
that
variability
of
the
results
is
effectively
constant
in
that
flat
part
of
the
stick.

Detection
procedures
provide
a
decision
rule
based
on
the
controlling
of
the
false
positives
or
Type
I
for
determining
that
an
analyte
has
been
detected.
Examples
of
that
are
the
EPA
MDL
and
Dr.
Currie's
critical
level.
Many
also
attempt
to
characterize
a
false
negative
or
Type
II
error
rate,
and
examples
of
that
are
the
ASTM
Interlab
Detection
Estimation
or
the
IDE
and
also
Dr.
Currie's
criteria
for
detection.

All
the
quantitation
concepts
are
based
on
either
a
multiple
use
or
the
standard
deviations
of
replicate
measurements
of
the
blanks
of
a
sample
spiked
with
a
low
concentration,
and
that
is
basically
what
we
are
presently
doing
for
examples
of
our
minimum
level,
the
ACS
approach
to
their
LOQ,
and
also
the
approach
using
the
lowest
concentration
at
which
a
model
of
variability
versus
concentration
yields
a
relative
standard
deviation
of
10
percent
RSD.
An
example
of
that
is
the
ASTM
IQE,
which
you
will
hear
about
today.
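As a rough sketch, the two families of quantitation limits just listed take these forms (the multiplier of 10 is the usual ACS convention and is assumed here for illustration):

```latex
\begin{align*}
  \text{multiplier form (ML, ACS LOQ):}\quad & \mathrm{LOQ} = k \cdot s_{\text{blank or low spike}},
      \qquad k \approx 10\\
  \text{RSD-model form (ASTM IQE):}\quad & \mathrm{LOQ} = \min\{\,c : \sigma(c)/c \le 0.10\,\}
\end{align*}
```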

Issues.
This
is
only
one
slide.
I
thought
we
could
cut
it
down.
One
is,
where
do
you
start?
What
is
the
selection
of
the
concentration?
The
other
thing
is
the
assumption
of
a
constant
variance
at
the
low
concentrations
versus
the
modeling.
Characterizing
false
negatives;
the
roles
of
the
confidence,
prediction
and
tolerance
limits,
how
they
should
play
in
the
overall
use
of
setting
this
area;
and
inclusion
of
the
interlaboratory
variability
factor,
which
we
have
all
kind
of
agreed
over
the
years
would
be
handy
if
not
necessary.

Now,
EPA
uses
detection
and
quantitation
limits
in
various
ways.
One
is
the
water
quality-based
effluent
limitations
which
are
sometimes
below
the
level
of
measurement.
In
those
cases,
we
recommend
that
the
individual
use,
for
the
purposes
of
reporting,
the
quantitation
limit
for
compliance
purposes.

The
other
thing
is
the
performance
characteristics
of
the
analytical
methods,
which
vary
and
the
quality
control
testing
in
the
laboratory
and
also
the
quality
control
tests
required
in
the
methods
and
the
level
of
that,
and,
basically,
the
fundamental
characteristics
of
the
measurements
for
regulatory
development
and
for
compliance
assessments.

This
is
a
comparison
set
of
data
that
we
ran
on
an
inductively-coupled
plasma
unit
mass
spectrometer,
and
it
shows
the
various
concepts,
the
MDL,
the
IDE,
the
EDL,
the
IDE,
the
MSMST.
Anyhow...
and
as
you
can
see,
the
IDEs
are
generally
higher
than
the
MDLs
and
the
EDLs
by
a
factor
of
between
2
and
10.

Now,
some
of
us
have
run
them
both
ways,
and
it
depends
on,
really,
the
model
you
pick,
is
it
a
poor-fitting
model.
Interlab
components
of
IDE
and
IQE
sometimes
push
it
up.
Also,
the
statistical
prediction
tolerance
intervals,
how
they
play
in
where
it
will
come
out
and,
actually,
the
study
design.

So,
folks
have
run
these
and
come
in
and
said
well,
no,
they
are
actually
basically
fairly
the
same.
So,
it
depends
on
how
these
various
elements
are
played
within
the
overall
system.

For
our
concern,
our
being
the
Agency's,
the
whole
criteria
for
detection
and
quantitation
has
to
meet
some
kind
of
mandate,
and
one
is
that
the
mandate
must
protect
human
health
and
the
environment.
What
this
basically
means
is
that
the
method
must
be
able
to
measure
at
the
regulatory
or
compliance
level,
and
that
regulatory
level
may
be
water
quality,
it
may
be
whole
effluent
toxicity,
it
may
be
a
number
related
to
an
effluent
guideline.
We
would
assume
this
is
based
on
sound
science.

The
other
thing
is
there
may
not
be
an
answer.
I
think
the
more
I
look
at
this
is
that
various
programs
and
various
situations
are
going
to
probably
dictate
different
approaches,
and
I
think
we
should
be
open
to
that.
I
don't
think
that
maybe
one
shoe
fits
all
here.

In
addition,
it
has
got
to
be
fairly
practical.
When
I
say
fairly
practical,
it
means
that
the
average
analyst...
and
some
of
these
are
not
necessarily
post-docs
doing
some
of
this
work...
has
to
be
able
to
utilize
it.
It
can't
be
so
complicated
that
nobody
can
make
it,
quote,
work.

And
the
other
thing,
on
the
bottom,
it
must
have
some
reasonable
cost
associated
with
it,
cost
both
in
the
operating
level
which
means
the
QA
that
is
tied
to
it
and
also
in
the
validation
process.
Validating
a
method
nowadays
is
fairly
expensive.
It
has
never
been
cheap,
but...
and
the
budgets
to
do
that
are
getting
smaller
and
smaller.
So,
the
whole
question
of
practicality
and
cost
becomes,
again,
a
driving
force.

This
also,
I
think,
brings
out
the
point
that
the
Agency
really
is
looking
to
the
consensus
standards
organizations
and
would
rather
work
with
them,
that
is
to
say,
on
some
of
these
combined
studies,
not
that
we
love
you,
but,
certainly,
our
budgets
are
such
that
it
will
make
it
a
lot
easier,
both
for
the
consensus
organizations
and
the
Agency,
to
get
new
methods
out,
whether
they
be
chemical
methods
or
microbiological
methods
or
whole
effluent
toxicity
methods,
and
on
the
books
in
a
reasonable
time.

And
that
is
the
other
part,
timing.
Many
of
our
situations
are
driven
by
court-ordered
mandates,
rules,
Congressional
requirements.
In
doing
that,
time
sometimes
pushes
us
to
the
other
extreme.

So,
how
can
you
help
us?
Well,
some
of
you
already
have
expressed
yourselves
quietly,
and
we
would
like
to
continue
to
hear
from
you.
We
would
like
to
look
at
the
limited
concepts
of
the
MDL
and
the
ML
and
the
Clean
Water
Act
and
other
concepts,
if
they
are
available.
Suggested
improvements
in
existing
procedures.
Suggested
combinations
or
alterations
of
existing
consensus
organizational
procedures.

Also,
should
EPA
allow
any
procedure
or
just
promulgate
them
all
as
Part
136
Appendix
B,
take
it
all
and
put
it
out
there,
promulgate
it
in
the
sense
of
a
proposal
and
see
what
we
get?
We
are
open
to
that.
Just
more
printing
at
the
Federal
Register
office,
but...

So,
that
is
what
we
are
here
for
today.
This
is
an
open
forum.
We
accept
opinions,
we
accept
comments,
we
accept
praise,
and
we
hope
to
have
a
good
forum
this
morning.
DETECTION
AND
QUANTIFICATION
CONCEPTS
AND
NOMENCLATURE:
INTERNATIONAL
HARMONIZATION
AND
RECOMMENDATIONS
MR.
TELLIARD:
Our
first
speaker
is
Lloyd
Currie.
Lloyd
is
formerly
with
the
National...
I
call
it
the
National
Bureau
of
Standards.
I
am
sorry.
I
have
never
been
able
to
change
over,
and
Lloyd
always
tells
everyone,
and
I
will
say
it
now,
that
he
is
a
chemist.
He
is
not
a
statistician,
and
he
is
going
to
talk
a
little
bit...
and
Lloyd
has
done
extensive
work
in
this
area
of
detection
and
quantitation,
and
he
is
going
to
start
us
off
this
morning.
Lloyd?

DR.
CURRIE:
Thank
you.

I
am
delighted
to
be
here,
and
I
am
pleased
that
Bill
announced
that
I
am
a
chemist
and
I
do
look
at
fine
tuning
an
accelerator
mass
spectrometer.
That
becomes
really
interesting.

I
invite
interruption
or
questions
as
we
go
along,
if
you
wish.

There
are
three
parts...
am
I
too
loud
for
anyone
besides
myself?
Okay.

There
are
three
parts
to
what
I
would
like
to
present
this
morning.
I
am
coming
here
primarily
from
the
international
arena
where
I
have
worked
heavily
with
the
International
Union
of
Pure
and
Applied
Chemistry...
we
pronounce
that
u-pak;
you
will
hear
that
many
times...
the
International
Organization
for
Standardization,
ISO,
and,
finally,
the
International
Atomic
Energy
Agency,
the
IAEA.
I
haven't
learned
to
pronounce
that,
but
those
are
the
three
primary
international
organizations
that
I
have
interacted
with
that
I
hope
may
be
of
some
help.

So,
there
are
three
parts
to
my
talk.
First,
a
little
bit
of
history.
What
I
thought
was
going
to
be
solved
in
the
late
part
of
the
century,
the
20th
century,
that
is,
and
I
was
going
to
go
on
to
other
activities,
it
didn't
happen,
and
I
guess
that
is
partly
why
we
are
here,
so
I'll
recount
a
little
bit
of
the
chaos
of
the
20th
century.

More
importantly,
what
has
taken
place
in
the
last
decade
or
so
on
the
international
front
in
terms
of
a
harmonized
position.
Perhaps
some
of
the
lessons
we
learned
there
can
be
applicable
to
what
needs
to
be
done
in
this
country.

Finally,
some
open
questions
and
critical
issues.

Before
starting
that,
I
would
like
to
tell
you
about
a
trip
I
had
to
one
of
our
national
laboratories
not
long
ago,
and
I
was
required
to
wear
a
radiation
monitor,
a
dosimeter,
and
then
they
gave
me
a
report
following
my
visit.
The
print
may
be
a
little
bit
too
small,
but
the
main
part
of
the
message
is
there
are
a
lot
of
zeroes
in
there,
and
then
they
go
on
to
tell
us
that
our
limit
is
100
millirem
per
year,
and
this
was
a
two-day
visit,
and
they
were
counted
as
zeroes,
and
you
get
5
millirem
if
you
travel
across
the
country
in
both
directions
and
so
forth.
This
is
the
first
part
of
a
two-part
lecture
on
virtual
reality.
This
I
refer
to
as
virtual
absence.
Later
on
in
the
discussion,
I'll
talk
about
virtual
presence.
They
both
can
be
disastrous
and
very
costly.

Next,
I'd
like
to
give
you
a
little
mental
and
visual
exercise.
Some
may
have
seen
this,
so
I
ask
you
not
to
vote,
and
I
am
taking
a
little
bit
of
didactic
license
in
presenting
this
data.
I
did
not
make
them
up;
the
International
Atomic
Energy
Agency
did,
and
I
am
just
telling
you
their
observations,
and
I
am
giving
the
magnitude
of
the
signal,
not
the
units.

You
can
imagine
they
are
random
samples
that
come
in.
They
may
have
a
space
component,
they
may
have
a
time
component,
but
looking
at
them,
let
me
just
quickly
ask
how
many
of
you
think
that
there
is
a
real
signal
there.
(No response.)

DR.
CURRIE:
So,
is
that
the
IDE?
I
have
forgotten
all
these
terms,
but
okay,
I
saw
zero.
Perhaps
I
asked
the
question
in
the
wrong
order.
We'll
come
back
to
this
later
in
the
lecture.

So,
on
to
the
first
topic,
namely,
history,
and
I
prepared
a
series
of
milestones
covering
the
past
200
years.
Necessarily,
a
lot
was
left
out,
and
I
am
not
going
to
take
any
time
on
this,
because
time
is
restricted,
and
I
think
I
have
more
important
things
to
convey,
but
there
are
some
fascinating
tales
that
go
into
the
people
and
the
dates
that
are
here.

When
I
came
into
the
area...
and
I
put
that
down
as
part
of
the
ancient
history
slide...
this
is
what
I
saw.
Some
of
you
may
have
seen
this.
I
was
just
a
curious
chemist
and
reading
about
capabilities
of
different
methods
in
the
literature,
and
this
is
what
came
out,
a
range
of
1000,
and
if
you
plot
them
in
the
right
order,
it
becomes
an
exponential
curve.

I
was
advised
by
a
physicist
colleague
at
the
time
not
to
publish
the
paper.
He
said
it
was
too
esoteric,
nobody
will
be
interested,
no
one
will
read
it.
I
ignored
his
advice,
and
I
found...
one
finds...
that
there
are
two
kinds
of
questions,
just
summarizing
the
concepts.

Is
there
a
pointer
nearby?
Great,
thank
you.

So,
there
are,
in
a
sense,
two
questions,
at
least,
from
one
formalism
of
detection.
One
is
how
little
can
we
detect,
and
the
other
is
has
something
been
detected,
and
these
are,
I
think,
profound
different
sides
of
the
measurement
issue.
One
refers
to
the
capability
of
the
measurement
process,
and
that
is
where
the
international
organizations
that
I
have
worked
with
talk
about
such
issues
as
detection
and
quantification
limits
as
characterizing
the
process,
not
the
result.
The
other
refers
specifically
to
results.

Confusion
between
these
two,
I
think,
has
reigned
and
led
to
many
difficulties.

Responses
that
I
got
from
colleagues
along
the
way
were
those
who
knew
what
the
capability
was
intrinsically...
intuitive.
I
have
a
lot
of
respect
for
that,
by
the
way.
Ad
hoc,
I
don't
have
a
lot
of
respect
for
that,
but
it
is
easy
to
work
with
something
that
is
given
to
you.
Signal
to
noise
refers
to
the
second
question
as
I
am
characterizing
it,
and
it
led
to
a
lot
of
confusion,
because
the
term
detection
limit
was
being
used
in
two
different
ways.

One
of
my
best
friends
recommends
avoiding
the
issue
by
always
having
big
signals.
That
would
be
lovely,
and
hypothesis
testing
is
what
I
shall
be
relying
upon
where
we
have
false
positives
and
false
negatives
as
far
as
detection
is
concerned.

And
to
end
my
journey
through
the
first
part
of
the
20th
century,
this
is
what
I
found
then,
and
I
see
we
have
many
more
acronyms
now,
a
whole
series
of
abbreviations,
a
number
of
organizations,
a
number
of
rules.
The
false
positives
at
that
time
ranged
from
0.05
to
5
percent,
false
negatives
from
5
to
50
percent.
50
percent
is
when
you
don't
recognize
that
there
might
be
false
negatives,
and
you'll
see
the
ratio
can
be
1000
if
you
have
different
points
of
view.

This,
in
part,
stimulated
the
decade
of,
I'll
call
it,
international
progress
during
the
'90s,
because
the
Codex
Committee
on
Measurement
and
Sampling,
International
Committee,
part
of
the
Food
and
Agricultural
Organization,
asked
IUPAC
when
I
was
a
member
of
that...
and
I
still
am,
in
fact...
in
1990
to
help
them
resolve
this
question,
and
they
spoke
of
limit
of
detection,
limit
of
determination,
not
particularly
distinguishing
them,
but
they
asked
for
help.
That
is
when
I
got
to
work
internationally
on
this
issue.

Unbeknownst
to
me,
they
also
asked
the
International
Organization
for
Standardization
for
help.
So,
two
international
paths
were
beginning
at
the
same
time,
and
this
turned
out
to
be
very
interesting
indeed.

So,
on
to
part
two,
and
here
I
will
take
a
moment
or
two
to
point
out
the
milestones
of,
primarily,
the
'90s.
Had
I
taken
time
to
discuss
the
milestones
of
the
last
two
centuries,
this
first
bullet
would
have
more
meaning
to
you,
but
the
confusion
of
both
the
terminology
and
the
meaning
of
limited
detection
led
to
a
formalized
definition
by
IUPAC
that
effectively
defined
it
in
terms
of
the
critical
value,
decision
threshold
to
be
applied
to
experimental
results,
and
that
caused
a
lot
of
confusion
and
consternation
among
chemists,
and
the
article
by
Long
and
Winefordner
came
out
that
said
this
basic
definition
of
IUPAC
is
flawed.

The
Nuclear
Regulatory
Commission
I
put
down,
because
that
is
one
of
the
organizations
in
this
country,
which
I
believe
has
reached
consensus
on
the
issue.
They
are
generally
in
the
fortunate
case
of,
quote,
knowing
their
precision
or
imprecision,
because
they
can
rely
on
Mr.
Poisson.
It
is
easy.
You
just
take
the
square
root
of
the
number
of
counts.
That
helps
enormously,
provided
that
is
the
primary
source
of
imprecision.
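The Poisson shortcut mentioned here, written out as a sketch for N observed counts:

```latex
\sigma_{\text{Poisson}} \approx \sqrt{N}, \qquad
\frac{\sigma_{\text{Poisson}}}{N} \approx \frac{1}{\sqrt{N}}
```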

So,
there
is,
perhaps,
an
example
of
some
national
consensus
in
this
country
that
has
emerged
in
the
nuclear
industry
at
least.

The
Codex
request
I
just
mentioned
to
ISO
and
IUPAC
in
1990.
My
good
friend,
Bill
Horwitz,
whom
some
of
you
may
know
who
was
at
Food
and
Drug,
was
also
in
both
organizations,
I
think,
and
he
alerted
me
to
the
fact
that
ISO
was
working
on
this
problem
at
the
same
time
IUPAC
was,
and
we
saw
something
of
one
another's
documents,
and
we
were
reflecting
the
confusion
that
has
reigned
during
the
whole
century.
ISO
used
one
sort
of
terminology,
IUPAC
the
other.
We
used
the
same
term
with
different
meanings,
and
we
had
the
same
meaning
with
different
terms.
So,
if
those
two
documents
had
come
forth
that
way,
we
would
have
had
international
confusion.

Well,
we
had
a
harmonization
meeting.
Maybe
it
reflects
the
sort
of
work
that
is
attempted
here.
It
was
not
easy.
It
was
extremely
difficult
but,
I
think,
extremely
important.
We
perhaps
had
a
better
chance
at
success,
because
we
only
had
six
participants,
but,
of
course,
then,
both
organizations
had
their
whole
communities
that
had
to
look
at
this.

Anyway,
we
had
an
intensive
meeting
in
1993
in
Washington
and
then
a
few
meetings
following
that,
and
we
did
a
bit
of
negotiation,
especially
on
terminology.
Happily,
we
had
the
same
basic
concepts,
namely,
hypothesis
testing
false
positives,
false
negatives.
And
we
did
reach
consensus,
not
perfect
but
awfully
good.

Then,
IUPAC
came
out
with
formalized
recommendations
that
were
published
in
1995,
and
ISO
came
out
with
two
documents
thus
far
on
what
they
called
the
capability
of
detection
in
1997
and
a
year
or
two
after
that.
Then,
the
formal
nomenclature
document
of
IUPAC
which
effectively
sets
the
law,
as
it
were,
for,
intended,
a
period
of
a
decade
or
so
was
published
in
1998
covering
many
things
besides
detection
and
quantification,
but
here
it
is
if
anyone
wishes
to
glance
at
it.
So,
this
is
the
international
standard
for
chemists,
at
least.

Contained
in
that
are
many
topics
related
to
quality
assurance,
methods
of
measurement,
and
so
forth
and
including
detection
and
quantification,
namely,
this
formal
recommendations
they
published
in
'95.

I
will
just
say
as
a
footnote
to
get
recommendations
published
through
IUPAC,
a
decade
is
not
an
unusual
period.
It
has
to
be
reviewed
by
special
committees
within
the
organization,
then
distributed
to
some
15
editors
of
chemical
journals
plus
numerous
readers
and
statisticians
and
then
to
all
of
the
member
states
for
review.
So,
that
doesn't
come
quickly
or
easily.

The
International
Atomic
Energy
Agency
I
have
been
working
with
for
the
past
few
years
on
the
topic
has
a
draft
document
that
includes
the
issues
of
detection
and
quantification,
and,
happily
for
many
of
you,
they
include
worked
out
examples.

Oh,
finally,
there
are
at
least
two
ISOs.
There
are
probably
many
ISOs.
Another
group
that
was
concerned
with
radiation
came
out
with
standards
in
1999,
and,
interestingly,
they
are
much
closer
to
the
IUPAC
standards,
perhaps,
because
they
involved
nuclear
chemists
such
as
myself.

I
am
going
to
give
a
quick
definition
graphically
and
then
algebraically.
This
is
what
I
like
to
call
my
earthquake
metaphor.
I
tried
to
pick
a
topic
that
was
innocuous,
at
least
to
the
chemists,
so
I
picked
earthquakes,
and
then
I
started
reading
about
Seattle
and
Los
Angeles
and
Ahmedabad,
but
the
idea,
just
to
convey
the
concept,
is
that
there
is
some
level
of
loss
in
some
kinds
of
units,
probably
multivariate,
that
is
deemed
acceptable
to
society.
That
then
dictates
some
measure
of
a
concentration
or
an
event
that
that
acceptable
cost
level
corresponds
to,
and
from
my
perspective,
this
is,
by
far,
the
most
complicated
part
of
the
whole
story.
Anyway,
here
we
are,
presented
with
some
sort
of
what
I
called
a
requisite
limit
that
we
would
like
our
methods
of
measurement
to
be
able
to
meet,
so
they
must
not
exceed
that,
and
then
this
just
is
the
typical
no
hypothesis
distribution
around
something
I'll
call
B
that
can
be
a
blank,
can
be
a
background,
can
be
a
baseline,
with
a
distribution
our
Type
I
error,
the
false
positive,
alpha.
And
then,
at
the
detection
limit,
having
made
the
decision
at
L
C,
you
have
a
false
negative
beta.

And
I
have
purposely
shown
the
variances
at
different
concentrations
increasing,
which
it
usually
does,
and,
unfortunately,
it
is
not
always
normal.

That
is
the
graphical
representation,
and
the
definition
that
IUPAC
and
ISO
and
IAEA
are
agreeing
on
are
put
here
algebraically,
namely,
stating
what
we
stated
graphically,
that
the
probability
that
the
observed
or
estimated
signal
exceeds
the
critical
value,
given
that
the
true
value
is
zero,
the
null
hypothesis,
is
less
than
or
equal
to
the
false
positive
level
that
is
agreed
upon.

I
put
an
inequality
here
for
the
sake
of
those
distributions
that
are
discrete,
and
that
is
the
whole
nuclear
world,
namely,
the
counting
type
distributions
where
you
can't
have
continuous
values
of
alpha.

Given
L
C,
you
can
then
define
the
minimum
detectable
level
where
that
observed
signal
now
is
less
than
or
equal
to
the
critical
level,
the
threshold,
given
that
the
true
signal
is
equal
to
the
detection
limit
that
is
set
equal
to
the
false
negative
error,
beta,
and
then
the
quantification
limit
where
K
Q
is
1
over
the
relative
standard
deviation
at
the
quantification
limit.
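Writing out the three defining relations just described (notation assumed here: S-hat is the observed or estimated net signal, S the true value; the defaults alpha = beta = 0.05 and k_Q = 10 are the IUPAC choices referred to just below):

```latex
\begin{align*}
  \text{critical value } L_C:\quad & \Pr(\hat{S} > L_C \mid S = 0) \le \alpha\\
  \text{detection limit } L_D:\quad & \Pr(\hat{S} \le L_C \mid S = L_D) = \beta\\
  \text{quantification limit } L_Q:\quad & L_Q = k_Q\,\sigma_Q,
      \qquad k_Q = 1/\mathrm{RSD}_Q \ (= 10 \text{ by default})
\end{align*}
```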

These
are
default
values
taken
by
IUPAC
and
the
Nuclear
Regulatory
Commission.
The
International
Organization
for
Standardization
does
not
treat
the
quantification
limit,
and
they
set
alpha
and
beta
equal
to
0.05.
They
are
not
default
values,
according
to
their
standard,
but
we
in
IUPAC
believe
they
need
to
be
defined
according
to
the
needs
of
the
situation.

This
is
simply
what
happens
in
the
simplest
possible
case
where
variance
is
constant
over
the
range
of
interest.
σ₀
is
the
standard
deviation
of
the
estimated
net
quantity
when
it
is
zero,
and
if
it
is
the
normal
statistic,
if
you,
quote,
know
sigma,
this
is
the
multiplier,
and
I
gave
an
example
for
the
case
where
it
is
based
on
replication
and
took
an
extremely
small
degrees
of
freedom,
4,
namely,
five
measurements,
and
then
this
gets
modified
by
Student's
t
and
you
use
the
estimated
standard
deviation.

The
minimum
detection
limit,
then,
algebraically
is
this
representation,
and
now
it
is
the
non-central
t,
delta
times
σ₀
for
the
detection
limit
which
is
approximately
2t.
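A worked numerical sketch of this simplest case, taking alpha = beta = 0.05 (the tabled values 1.645 and 2.132 are standard; the factor-of-two form is the approximation just mentioned):

```latex
\begin{align*}
  \sigma_0 \text{ known:}\quad & L_C = z_{0.95}\,\sigma_0 = 1.645\,\sigma_0,
      \qquad L_D \approx 2\,z_{0.95}\,\sigma_0 \approx 3.29\,\sigma_0\\
  s \text{ from five replicates } (\nu = 4):\quad & L_C = t_{0.95,4}\,s = 2.132\,s,
      \qquad L_D \approx 2\,t_{0.95,4}\,\sigma_0 \approx 4.26\,\sigma_0
\end{align*}
```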

The
problem
here
is
that
the
detection
limit,
based
on
the
replication
model,
has
to
have
the
standard
deviation
in
it,
not
the
estimated
standard
deviation.
So,
this
needs
to
be
σ,
not
S,
but
the
decision
is
made
with
S,
which
means
that
this
is
a
random
variable.
It
depends
on
the
particular
value
of
S,
but
in
the
long
run,
if
the
assumptions
are
correct,
you
will
come
out
with,
for
example,
the
5
percent
false
positive
or
1
percent.
What
this
tells
us
is
that
if
you
don't
know
sigma,
you
have
an
uncertainty
band
around
the
detection
limit
based
on
the
uncertainty
in
sigma.

The
same
is
true
of
the
quantification
limit.
This
is
defined
in
terms
of
the
true
standard
deviation.

We
have
assumed
here
that
variance
is
constant
which,
in
certain
kinds
of
measurements
is
true
and
in
many
kinds
is
not,
but
that
is
just
the
simplest
case.
I
don't
propose
to
go
into
more
complexities
here.
I
think
it
is
probably
not
in
order
except
just
to
present
the
concepts.

Here
are,
sort
of
summarizing
this
second
part,
some
of
the
primary
differences
between
the
harmonized
results
of
IUPAC
and
ISO,
namely,
ISO
treats
B
as
an
intercept
of
a
calibration
function,
and
IUPAC
says
we
have
to
pay
attention
to
the
blank,
and
I
will
say
a
little
bit
more
about
this
later.
This
is
a
really
severe
problem.

ISO
uses
the
replication
value
from
fitting
the
calibration
curve,
for
example.
IUPAC
says
we
have
got
to
pay
attention
to
the
total
uncertainty.
And
ISO
treats
detection;
IUPAC
detection
and
quantification,
and
then...
I
won't
address
this
next
issue.
There
is
just
not
enough
time.

And
then,
the
other
difference
between
the
two
is
that
for
ISO,
alpha
and
beta
are
defined.
For
IUPAC,
they
are
parameters
that
may
be
adjusted
according
to
the
needs
of
the
situation.

End
of
part
two.

Part
three
will
be
quick
and
graphical,
and
here
is
the
essence
of
what
I
wish
to
bring
to
your
attention
in
part
three.
As
with
Bill's
presentation,
this
is
far
from
a
complete
list,
but
I
hope
it
is
some
of
the
more
important
issues.
And
I
am
speaking
mostly
from
the
IUPAC
perspective,
a
lot
of
it
from
personal
perspective.

This
is
certainly
IUPAC.
They
say
that
to
even
address
these
questions,
you
need
to
fully
specify
the
measurement
process.
That
means
including,
for
example,
sampling,
for
example,
the
number
of
replicates
built
into
your
process.
If
you
are
talking
only
about
an
instrument
capability,
then
that
would
be
the
measurement
process.

The
variance
function
was
already
mentioned
by
Bill,
namely,
does
sigma
change
with
concentration
and,
if
so,
how.
That
needs
to
be
addressed
except
for
the
critical
value.
There,
you
need
only
the
variance
at
the
blank
level.

The
blank
I
am
going
to
say
a
little
more
about.
I
have
had
reason
during
the
last
year
or
two
to
look
into
the
issue
of
the
blank,
and
it
can
be
profound
in
its
influence.

There
are
different
domains
you
can
deal
with.
Theta
is
the
multivariate
area
where
you
like
to
use
a
series
of
measured
parameters
to
identify
a
source
of
pollution,
for
example.
That
leads
to
some
very
interesting
concepts
in
mathematics
and,
of
course,
has
great
importance.
Multiple
detection
decisions.
What
I
presented
with
the
earlier
equations,
if
you
make
one
decision,
you
get
the
kinds
of
probabilities
we
are
talking
about.
If
you
need
to
make
a
whole
series
such
as
the
first
slide
I
showed
where
everybody
voted
there
is
nothing
there,
then
you
need
to
pay
attention
to
what
does
this
do
on
the
overall
probability
of
false
positives.
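As a sketch of why this matters, suppose m detection decisions are made independently, each at false-positive level alpha (independence is an assumption for illustration):

```latex
\Pr(\text{at least one false positive}) = 1 - (1 - \alpha)^m \approx m\,\alpha
\quad\text{for small } \alpha
```

For example, a thousand spectrum channels each examined at alpha = 0.05 would be expected to produce on the order of fifty false hits.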

Intercomparison
data
I
will
say
something
about
in
a
moment.

Virtual
reality.
I
talked
about
the
first
part.
I'll
give
you
the
second
part.
In
a
meeting
two
years
ago
with
the
Department
of
Defense,
DOE,
a
number
of
major
organizations,
they
were
concerned
about
the
effect
of
disposing
of
ordnance
by
setting
it
off
in
the
field
and
pollutants
are
created,
and
they
made
lots
of
measurements,
very
expensive
tests
of
the
emissions,
and
80
percent
of
them
were
non-detected.
Those
were
set
equal
to
zero,
and
it
is
just
like
that
first
example,
namely,
there
was
a
very
severe
bias,
and
then
that
was
put
into
a
computer
code
to
find
out
what
would
be
produced.

The
other
example
was
nuclear
facilities,
DOE,
Nuclear
Regulatory
Commission,
where
they
have
repositories
for
low-level
radioactive
waste,
and
they
were
using
some
inappropriate,
inexpensive
procedures
to
measure
what
they
were
putting
into
it,
and
there
is
a
limit
on
how
much
you
can
put
into
each
repository.
It
goes
into
the
inventory.
And
through
a
series
of
interesting
procedures,
this
inadequate
procedure
that
led
to
non-detected
activity,
those
were
put
in
as
upper
limits.

So,
what
was
ambiguous
became
positive,
put
into
the
inventory,
added
up,
and
soon,
as
my
colleague
in
the
nuclear
industry
said,
our
repositories
are
filling
up
with
virtual
radioactivity.
So,
there
is
my
other
virtual
activity
case.

And
I
just
have
about
two
more.
I
want
to
emphasize
the
blank.
I
have
not
seen
as
much
attention
given
to
it
as
I
think
it
should
have.
I
have
been
involved
very
recently
in
some
very
difficult
measurements
by
three
different
laboratories
on
some
radionuclides
and
stable
elements
where
intercomparisons
showed
that
they
just
were
not
self-consistent.

When
we
looked
into
the
issue
of
the
blank,
this
is
what
repeatedly
was
found,
a
distribution
that
can't
be
negative.
It
is
normally
positively
skewed.
I
shouldn't
have
said
normally.
It
is
positively
skewed,
and
oftentimes,
there
is
no
conceptual
reason
to
select
any
particular
distribution
function.

Lognormal
is
fun
to
fit
and
sometimes
convenient
to
manipulate
with,
but
beware,
it
is
not
necessarily
lognormal.
It
is
almost
always
positive
skew
unless
it
is,
for
example,
a
blank
in
a
reagent
that
is
a
well-controlled,
repeatable
level.

These
actually
represent
the
first
decade
of
a
20-year
series
of
measurements
of
trace
amounts
of
sulfur
blanks
in
the
laboratory
of
Bob
Kelly
at
NIST/
NBS,
and
he
perhaps
has
the
world's
record
for
high
sensitivity
sulfur
blanks,
and
this
is
what
he
finds.

But
be
careful.
Don't
interpret
this
as
a
distribution
you
can
then
fold
into
making
detection
decisions.
It
depends
on
the
weather.
He
found
the
blank
was
low
when
it
rained.
It
was
lower
when
the
wind
came
from
the
east
then
when
the
wind
came
from
the
west.
And
sulfur,
like
many
species,
is
mischievous,
because
it
can
go
from
gas
phase
to
solid
phase,
get
into
the
laboratory
air
even
if
it
is
well
filtered,
then
materialize
as
solid,
and,
of
course,
as
many
oxidation
states.

So,
this
is
an
overview
that
one
needs
to
pay
attention
to
if
you
are
working
at
a
level
where
the
blank
can
be
influential.
I
will
give
you
one
other
example.
This
is
from
something
that
I
published...
I
am
sorry,
that
is
in
press,
in
the
Fresenius'
Journal
of
Analytical
Chemistry.

This
is
Kr
85
in
the
atmosphere
as
measured
in
Prague,
Czechoslovakia
over
a
period
of
a
decade
or
so
in
the
mid
'80s
to
the
mid
'90s,
and
this
is
what
was
seen...
I
am
sorry.
I
am
giving
you
a
preview
of
what
I
didn't
intend
to.
As
you
say,
overlaying
is
convenient
sometimes.

So,
here
it
is.
It
sort
of
looks
lognormal,
too,
but,
in
fact,
as
the
title
implies,
we
have
the
temporal
information
as
well,
and
that
is
a
whole
lecture
in
itself.
It
is
just
a
fascinating
story
about
socio-political
nuclear
issues
in
Europe.
But
this
is
what
the
distribution
looks
like.
If
you
are
going
to
use
it,
be
careful.
And
I
will
just
end
this
blank
discussion
by
saying
that
is
Chernobyl.

You
remember
the
slide
that
we
voted
on
at
the
beginning?
Well,
here
is
the
whole
story.
The
International
Atomic
Energy
Agency
includes,
as
a
regular
part
of
the
quality
assurance
sample
data
manual,
a
set
of
presumably
representative
nuclear
data,
and
this
is
supposed
to
represent
a
gamma
ray
spectrum,
and
the
peak
that
you
all
voted
on
is
right
up
here.
This
is
a
log
scale,
and
this
is
what
it
looks
like
when
you
amplify
it,
and
these
were
the
dots
you
saw.

The
real
peak
that...
these
were
created
data
to
simulate
reality,
again,
virtual
reality,
but
to
test
the
computational
phase
of
the
measurement
process,
and
I
have
to
stress
that
that
is
a
part
of
the
measurement
process,
the
evaluation
process.
If
different
methods
are
used,
you
will
get
different
detection
limits,
for
example,
appropriately.

Anyway,
here
is
the
peak
that
you
saw,
and
here
is
the
rest
of
the
spectrum.
Since
it
is
a
log
scale,
you
can
see
some.
If
we
had
lots
of
time
and
interest,
you
could
also
tell
me
how
many
peaks
are
there
which
was
the
exercise
the
IAEA
offered
to
the
nuclear
scientists
of
the
world.
About
200
participated
in
this.
That
particular
spectrum
peak
was
found
by
about
50
percent
of
them.
So,
you
are
on
one
of
the
tails
of
this
distribution,
I
think.

The
point
here...
well,
there
are
two
points
I
would
like
to
make.
One
is
the
idea
that
there
can
be
reference
standard
data
that
can
be
used
for
evaluating
the
numerical
side
of
the
measurement,
and
I
think
some
of
those
have
been
used
in
the
information
that
is
going
to
be
presented
here
or
has
been
presented
here.

The
second
point
is
that
this
is
a
classic
example
of
multiple
detection
decisions.
Actually,
this
was
channel
number
gamma
ray
energy,
not
time
or
space,
and
the
point
is
you
need
to
decide,
as
you
proceed
through
the
entire
spectrum,
is
there
a
peak
there,
is
there
not,
and,
hence,
you
make
many
detection
decisions.
The
bottom
line
was
that
those
using
very
sophisticated
computer
programs,
not
taking
this
into
account,
got
numerous
false
positives.
You
might
guess
what
the
most
reliable
method
was.
The
human
eye,
the
test
I
gave
you.
Our
algorithms
are
remarkably
sophisticated
and
avoid
this
false
positive
issue.

I
will
end
with
a
set
of
international
references
relating
to
the
material
I
have
discussed,
and
I
have
the
so-called
orange
book.
That
is
this
compendium
of
analytical
nomenclature.
And
I
think,
with
that,
I
will
end
and
save
others
for
discussion
purposes.

Thank
you,
Bill,
for
not
giving
me
the
five-minute
sign.

MR.
TELLIARD:
Thank
you.

QUESTION
AND
ANSWER
SESSION
MR.
TELLIARD:
We
have
time
for
a
few
quick
questions
if
you
have
any
right
now.
David?
Microphone,
microphone,
microphone.
It
is
the
tall
palm
tree.

MR.
COLEMAN:
I
am
David
Coleman
from
Alcoa.

I
was
wondering
if
you
could
recommend
a
handful
of
references
for
someone
who
is
just
starting
to
dig
into
this
field?

DR.
CURRIE:
Yes.

MR.
TELLIARD:
I
think,
when
we
get
the
copies
of
his
overheads,
David,
we
will
make
them...

DR.
CURRIE:
I
might
also
give
you
a
reprint
that
has
really
a
handful.

MR.
TELLIARD:
Anyone
else
right
now?

MR.
ROBINSON:
I
have
a
comment.
Jim
Robinson,
EPA,
Kansas
City.

We
have
had
some...
I
have
looked
into
this.
I
am
also
a
mathematician,
but
I
am
not
a
statistician,
and
I
noticed
that
near
the
detection
limit
or
lower
limits,
one
thing
that
people
assume...
and
it
is
in
the
literature...
is
that
there
is
a
random
error
involved.
In
other
words,
everything
is
random,
but
that
is
not
always
the
case.
Sometimes,
you
have
to
subtract
your
blank,
and
that
is
where
the
problem
comes.
You
are
mixing
random
error
with
determinate
or
bias,
and
that
is
where
the
problem
comes.

If
everything
is
random
error,
it
is
not
so
hard
to
make
a
definition.

DR.
CURRIE:
I
totally
agree.
I
think
random
error
is
the
rare
bird,
actually,
but
it
makes
for
great
statistics,
not
the
picket
in
a
fence.
I
should
say
something
about
the
blank.
It
is
the
random
variation
of
the
blank
that
we
really
need
to
worry
about
and
especially
the
tail
of
those
asymmetric
distributions,
and
we
don't
really
know
much
about
the
tail.
The
rare
fellows
are
not
seen,
and
if
we
make
a
model,
we
could
be
misleading
ourselves.

So,
there
are
some
important
questions
there.
There
are
some
extreme
event
statistics
that
might
be
interesting.
I
don't
know.

But
as
far
as
bias
is
concerned,
if
we
measure
the
blank
and
use
the...
or
an
average
blank
or
background
and
use
the
same
value
repeatedly,
absolutely,
we
have
built
in
a
bias.
The
results
will
be
correlated,
and
we
may
have
an
alpha
that
is
very
different
from
what
we
intend.

That
is
one
reason
I
recommend
actually
replicating
each
time,
even
if
it
is
a
small
number
of
replicates,
and
using
Student's
t.
There
is
some
other
work
I
have
done
recently,
learning
from
statisticians
that
t
can
be
a
wonderfully
robust
statistic,
and
if
you
design
your
blank
measurement
carefully,
that
can
save
a
lot
of
woe.
I
can
talk
to
you
about
that
later.

There
is
another
source
of
systematic
error.
That
is
calibration.
It
is
perhaps
less
important
than
the
blank,
because
it
can
be
usually...
the
calibration
factor,
if
you
have
a
linear
curve,
at
least,
can
usually
be
determined
fairly
precisely,
but
if
you
use
the
same
calibration
curve
over
and
over
again,
you
have
again
built
in
a
bias.

MR.
TELLIARD:
Thank
you.
Yes,
sir?

MR.
FLORES:
Ray
Flores,
Region
VI.

You
mentioned
the
radiochemistry
results
are
also
reported
with
the
associated
error
based
on
Poisson
or...
forgive
me
if
I
pronounce
that
incorrectly...

DR.
CURRIE:
Poisson.

MR.
FLORES:
Poisson
statistics.
Thank
you.
Now,
is
that
spontaneous
nuclear
event
the
primary
source
of
uncertainty
or
error...
I
forget
which
term
it
is...
as
opposed
to
the
only
source?

DR.
CURRIE:
Crucial
question.
All
too
often
assumed
so.
If
you
work
in
the
low-level
nuclear
world
which
has
been
my
home
for
many
decades,
the
counts
are
so
small,
number
of
counts,
that
the
Poisson
error
in
a
relative
sense
is
so
large
that
it
wipes
out
almost
everything
else,
but
one
must
be
cautious
of
making
that
assumption,
and,
indeed,
many
are
very
cautious
indeed.

The
accelerator
mass
spectrometry
world
which
I
have
become
a
part
of
for
the
last
20
years
has
a
rule.
I
have
tried
to
tell
them
to
modify
it
a
little
bit,
but
they
replicate,
which
is
wonderful,
but
they
also
have
counting
statistics,
the
number
of
ions
that
reach
the
detector
at
the
end
of
the
accelerator.
So,
they
can
calculate
it
both
ways.
They
can
calculate
S,
and
they
can
calculate
Poisson's
sigma,
and
their
almost
universal
rule
of
thumb
throughout
the
world
is
to
quote
the
one
that
is
larger.
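A minimal sketch of that rule of thumb, assuming the replicates are total counts (the function name and the example numbers are hypothetical):

```python
import math
import statistics

def reported_sigma(counts):
    """Quote the larger of the replicate scatter and the Poisson (counting)
    estimate of the uncertainty of the mean count, per the rule of thumb
    described above."""
    n = len(counts)
    mean = statistics.mean(counts)
    s_replicate = statistics.stdev(counts) / math.sqrt(n)  # scatter of the mean
    s_poisson = math.sqrt(mean / n)                        # Poisson sigma of the mean
    return max(s_replicate, s_poisson)

# hypothetical replicate count totals
print(reported_sigma([980, 1012, 1005, 990]))
```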

That
also
leads
to
some
bias.
You
can
ask
yourself
what
happens
if
the
Poisson
error
is
always
the
dominant
error
and
they
use
that
rule.
At
least,
they
are
recognizing
the
fact
that
it
may
be
something
in
addition
to
Poisson.
Very
important
issue.

MR.
TELLIARD:
Thank
you,
Lloyd.

There
will
be
a
panel
at
the
end
of
this
workshop
which
will
be
open,
and
we
can...
any
questions
you
come
up
with,
we'll
get
another
shot
at
it.
Thank
you,
again.
EPA'S
METHOD
DETECTION
LIMIT
(MDL)
AND
QUANTITATION
LIMIT
MR.
TELLIARD:
Our
next
speaker
is
Paul
Britton.
You
remember
Paul
Britton
when
he
had
a
blue
jersey
with
a
big
EPA
on
his
chest.
Since
then,
Paul
has
retired
and
is
now
represented
today
as
a
DynCorp
person.

Paul
has
been
involved
in
numerous
studies
over
the
last
25
or
more
years
at
EPA
on
our
QC
sampling,
our
methods
performance
work,
and
Paul
is
going
to
give
you
a
talk
today
a
little
bit
about
the
method
detection
limit
and
quantitation
limits
as
EPA
has
implemented
them
and
used
them.

MR.
BRITTON:
Good
morning,
ladies
and
gentlemen.
As
Bill
said,
I
represent
DynCorp
at
this
meeting,
and
I
would
like
to
establish
some
distinctions
between
detection
and
quantitation
limits,
as
I
see
them.

Let's
go
back.
I
wanted
that
one
up.
If
we
get
forward
and
reverse
in
the
right
order,
we'll
be
all
right.

First
thing
is
probably
the
use
of
the
term
quantitation.
I
think
that...
I
tried
to
look
this
up
in
the
big
dictionary
that
we
have
in
our
library,
and,
unfortunately,
quantitation
doesn't
appear
in
that
book.
Quantification,
however,
does,
so
I'll
hope
you
will
excuse
the
fact
that
I
am
going
to
use
quantification
or
attempt
to
use
quantification
in
place
of
quantitation
throughout
this
talk.

When
I
use
the
term
detection
limit,
I
am
frequently
talking
about
something
that
is
defined
from
a
graph
such
as
this.
If
we
represent
the
distribution
of
analytic
results
from
analysis
of
blanks
or
analysis
of
samples
of
zero
concentration,
then
if
this
represents
the
distribution
of
that
data
and
alpha
percent
of
that
distribution
exceeds
the
detection
limit,
then,
basically,
that
is
the
definition
of
detection
limit
that
I
am
going
to
be
using.

You
could
also
call
these
detection
limits
alpha
detection
limits.

Now,
I
am
also
going
to
be
talking
about
quantification
limits,
and
by
that
I
mean
the
concentration
where
beta
percent
of
the
data
from
analysis
of
a
sample
at
that
concentration
would
fall
below
the
detection
limit.
So,
this
represents
that
data.
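Stated compactly, the two definitions just described, for a measured result X at a given true concentration:

```latex
\begin{align*}
  \text{alpha detection limit } DL:\quad & \Pr(X > DL \mid \text{conc} = 0) = \alpha\\
  \text{quantification limit } QL:\quad & \Pr(X < DL \mid \text{conc} = QL) = \beta
\end{align*}
```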

Now,
you
can
see
that
what
I
am
calling
the
quantification
limit
could
also
be
called
an
alpha-beta
detection
limit,
and
I
will
try
and
make
clear
why
I
don't
choose
to
call
it
that.

One
of
the
alpha
detection
limit
estimates
that
is
available
in
the
literature
is
the
MDL
which
can
be
found
in
40
CFR
Part
136,
Appendix
B
in
the
Code
of
Federal
Regulations,
and
the
concept
there
is
that
the
relationship
between
standard
deviation
in
absolute
or
measurement
units
and
analytical
concentration
is
somewhat
of
a
hockey
stick
where,
basically,
you
have
at
very
low
concentration
ranges
kind
of
a
constant
absolute
standard
deviation,
and
then,
at
some
concentration,
the
relationship
becomes
an
increasing
straight
line.
In
other
words,
standard
deviation
begins
to
increase
in
absolute
units
as
concentration
increases
in
absolute
units.
The
MDL
concept
really,
instead
of
estimating
the
standard
deviation
from
blank
analysis
and
using
that
standard
deviation
to
estimate
MDL,
because
there
are
problems
in
some
analytical
systems
and
what
not
in
getting
data
at
zero
concentration
and,
certainly,
getting
data
that
meets
the
assumption
of
a
normal
distribution
at
that
concentration,
EPA
had
the
idea
of
generating
that
standard
deviation
estimate
at
a
somewhat
higher
concentration
level
where
there
was
better
probability
that
you
would
have
analytical
data
to
work
with
in
computing
the
standard
deviation.

So,
basically,
it
tells
you
to
generate
your
replicate
analysis
on
a
sample
at
a
higher
concentration
than
where
you
initially
believe
the
MDL
to
be.
Then,
you
calculate
the
standard
deviation
in
absolute
units
from
those
seven
measurements,
providing
a
standard
deviation
estimate
with
six
degrees
of
freedom
which
either
is
in
this
level
area
of
the
curve
and,
therefore,
essentially
equal
to
the
standard
deviation
at
zero
concentration
or
perhaps
in
the
early
part
of
this
increasing
relationship
which
would
mean
that
that
standard
deviation
might
be
slightly
larger
than
it
should
be,
thus
providing
a
conservative
estimate
of
what
the
MDL
is,
if
anything.
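A minimal sketch of that calculation in code, assuming exactly seven replicates so that the one-sided 99 percent Student's t value for six degrees of freedom (3.143) applies; the replicate numbers are hypothetical:

```python
import statistics

T_99_6DF = 3.143  # one-sided 99 % Student's t, 6 degrees of freedom (seven replicates)

def mdl_from_replicates(results):
    """MDL per the 40 CFR 136 Appendix B recipe described above: spike low,
    analyze seven replicates, multiply the standard deviation by Student's t."""
    s = statistics.stdev(results)  # n - 1 = 6 degrees of freedom
    return T_99_6DF * s

# hypothetical replicate results (concentration units, e.g. ug/L)
replicates = [0.52, 0.44, 0.61, 0.48, 0.55, 0.50, 0.47]
print(round(mdl_from_replicates(replicates), 3))
```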

Now,
the
theory,
then,
is
that
alpha
percent
of
the
data
from
analysis
of
blanks
would
exceed
the
MDL,
and
this
would
represent
the
analytical
data
that
would
result
from
analysis
of
a
sample
at
the
MDL.

You
can
see
that
half
of
this
data
falls
below
the
MDL
in
fact,
because
there
is
a
great
deal
of
variability
in
the
data
at
those
levels
of
concentrations.

Let's
see.
We
have
a
problem
with
order
here
perhaps?
I
was
expecting
another
slide.
Let's
see
if
I
can
find
it.
Tell
you
what.
I
have
some
overheads.
I
am
going
to
bring
that
slide
down
to
the
overhead
projector
and
try
and
get
it
up
there.

This
is
the
slide
I
wanted
to
get
to
next,
and,
basically...
no,
it's
not
going
to
hook
on
right.
I
guess
I
didn't
need
that
slide
at
the
moment,
so
we'll
go
back
to
the
overhead
project...
or
the
slide
projector,
rather.

One
of
the
points
I
would
like
to
make
before
we
go
on
is
that
when
I
say
quantitative
data...
and
there,
quantitative
is
a
perfectly
good
word...
I
mean
numerical
analytical
results.
Any
less
than
value
is
not
quantitative
data,
and
that
is
pretty
important
to
understand
before
I
go
on.

Here,
this
is
intended
to
depict
the
situation
when
you
are
following
a
reporting
policy
that
uses
the
MDL
as
a
reporting
limit.
This
is
a
very
common
practice
in
the
field,
very
common
use
of
detection
limits,
and
when
the
MDL
is
used
that
way
or
any
other
detection
limit,
any
other
limit
is
used
that
way,
you
generally
will
find
that
the
data,
the
analytical
data,
that
falls
below
the
reporting
limit
will
be
reported
as
less
than
the
MDL
or
less
than
that
limit,
and
higher
data,
higher
analytical
data,
would
be
reported
as
measured.

Now,
this
shows
the
situation
where
another
policy
is
followed
where
the
quantification
limit
or
any
other
quantification
estimate
is
used
as
a
reporting
limit,
in
other
words,
that
analytical
results
that
fall
below
that
value
are
reported
as
less
than
quantitation
limit.
Under
this
policy,
I
contend
that
the
quantitation
limit
is
no
longer
a
quantitation
limit
in
that
the
assumptions
that
were
made
in
producing
that
estimate
are
no
longer
valid.
This
would
be
equally
true
of
alpha-beta
detection
limits
if
an
alpha­
beta
detection
limit
is
used
as
a
reporting
limit.
Under
those
conditions,
all
the
data
in
the
shaded
area
is
essentially
going
to
be
indistinguishable
from
each
other,
and,
in
essence...
and
non-quantitative,
and,
in
essence,
half
of
the
data
at
the
quantitation
limit
would
be
non-quantitative
which
is
against...
is
counter
to
the
assumption
that
only
beta
percentage
of
the
analytical
data
at
the
quantitation
limit
would
be
non-quantitative.

Let's
go
back
there.
I
wanted
to
make
one
more
point
on
that
one,
too.
In
essence,
when
you
are...
these
quantitation
limits
or
alpha-beta
detection
limits,
you
are
assuming
that
(1 minus beta)
percent
of
the
analytical
data
at
those
concentrations
would
be
quantitative,
and
that,
of
course,
wouldn't
be
true
if
they
are
improperly
used
as
reporting
thresholds.

It
should
be
clear
that
small
alpha-small
beta
detection
limits
are
really
a
type
of
quantification
limit.
In
other
words,
when
the
MDL
or
a
similar
detection
limit
is
used
as
a
reporting
limit,
the
concentration
where
the
probability
of
detection
equals
(1 minus beta)
is
the
same
as
the
concentration
where
the
probability
of
numerical
analytical
results
being
available
is
(1 minus beta).

Alpha
detection
limits
such
as
MDL
may
be
used
as
reporting
limits
without
affecting
quantification
limits.
However,
if
quantification
limits
and
alpha-beta
detection
limits
are
used
as
reporting
limits,
beta
changes
from
1
percent
to
50
percent.

Now,
there
are
proper
uses
of
quantification
limits
and
alpha-beta
detection
limits,
although
they
are
not
appropriate
as
reporting
thresholds
which
they
might
be
improperly
used
as,
particularly
if
you
are
calling
them
a
detection
limit
estimate,
but,
really,
the
proper
use
of
them
is
as
a
lower
limit
for
data
users
to
consider
if
the
data
user
is
going
to
attempt
to
do
something
statistical
with
that
data
such
as
a
compliance
test,
a
compliance
evaluation.

You
certainly
want
to
have
a
high
probability
that
you
are
going
to
have
good
quantitative
data
to
work
with
under
those
circumstances.
So,
that
is
really
the
appropriate
application
of
quantification
and
alpha-beta
detection
limits,
is
as
a
limit
to
how
far
down
you
can
attempt
to
go
in
concentration
if
you
are
going
to
require
that
you
have
good
quantitative
data
to
work
with
when
you
start
trying
to
make
conclusions
from
that
data.

That
concludes
my
presentation.
I
knew
we
were
going
to
catch
up
a
good
deal
on
our
schedule,
so...
but
this
is
a
point
that
I
have
tried
to
make
a
number
of
other
times
and
have
sometimes
had
trouble
conveying
my
perception
of
things.

Are
there
any
questions?
QUESTION
AND
ANSWER
SESSION
MR.
TELLIARD:
We
know
you
are
out
there.
Questions?

MR.
COLEMAN:
Paul,
I
am
wondering
what
you
can
say
about
the
uncertainty
of
measurements
taken
at
or
just
above
the
quantitation
level
as
you
have
defined
it.

MR.
BRITTON:
Well,
the
definition,
as
I
defined
it,
really
is
different
from
the
definition
that
relative
standard
deviation
is
equal
to
10
percent.
Because
you
are,
essentially,
double
the
MDL,
I
would
expect
the
relative
standard
deviation
to
be
somewhere
in
the
20
percent
range,
between
20
and
25
percent,
something
like
that,
because
relative
standard
deviation
at
the
detection
limit,
also
the
way
I
defined
it,
you
would
expect
the
relative
standard
deviation
to
be
about
31
percent.
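For reference, the arithmetic behind the figure just quoted, assuming the seven-replicate MDL (MDL = 3.143 s) and roughly constant s over this range:

```latex
\mathrm{RSD}_{\mathrm{MDL}} = \frac{s}{\mathrm{MDL}} = \frac{1}{3.143} \approx 31.8\,\%
```

At twice the MDL the same arithmetic with constant s would give about 16 percent; the 20 to 25 percent quoted above presumably allows for the standard deviation increasing somewhat with concentration.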

This
is,
of
course,
the
MDL.
I
mean,
not
all
detection
limits...
although
most
of
the
alpha
detection
limits
that
have
been
defined
are
probably
extremely
similar.
The
difference
between
the
MDL
and
L
C
is
probably
insignificant,
and
I
certainly
agree
that
you
should
take
into
consideration
blanks
and
blank
correct,
if
that
is
necessary,
although
I
really
think
that
if
you
have
got
a
problem
with
non-zero
blanks,
you
probably
should
deal
with
that
as
a
quality
control
problem
or
calibration
problem
rather
than
just
let
it
go.

You
should
try
and
minimize
any
signal
that
you
are
getting
on
analysis
of
blank
samples
as
much
as
possible.
It
suggests
that
you
have
got
some
kind
of
contamination
in
your
analytical
system,
and
you
shouldn't
just,
you
know,
blindly
accept
that.
You
should
try
and
remove
it,
and
if
you
absolutely
can't,
then
you
would
be
to
the
point
where
perhaps
you
should
blank
correct.

MR.
FLORES:
Can
you
hear
me?

MR.
BRITTON:
Yes.

MR.
FLORES:
First
a
comment
and
then
a
question.
Some
of
those
double...

MR.
TELLIARD:
Identify
yourself,
please.

MR.
FLORES:
Ray
Flores,
Region
VI.

Some
of
those
double
intensity
distributions
you
pointed
up
there
have
some
very
Freudian
impressions
that
can
be
implied
from
those.

My
question
is
you
hear
that
if
you
report
a
result
at
the
MDL,
the
uncertainty
associated
with
that
result
is
plus
or
minus
50
percent.
Is
that
correct?

MR.
BRITTON:
No,
it
is
probably
more
than
that.
It
really
depends
on
what
you
mean
by
uncertainty.
You
will
notice
that
when
I
tried
to
depict
the
distribution
of
analytical
results
that
you
would
get
from
analysis
of
a
sample
at
the
MDL,
some
of
those
analytical
observations
would
theoretically
fall
below
zero.
So,
if
you
are
talking
about,
you
know,
let's
say
95
percent
of
that
analytical
data,
then
you
are
going
plus
or
minus
two
standard
deviations
which,
I
have
just
suggested,
is
about
31
percent
at
the
MDL,
and
that
would
suggest
that
95
percent
of
that
data
would
fall
within
maybe
plus
or
minus
62
percent,
a
little
more,
maybe
65
percent
of
the
average
value,
the
MDL.

And
if
you
went
further
out
on
the
tails,
then
you
would
even
be
further
out
in
a
plus
or
minus
interval
around
the
MDL.

MR.
FLORES:
Okay.

MR.
BRITTON:
Is
that
what
you
are
talking
about?

MR.
FLORES:
Yes,
it
is,
and
what
I
am
getting
at
is
if
you
report
a
result
at
the
MDL,
there
is
some
amount
of
uncertainty
associated
with
that,
but
if
you
report
a
result
at
a
quantitation
limit,
does
that
amount
of
uncertainty
decrease?

MR.
BRITTON:
Yes,
it
does.

MR.
FLORES:
At
what
point
do
we
get
to
about
a
5
percent
plus
or
minus
2.5
percent
error?
MR.
BRITTON:
Depends
on
the
analytical
system.
Some
analytical
systems
may
never
get
there.
Others
may
get
there
very
quickly
in
concentration.
So,
you
know,
it
is
not
something
that
is
uniform
over
every
analytical
system,
but
yes,
relative
standard
deviation
should
be
reducing
as
concentration
level
goes
up
as
a
percentage
of
the
concentration
but
increasing
in
the
sense
of
absolute
units,
measurement
units.

MR.
FLORES:
What
I
am
searching
for
is,
where
is
the
associated
error
about
5
percent
which
is
what
you
generally
see
in
a
laboratory?

MR.
BRITTON:
Well,
you
see
that
in
one
laboratory
looking
at
only
its
own
data.
You
will
see
a
somewhat
different
picture
if
you
look
at
multi-laboratory
data,
but
as
I
say,
even...
where
do
you
get
to
5
percent
in
an
individual
laboratory?
Probably
depends
on
which
analytical
system
you
are
looking
at
in
that
laboratory.

MR.
FLORES:
Thank
you.

MR.
BRITTON:
So,
I
don't
think
there
is
any
universal...
you
know,
it
is
not
going
to
be
50
times
the
standard
deviation
of...
you
know,
from
analysis
of
blanks
or
anything
universal
like
that.

MR.
FLORES:
No
easy
answer.

MR.
BRITTON:
No,
no
universal
easy
answer.
MR.
OSBORN:
Ken
Osborn.
I
am
from
East
Bay
MUD,
also
Standard
Methods.
And,
actually,
Paul,
if
you
wouldn't
mind,
this
is
a
question
for
Dr.
Currie,
but
you
could
answer
it,
too.

My
concept
of
zero
is
that
if
you
take
a
series
of
measurements
and
you
really
could
find
zero,
you
would
have
an
equal
distribution
on
either
side,
indicating
that
we
are
going
to
get
some
negative
measurements.

Now,
Lloyd
has
pointed
out
that
our
distributions
for
method
blanks
are
typically
asymmetrical,
and
I
have
seen
a
lot
of
these,
but
I
have
also
seen
instances
where
we
get
negative
measurements,
and
I
am
wondering
if
we
were
to
intend...
and
this
is
more
a
matter
of
curiosity,
and
this
is
why
I
would
like
your
insight,
Lloyd...
and
that
is
if
we
were
to
intentionally
put
in
a
positive
bias
up
front,
if
we
knew
exactly
what
it
was
or
real
close...
we'll
never
know
exactly,
but
if
we
were
really
close
on
it
and
then
we
took
a
series
of
measurements,
whether
we
would
uncover
a
hidden
symmetry
in the measurements of those method blanks.

Now,
I
do
have
some
personal
experience
with
this.
Years
and
years
ago
when
I
was
teaching
physics
in
high
school,
one
of
my
students
challenged
me
to
measure
the
weight
of
a
mouse
on
a
bathroom
scale.
I
figured
it
an
impossibility.

Well,
to
make
a
long
story
short,
we
did
come
up
with
a
measurement
after
spending
a
whole
period
bouncing
a
surrogate
mouse
off
a
scale,
but
we
had
to
adjust
the
scale
so
that
zero
was
not
on
the
value
zero.
We
moved
it
up
so
we
could
get
readings
on
either
side,
and
we
did.

Anyway,
the
question
is,
if
we
were
to
apply
a
positive
bias
to
our
measurements,
do
you
think
that
we
might
find
that
our
distributions
for
our
method
blanks
were
not
asymmetrical
but,
in
fact,
were
symmetrical?

MR.
BRITTON:
Well,
I
think
that
you
probably
would,
if
you
were
to
elevate
the
concentration,
if
you
will,
of
the
measurements,
find
out
that
you
are
going
to
get
a
lot
more
like
a
symmetric
distribution
of
the
data
around
that
point.

Now,
that
is
assuming
that
you
can
exclude...
can
recognize
and
exclude
outliers,
analytical
outliers
that
occur,
which
often
are
not
readily
recognized
and
excluded,
and
maybe
one
of
the
reasons
why
you
have
a
high
positive
bias
tail
in
data,
outlier
data
are
not
at
all
unusual
in
analytical
systems
as
a
general
practice,
and,
probably,
there
is
no
analytical
system
that
is
completely
free
of
generation
of
outlier
data.

I
do
think
that
a
lot
of
the
asymmetry
that
you
get
in
analysis
of
blanks,
for
instance,
results
from
the
fact
that
a
lot
of
analytical
systems
really
have
a
hard
time
giving
you
negative
responses.
So,
you
know,
that
tends
to
push
in
the
tail,
the
low
tail,
of
the
distribution
of
analytical
results,
and,
you
know,
also
contributes
to
the
asymmetry.

But
if
you
were
to
move
that
set
of
data
up
to
a
higher
concentration,
I
think
you
would
be
better
able
to
perceive
that
lower
tail
of
the
distribution.
It
wouldn't
be
distorted
by
the
fact
that
it
is,
you
know,
so
close
to
zero,
at
or
so
close
to
zero.
MR.
TELLIARD:
Lloyd?

DR.
CURRIE:
I
don't
know
that
I
should
speak
on
Paul's
time.

MR.
TELLIARD:
Go
ahead.

DR.
CURRIE:
Forgive
me
if
I
sit
so
I
am
close
to
the
microphone
this
way.

I
think
there
are
two
issues.
I
believe
I
heard
both
of
them.
One
Paul
was
just
mentioning,
and
I
think
it
was
perhaps
behind
your
idea
of
displacing
things
upward,
is
that
if
you
have
a
system,
if
you
will,
an
analytical
system
that
imposes
a
threshold
for
you
that
you
might
not
have
chosen,
you
know,
an
instrumental
discriminator...
I
have
seen
this
in
gas
chromatography,
for
example...
then
you
don't
have
the
benefit
of
getting
down
to
that
small
value.

The
blank
distributions
that
I
showed
you
that
were
asymmetric
were
not
subject
to
that.
There,
there
really
was
signal
measured
that
was
well
above
the
instrumental
discriminator.
So,
they
were
real.

So,
as
I
say,
there
are
the
two
cases.
Certainly,
if
you
have
a
cutoff,
you
are
going
to
lose
information,
and
I
strongly
advise
scientists,
metrologists,
if
you
will,
to
make
sure
that
their
instrument
has
adequate
sensitivity
to
see
the
variability
of
the
blank.
I
think
that
is
extremely
important.
But
modern
instruments
with
software
built
into
them
always
let
you
have
that
option.

I
will
make
one
other
comment.
If
you
are
not
constrained
by
what
the
instrument
will
let
you
do,
then
offsetting
with
an
added
number
is
not
going
to
change
the
symmetry
of
the
distribution.
It
will
change
its
location.

What
I
have
found
very
interesting
recently...
excuse
the
long
discussion...
in
the
case
of
the
asymmetric
blank
that
I
have
learned
to
recognize
recently
is
that
if
you
will
design
your
measurement
process
so
that
you
always
have
paired
measurements,
then
a
blank
minus
a
blank
is
going
to
be
symmetric,
and
that
is
extremely
helpful.

End
of
comment.

MR.
TELLIARD:
Thank
you.

MR.
KOORSE:
Steve
Koorse,
Hunton
&
Williams.

First,
a
quick
question.
I
think
what
I
heard
from
your
presentation,
Paul,
was
that
for
reporting
purposes,
perhaps
some
type
of
detection
level
should
be
the
threshold,
but
when
you
are
dealing
with
regulatory
compliance
determinations,
a
quantification
level
is
an
appropriate
basis
for
deciding
which
data
points
should
or
should
not
be
used
in
that
legal
determination.

From
my
experiences
dealing
with
permitting
proceedings
around
the
country,
in
the
year
2001,
I
am
quite
taken
aback
by
the
disparity
I
see
amongst
the
States
in
the
way
they
deal
with
data
in
the
compliance
determinations
in
their
regulations,
and
so
many
of
them
have
confused
the
term
reporting
with
the
use
of
the
data
that
are
reported
in
the
compliance
determinations.
My
concern
is
that
many
of
the
States
are
not
yet
familiar
with
EPA's
perspective
on
this.

EPA,
I
believe,
based
on
a
1994
guidance,
came
out
with
conclusions
pretty
much
along
the
lines
of
what
you
just
pronounced,
but
I
think
maybe
because
that
guidance
never
came
out
in
final...
it
was
a
draft
1994
guidance...
the
States
have
not
yet
gotten
the
message,
and
that
includes
some
of
the
EPA
Regions,
many
of
whom
are
represented
here.

I
guess
the
question
is
to
Bill.
Bill,
you
are
familiar
with
some
of
the
practices
in
the
States,
and
I
am
just
wondering
whether
the
Agency
is
contemplating
finalizing
the
1994
guidance
or
some
portion
of
it
to
clarify
under
what
circumstances
detection
levels
versus
quantification
levels
are
to
be
used
for,
you
know,
the
variety
of
determinations,
listing
decisions,
TMDL
decisions,
compliance
decisions.

MR.
TELLIARD:
I
agree
that
it
does
vary
all
over
the
place,
and
part
of
the
variation
started,
really,
with
the
dioxin
issue
where
you
had
a
water
quality
standard
so
low
that
no
one
could
measure
it,
and
the
Agency
recommended,
even
at
that
point,
that
you
use
the
level
of
quantitation
for
reporting
purposes
for
compliance.

A
number
of
the
States
said
well,
the
number
is
so
low,
if
we
get
a
burp,
a
hit,
then
we
got
it.
I
mean,
it
is
five
orders
of
magnitude.
Therefore,
we
ought
to
be
able
to
use
the
MDL
as
opposed
to
ML,
and
that
kind
of
got
it
rolling,
and
people
began
to
look
at
using it.

When
I
was
a
mere
child,
Bill
Budde
got
up
at
a
meeting...
I
was
only
six
at
the
time...
and
said,
at
the
same
time,
you
know,
the
MDL
was
not
intended
for
regulatory
purposes,
and
the
Agency,
Cincinnati
and
the
folks
there
and
our
office
has
always
said
don't,
because
of
the
variability
Paul
just
discussed
at
the
MDL,
it
is
not
something
that
we
recommend,
encourage,
suggest
that
you
use
for
compliance.

As
regards
the
'94
document,
that
is
a
permit
document,
and
I
don't
know
what
is
going
to
happen
to
it.
I
think
most
of
the
authors
have
retired
or
are
in
the
process
of
getting
out,
but
I
think
it
will
tie
up,
basically,
what
we
come
out
with
on
this
whole
detection
thing,
and
we
will
probably,
in
some
form
or
other,
tie
that
end
off.

MR.
BRITTON:
I
think,
Steve,
that
it
is
important
for
there
to
be
a
policy
on
what
should
be
used
as
a
reporting
limit
in
analytical
data
that
is
being
reported
for
compliance
purposes
and
what...
and,
of
course,
the
Agency
and
all
other
regulatory
agencies
have
to
use
something
like
quantification
limit
in
order
to
assure
that
if
they
are
going
to
do
something
like
compliance
testing
in
the
sense
that
it
is
done
now,
that
they
don't
try
and
do
that
below
the
quantitation
limit
where
it
is
reasonable
to
believe
that
you
will
have
the
quantitative
data
to
work
with
that
you
really
need.

There are a lot of really significant problems
in
trying
to
agree
how
a
less
than
value
should
be
interpreted
in
compliance
evaluation.
Ideally,
you
don't
want
to
be,
you
know,
in
that
kind
of
a
situation,
or,
at
least,
you
want
to
minimize
the
times
when
you
will
find
yourself
there.
And
that
is
why
I
talked
about
compliance
testing
of
individual
results.
I
think
you
don't
have
the
problem
if
your
compliance
testing
is
on
individual
results
that
are
at
or
above
a
quantitation
limit
regulatory
level.

Now,
you
do
have
a
problem
still
if
you
are
talking
about
30­
day
averages
or,
you
know,
7­
day
averages,
any
averaging
of
analytical
results
over
a
period
of
time.
Then,
you
can,
you
know,
reasonably
expect
to
see
less
than
values
on
a
much
higher
frequency
even
if
your
limit
is
at
the
quantitation
level
or
above,
because
you
are
dealing
with
a
situation
over
time,
and
if
the
concentration
is
not
consistent,
if
it
varies
over
time,
then
you
are
going
to
see
less
than
values
more
often,
and
that
can
present
a
real
problem
of
interpretation
of
those
values
in
a
compliance
testing
situation.

The
other
thing
I
would
like
to
say
is
that
I
think
that
there
is
one
exception
to
the
statement
that
you
should
only
do
compliance
testing
at
the
quantitation
level
or
above,
and
that
is
if
it
is
necessary
for
the
discharge
level
to
be
absolutely
zero.
If
it
is
something
that
is
extremely
noxious
and
it
must
be
maintained
at
absolutely
zero
in
the
discharge,
then
I
think
that
you
could
use
the
MDL
or
some
other
alpha
detection
limit
as
the
regulatory,
you
know,
threshold.

MR.
KOORSE:
Sure.
In
other
words,
when
you
are
dealing
with
a
limit
that
is
expressed
as
no
detectable
level,
you
would
use
a
detection
concept
as
opposed
to
the
limit
shall
not
exceed
10
ppb
where
you
need
to
be
able
to
quantify.

MR.
BRITTON:
Actually,
I
am
saying
if
the
health
effects
people
say
we
can't
tolerate
any
level
of
this
material
in
drinking
water,
then
you
could
consider
using
the
MDL
as
a
permit
limit,
because
it
has,
you
know,
a
very
proper
relationship
with
the
data
from
zero
concentration.

MR.
KOORSE:
Right.
We
are
saying
exactly
the
same
thing,
except
you
are
saying
it
as
a
scientist;
I
am
saying
it
as
a
lawyer,
because
the
way
it
would
be
expressed
in
regulatory
terms
would
never
be
absolutely
none.
It
would
be,
you
know,
below
detection.

MR.
BRITTON:
And
I
think
that
is
where
we
get
into
trouble,
because
if
you
are
willing
to
allow
any
level,
an
unmeasurable
level,
if
you
are
willing
to
consider
that
as
an
acceptable
possibility,
then
I
think
you
can't
use
the
MDL
as
a
reporting
limit...
or
as
a
compliance
limit.

MR.
TELLIARD:
Okay.
We
are
going
to
have
a
panel,
and
we
can
keep
this
going.
Right
now,
it
is
break
time.

Would
Bob
Wyeth
come
down
to
the
podium
at
the
break?

Get
your
coffee...
we
have
got
a
15­
minute
break...
and
come
on
back
in,
and
we
will
continue
on.

(
WHEREUPON,
a
brief
recess
was
taken.)
PRIORITIES
AND
LIMITS
­
WORK
DONE
IN
THE
ASTM
COMMITTEE
ON
WATER
(
D­
19)

MR.
TELLIARD:
We
would
like
to
get
started,
please.

Our
next
speaker
this
morning
is
David
Coleman.
Dave
is
going
to
be
talking
about
the
work
done
in
the
ASTM
D­
19
Committee
and
their
efforts
to
look
at
the
issues
of
quantification
and
detection.

David
is
currently
senior
technical
specialist
with
the
Alcoa
Technical
Center.

MR.
COLEMAN:
Good
morning.
Thank
you,
Bill,
for
inviting
me
and
for
the
introduction.
I
am
going
to
hope
I
do
not
have
technical
difficulties.

This
morning,
I
want
to
talk
primarily
about
two
standards
that
have
been
developed
by
ASTM,
but
we
are
not
going
to
focus
on
the
technical
details
of
those
standards.
We
are
going
to
talk
about
the
approach
we
took
and
what
we
considered
to
be
the
issues
that
are
associated
with
developing
detection
limits
and
quantification
limits.

But
for
reference,
the
two
standards
that
we
have
been
involved
in
the
past
ten
years
developing
in
the
Committee
D­
19
are
listed
here.
I
have
also
added
my
email
address
which
I
realized
I
had
omitted
and
my
phone
number
if
you
would
like
to
discuss
these
further
with
me.
I
would
be
happy
to
talk
to
any
of
you
about
them.

The
first
one
listed
here
is
the
Interlaboratory
Detection
Estimate
or
IDE,
and
I
flinch
a
little
bit
at
introducing
another
acronym
to
your
alphabet
soup,
but
we
can't
call
it
the
whole
thing
all
the
time.
And
then,
the
second
one
is
Interlaboratory
Quantitation
Estimate.

Both
of
these
standard
practices
are
available
if
you
contact
ASTM.
They
are
in
publication.

You
saw
my
abstract
perhaps
in
your
notebook,
so
many
limits,
so
little
time,
just
understanding
all
the
limits,
even
today,
we
have
had
some
different
perspectives
on
what
it
is.
It
is
difficult
to
keep
them
all
straight.

What
I
want
to
do
is
present
the
objectives,
the
priorities,
and
the
process
of
the
development
of
detection
and
quantitation
limits
by
ASTM,
and
this
really
is
a
multi­
year
effort
that
we
have
embarked
on,
and
I
want
to
publicly
acknowledge
Nancy
Graham,
the
Task
Group
Chairperson
and
also
the
work
of
people
like
Paul
Britton,
very
active
in
the
task
group
on
developing
these
limits.
There
are
a
lot
of
people
contributing
a
lot
of
effort.
I
can't
name
them
all
right
now.

These
standard
practices
include
many
of
the
features
as
well,
that
we
would
like
EPA
to
consider.
Bill
has
asked
for
input,
and
in
the
process
of
developing
these
limits,
we
developed
some
perspectives.
We
have
identified
issues,
and
we
would
like
EPA
to
consider
in
their
reevaluation
process.

The
most
important
parts
of
what
I
will
be
speaking
about
today
are
the
objectives
for
limits.
People
don't
like
it
when
I
ask
this
question,
but
it
is
a
fundamental
question
in
my
mind:
What
is
it
that
we
want
to
be
able
to
say
about
measurements
below
the
limits?
That
they
are
not
worth
looking
at?
That
they
should
be
set
to
zero?
That
the
noise
level
is
such
and
such?

Then,
similarly,
above
the
limits,
what
is
it
we
want
to
be
able
to
say
about
any
measurement
that
we
get
above
the
detection
limits?
A
certainty
that
it
is
there,
that
the
analyte
is
there
above
the
quantification
limit?
That
the
plus
or
minus
error
is
within
a
certain
percent
or
the
uncertainty
is
within
a
certain
percent?

Because
the
answers
to
these
questions
will
guide
the
development,
from
a
chemical
and
a
statistical
perspective,
of
what
kind
of
limits
you
should
use.
The
development
of
those
limits
depends
on
the
answers
to
these
questions.
So,
what
are
your
objectives?

Secondly,
recommended
criteria
for
limits.
We
want
to
list
these
in
priority
order
with
an
invitation
for
all
stakeholders
to
contribute
to
what
that
list
should
look
like.
We
did
that
within
ASTM.
I
continue
to
get
feedback
from
environmental
engineers
and
chemists
about
important
issues.

Dr.
Currie
brought
up
the
issue
of
blanks,
and
blanks
are
very
convenient
to
ignore.
I
think,
as
he
said,
they
have
been
ignored
too
long.
So,
what
are
the
important
criteria?

Here
is
the
outline
of
my
talk:
the
objective
of
the
task
group
in
ASTM;
the
context
that
we
are
working
in;
how
ASTM
works
just
in
broad
brush,
open
consensus
process;
and
then
the
priorities
for
detection
and
quantitation;
finally,
some
recommendations
and
impact.

First
of
all,
the
objective
of
developing
detection
and
quantitation
limits
to
fully
characterize
the
low
end
of
published
methods,
and
by
that,
I
mean
analytical
methods,
used
by
qualified
labs,
not
fly­
by­
night
garage
shops,
but
qualified
labs
across
the
United
States...
this
could
be
done
internationally...
by
computing
limits
that
provide
reliable
detection
and
reliable
quantitation.
It
is
easy
to
put
those
words
down,
but
what
do
we
mean
by
them?

The
practice
should
specify
the
needed
interlaboratory
data
set
and
interlaboratory
data
analysis
to
generate
a
detection
limit
and
a
quantitation
limit
which,
quote,
any
qualified
lab
should
be
able
to detect
without
many
false
positives...
so,
right
there
is
one
of
the
criteria,
without
many
false
positives.
There
are
other
criteria,
too...
or
quantitate
at
a
specified
precision.
There
is
another
criterion,
not
the
only
one.

What
is
the
context?
What
motivated
us
ten
years
ago
to
embark
on
this
effort
in
ASTM?
I
think
the
statement
that
could
be
said
then
is
true
now,
that
there
are
no
current,
widely­
used
limits
that
can
be
used
to
objectively
evaluate
the
detection
or
quantitation
of
a
method
as
used
by
the
lab
community.
So,
when
you
go
out
there
or
many
of
you
are
managers
or
workers
within
a
lab,
what
can
you
say
about
your
lab,
the
lab
down
the
street,
the
research
lab
at
the
university,
what
can
you
say
collectively
in
terms
of
detection
and
quantitation?
No
current,
widely­
used
limits
that
can
be
used
to
objectively
evaluate
the
detection
or
quantitation
and
are
based
on
routine
measurements,
not
special
measurements,
not
special
study
measurements
and
ultrapure
water,
but
routine
measurements
by
qualified
labs
and
which
use
valid
statistics,
including
intervals.
So,
this
is
the
assertion.

Instead,
the
tendency...
and
there
are
different
variations
of
this...
the
tendency
is
to
develop
a
limit
using
one
analyst
on
one
instrument
on
one
day
measuring
sample
splits
and
then
use
an
expedient
shortcut,
3σ, 3.14σ, or 10σ.

The
consequence
of
this
is
that
we
don't
have
any
widely­
used
detection
limit
or
quantitation
limit
about
which
we
can
make
statements,
where
you
can
fill
in
the
blanks,
with
such
and
such
confidence,
the
analyte
will
be
detected
at
the
detection
limit,
and
blanks
will
be
measured
with
a
such
and
such
false
positive
rate.
This
would
be,
for
example,
the
alpha­
beta
type
limit
that
Paul
Britton
was
talking
about,
but
there
are
no
such
limits,
I
contend,
that
are
widely
used.

And
then,
for
quantitation,
at
or
above
this
quantitation
limit,
the
relative
standard
deviation
right
at
that
concentration
will
be
approximately...
fill
in
the
blank
again...
10
percent
or
less.
One
of
the
questioners
asked
about
2.5
percent
or
plus
or
minus
2.5
percent.
That
is
very
hard
to
get
down
to,
but,
as
we
will
come
to
in
a
little
while,
the
interlaboratory
quantitation
estimate
does
give
you
a
way
to
see
if
your
method
will
get
down
that
low.

We
have
talked
about
objective
and
context.
Let
me
talk
a
little
bit,
just
a
little
bit,
about
the
ASTM
open
consensus
process.

First
of
all,
we
wanted
to
set
priorities
in
our
subgroup,
our
task
group.
We
decided
we
would
do
interlaboratory
before
within
lab,
because
there
was
a
higher
priority.
People
were
more
interested
in
interlaboratory.
What
can
you
say
about
a
population
of
labs,
was
the
question
that
was
floating
around.

We
decided
we
would
do
detection
before
quantitation
even
though
quantitation
was
the
bigger
goal,
and
we
wanted
to
look,
for
detection
anyway,
at
methods
that
were
not
dominated
by
calibration
error.
The
Hubaux­
Voss
paper
which,
I
believe,
Dr.
Currie
referenced
does
address
calibration
error,
but
we
wanted
to
look
at
methods
that
were
not
dominated
by
calibration
error.

Secondly,
we
wanted
to
develop
a
laundry
list
of
properties...
a
property
would
be,
for
example,
a
1
percent
false
positive
rate...
performance
standards
in
terms
of
our
being
able
to
characterize
laboratories
and
say
that
this
applies
to
a
population
of
laboratories,
and
implementation
strategies,
how
do
we
get
the
data
and
do
the
analysis
so
that
it
will
be
widely
used,
so
that
it
will
be
accepted.

So,
we
developed
a
laundry
list.
It
had...
I
am
not
sure...
maybe
20
items
on
it.
Then,
we
had
a
group
consensus
process,
which
was
very
painstaking
and
involved
a
lot
of
arguing
and
discussion.
We
ended
up
using
a
multi­
voting
technique
which
worked
quite
well.
We
put
up
the
priorities
or
the
criteria,
and
then
everybody
got
20
votes,
and
they
could
vote
multiple
votes
on
any
given
priority,
and
that
provided
the
design
requirements
for
the
detection
and
the
quantitation
limits.
For
example,
a
requirement
that
the
sigma,
the
standard
deviation,
as
we
have
seen
earlier
this
morning,
depends
on
the
concentration;
the
requirement
that
there
be
a
low
false
positive
rate.

The
results
of
all
this
were
the
interlaboratory
detection
estimate
and
quantitation
estimate.

What
are
the
criteria
that
we
looked
at?
The
key
criteria,
in
rough
priority,
are
as
follows:
should
be
interlaboratory,
characterizing
a
population
of
qualified
labs,
as
I
said
earlier.
Not
within­
lab.
It
is
not
that
we
don't
think
within­
lab
QC
use
of
a
limit
is
important,
but
we
wanted
to
focus
first
on
interlab.

We
thought
it
should
be
based
on
a
multi­
level
precision
and
bias
study
to
capture
the
typical
changes
in
the
standard
deviation
as
you
increase
concentration
of
your
analyte
as
opposed
to
a
single
concentration
study,
even
one
that
is
validated
where
you
are
not
sure
how
that
standard
deviation
changes
over
the
range
of
concentrations
you
are
interested
in.

We
decided
that
the
limits
should
be
based
on
a
statistically
valid
interval
for
prediction
as
opposed
to
an
expedient
shortcut.
And,
Bill,
you
mentioned
there
are
prediction
intervals...
there
are
confidence
intervals
which
we
don't
think
are
appropriate,
but
there
are
also
prediction
intervals,
and
much
less
known
is
a
statistical
tolerance
interval.
Both
prediction
and
statistical
tolerance
are
directly
applicable
to
detection.
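[As an illustration of the distinction being drawn here, a minimal sketch of a prediction-interval calculation follows; the data and the 99 percent level are assumptions for illustration, not part of the ASTM practices.]

```python
import statistics
from scipy.stats import t

# Sketch: a one-sided 99% prediction limit for a single future blank measurement,
# one of the "statistically valid intervals" mentioned above. A tolerance interval
# would add a further factor to cover a stated fraction of future measurements.
blanks = [0.02, -0.01, 0.04, 0.00, 0.03, 0.01, 0.02]   # made-up blank results
n = len(blanks)
mean_b = statistics.mean(blanks)
s_b = statistics.stdev(blanks)
prediction_limit = mean_b + t.ppf(0.99, n - 1) * s_b * (1 + 1 / n) ** 0.5
print(f"99% one-sided prediction limit for the next blank: {prediction_limit:.3f}")
```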

We
thought
it
was
important
that
we
could
ensure
a
low
probability
of
false
positives
and
a
high
confidence
of
detection
at
the
detection
limit,
not
just
the
50
percent
which
was
brought
up
earlier
today.
That
is
for
detection.

For
quantitation,
we
wanted
to
be
at
a
level...
establish
the
quantitation
limit
at
a
level
where
you
could
ensure
relative
standard
deviation
of
10
percent.
Why
10
percent?
That
is
a
somewhat
arbitrary
but
commonly
accepted
level.
For
organics
or
other
analytes,
you
may
have
difficulty
achieving
10
percent.
Some,
you
can
go
way
below
10
percent,
but
you
might
have
to
compromise
to
20
or
even
30
percent
at
the
quantitation
limit.
We
do
not
want
to
extrapolate
based
on
lower
concentrations.
We
want
to
model
the
behavior
of
the
response
and
the
standard
deviation
over
a
wide
enough
range
so
we
are
capturing
the
detection
limit
and
the
quantitation
limit
and
we
know
what
measurement
uncertainty
looks
like
in
that
range.

Also,
limits
should
reflect
routine,
randomized
measurement
by
qualified
labs
on
different
days,
different
analysts
using
different
instruments,
not
just
a
one­
shot
study.

And,
finally,
we
wanted
the
limits
to
reflect
the
commonly
different
precision
and
bias
of
different
matrices.
As
most
of
you
know
better
than
I
do,
when
you
change
matrices,
you
cannot
assume
equivalence
unless
you
are
knowledgeable
about
how
those
matrices
are
similar
or
different.
We
did
not
want
to
assume
that
all
matrices
behave
like
reagent­
grade
water.

That
ties
directly
into
the
issue
that
Dr.
Currie
brought
up
about
how
do
we
handle
blanks.
What
is
the
impact
of
a
blank
of
a
different
matrix?
It
changes
bias,
certainly,
and
sometimes
precision.

The
IDE
and
the
IQE
satisfy
these
criteria,
and
in
my
opinion,
any
limits
used
by
labs
and
by
regulatory
agencies
in
an
interlaboratory
context
should
also
satisfy
these
criteria.

And
I
will
throw
the
question
on
the
table,
not
expecting
an
answer
right
now
but
perhaps
privately
from
some
of
you
or
during
the
session
afterwards,
the
panel
discussion,
what
other
criteria
should
be
on
our
list
and
at
what
priority?

We
have
talked
about
the
objective
of
the
ASTM
task
group,
the
context,
the
process
used
in
ASTM,
and
priorities
for
detection
and
quantitation.
Lastly,
I
want
to
discuss
recommendations
and
impact.

The
first
one
isn't
really
either
of
those
but
just
to
reiterate
that
ASTM
publishes
standards
that
produce
these
limits,
and
we
believe
they
are
characterizing
the
population
of
qualified
labs
out
there
if
a
study
has
been
done
according
to
the
IDE
or
IQE
or
both.
You
can
combine
them
in
a
precision
and
bias
study.

So,
these
are
a
good
way
to
quantify
low
concentration
performance
of
methods.
You
have
to
be
sure
you
go
low
enough
in
concentration.
In
your
P&B
study,
you
may
choose
concentrations
that
represent
your
working
range.
You
have
to
make
sure
you
go
low
enough
so
you
can
characterize
quantitation
and
detection.

Item
3,
I
apologize
for
a
typo
for
those
of
you
following
along
in
the
notebook.
There
are
two
pieces
of
software
that
are
nearly
ready.
They
have
been
nearly
ready
for
a
while,
but
they
are
nearly
ready
to
compute
IDE
and
IQE:
QCalc...
and
here
is
where
the
typo
is...
should
read
from
EPRI
or
from
ASTM.
We
are
hoping
that
will
be
the
case...
is
a
stand­
alone
piece
of
software
that
generates
also
an
Excel
file
so
you
can
look
at
some
nice
plots.

And
then,
after
finishing
my
long
period
of
procrastination,
I
finally
set
to
doing
an
Excel
template
which
I
have
on
this
machine
and
I
am
happy
to
send
to
any
of
you
as
long
as
there
are
no
strings
attached.
I
just
ask
you
to
be
beta
testers.
This
is
an
Excel
template
that
you
paste
two
columns
of
data
in,
then
you
go
through
a
couple
more
operations,
and
it
computes
IDE
and
IQE,
MDL,
and
ML
and
produces
summary
tables
you
can
take
a
look
at.
It
is
all
within
Excel.
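[For readers who want to see the simpler half of such a calculation, here is a minimal sketch with assumed data; it covers only the MDL/ML portion and is not the speaker's template. The IDE and IQE require the full interlaboratory procedures in the ASTM practices and are not reproduced here.]

```python
import statistics
from scipy.stats import t

# Hypothetical sketch: EPA-style MDL and ML from one column of low-level spike replicates.
replicates = [0.52, 0.61, 0.47, 0.55, 0.58, 0.49, 0.60]   # made-up results at one spike level
n = len(replicates)
s = statistics.stdev(replicates)
mdl = t.ppf(0.99, n - 1) * s     # one-sided 99% Student's t times the standard deviation
ml = 3.18 * mdl                  # EPA minimum level, roughly 10 times s when n = 7
print(f"s = {s:.3f}   MDL = {mdl:.3f}   ML = {ml:.3f}")
```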

After
you
do
simple
data
preparation
to
have
two
columns
of
data,
each
of
these
packages
takes
about
half
an
hour
to
do,
to
carry
out.
My
principal
beta
tester
who
is
probably
biased,
John
Phillips
at
Ford,
has
exercised
these
packages
more
than
anyone
I
know.
He
tried
to
insist
with
me
that
it
takes
five
minutes
to
do,
but
I
said
John,
that
is
you.
You
know
these
packages
inside
and
out.
So,
I
will
claim
half
an
hour,
not
five
minutes.
IDEs
and
IQEs
computed
to
date
enjoy
the...
this
sounds
a
little
corny...
but
priority
properties,
that
is,
the
properties
that
we
decided
were
really
of
highest
priority,
and
we
claim
they
make
sense.
If
you
look
at
plots
of
your
data
and
try
to
interpret
what
is
the
IDE
telling
us,
what
is
the
IQE
telling
us,
does it make sense when we look at response versus spike concentration? Does it make sense when we look at that ski slope plot of percent RSD versus concentration? And I contend it does make sense.

They
also
respect
chemistry
and
statistics
together.

When
sending
out
blank
samples
to
any
qualified
labs,
we
should
expect
that
most
labs
would
have
a
low
rate
of
false
positives,
that
is,
detects,
at
the
IDE.
And
when
sending
out
spiked
samples
to
any
qualified
labs,
we
should
expect
that
most
labs
could
reliably
detect
the
concentration
at
the
IDE
and
could
report
a
measurement
value
with,
I'll
say,
known...
it
says
known
here
but
it
is
approximated...
limited
RSD
of
about
10
percent
or
20
percent,
depending
on
which
IQE
you
want
to
generate...
and
above.

And
my
question
to
you,
isn't
that
what
we
need?
Isn't
this
what
we
need
for
a
detection
limit
and
a
quantitation
limit?
But
I
really
do
welcome
your
input
either
at
the
end
of
this
session
or
by
email
or
phone
at
a
later
time.

Thank
you.

QUESTION
AND
ANSWER
SESSION
MR.
TELLIARD:
Any
questions
now?

MR.
HAWORTH:
I
am
Garry
Haworth,
the
New
Hampshire
Department
of
Environmental
Services.

One
question.
Your
quantitation
level
at
the
RSD
of
10
or
20
percent
looks
an
awful
lot
like
what
the
certification
officers
drive
into
our
head.
I
am
curious
if
you
or
even
Paul
are
going
to
present
something
like
what
you
have
done
today
or
what
Paul
did
today
at
the
NELAC
thing
in
a
couple
of
weeks.
Maybe
you
can
influence
things
one
way
or
another.

MR.
COLEMAN:
I
have
no
such
plans,
although
I
thank
you
for
your
suggestion.
I
believe
there
is
going
to
be
at
least
one
gentleman
who
is
familiar
with
IDE
and
IQE
and
has
been
very
active
on
the
task
force.
I
hope
I
am
not
presuming...
who
will
be
at
NELAC.
Larry,
will
you
be
there?

LARRY:
Yes.

MR.
COLEMAN:
Okay.
I
don't
know
if
he
is
prepared
to
speak.
I
can't
speak
for
him,
but
you
might...
the
two
of
you
might
talk,
but
thanks
for
the
suggestion.

MR.
WYETH:
Bob
Wyeth,
Severn
Trent
Laboratories.
David,
a
question.
As
you
go
through
your
slides,
there
is
this
one
statement
you
make
that
I
wondered
if
you
could
define
for
us
a
little
bit,
as
no
doubt,
many
of
us
may
elect
to
try
to
get
your
software
and
see
what
we
get
from
the
IDE
and
IQE,
and
that
statement
is
methods
not
dominated
by
calibration
error.
What
does
that
mean?
What,
specifically,
sort
of
environmental
tests
that
we
routinely
do
are
you
speaking
about
in
that
regard,
and
can
you
give
us
any
more
guidance
here?

MR.
COLEMAN:
That
would
take
a
while.
It
is
a
valid
question,
and
I
have
been
asked
that
question.
The
intent
was
to
shut
out
those
measurements
where
calibration
error
was
really
the
dominant
error,
and
if
you
look
at
the
Hubaux­
Voss
paper,
that
is
what
they
focused
on.
The
methods
that
we
were
interested
in
or
primarily
interested
in
on
the
ASTM
task
group
would
include,
I
believe,
the
majority
of
analytical
methods
used
in
environmental
sampling,
GCMS,
certainly,
ICP
data,
and
a
number
of
other
methods
where
calibration
error
is
minimal
compared
to
sample
prep
error
and
other
sample­
related
issues.

So,
we
could
get
into
some
more
detail
at
a
later
time.
I
realize
that
it
is
ambiguous
when
you
say
negligible
calibration
error,
and
that
is
an
issue.

MR.
WHITE:
I
am
Chuck
White.
I
am
a
statistician
with
EPA's
Office
of
Water.

I
have
got...
well,
first
of
all,
I
would
like
to
thank
you
for
doing
your
presentation,
presenting
your
ideas.
I
thought
that
was
very
helpful,
but
I
do
have
a
couple
of
comments.

One
is
on
your
slide
where
you
talk
about
valid
statistics.
You
list
three
numbers.
I
think
you
know
where
those
numbers
come
from.
In
the
first
case,
the
3
is
a
standard
multiplier
for
99
percent
prediction
interval.
Is
there
a
problem
with
that?

In
the
second
case,
the
3.14,
I
think
you
are
alluding
to
EPA's
ML,
and
if
so,
that
is
not
a
correct
representation
of
what
it
is
that
we
are
doing.
We
are
multiplying
3.14
times
the
MDL
which
has
been
multiplied
by
3.18
which
gives
you
10.
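[One way to lay out the arithmetic being described, on the reading that the usual seven-replicate MDL procedure is assumed, where the Student's t factor at 99 percent with six degrees of freedom is about 3.14, and EPA's published minimum level is 3.18 times the MDL:]

$$\mathrm{MDL} = t_{(6,\;0.99)}\, s \approx 3.14\, s, \qquad \mathrm{ML} = 3.18 \times \mathrm{MDL} \approx 10\, s$$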

So,
in
the
second
case
and
the
third
case,
they
are
really
the
same
thing.

And
where
did
10
come
from?
Well,
that
is
a
good
question,
but
it
is
also
a
good
question
why
you
would
want
10
percent
RSD.
These
are
arbitrary
criteria
for
quality
that
different
people
have
come
up
with.

My
second
comment
is
I
understand
you
have
looked
at
data,
and
I
have
looked
at
data,
and
in
the
IQE
process,
I
submitted
a
report
to
ASTM
showing
my
view
of
how
well
the
IQE
is
describing
the
data
that
were
available
to
us.
I
think
there
are
some
challenges
there.
I
won't
debate
the
issue
here,
and
it
really
comes
down
to
in
the
field
when
you
are
working
with
a
particular
lab,
does
it
make
sense
in
that
case,
but
I
think
that
there
is
still
some
work
to
be
done
about
making
sure
that
these
procedures
are
giving
us
the
summary
statistics
that
represent
the
words,
in
other
words,
whether
the
summary
statistics
are
doing
what
it
is
that
you
set
out
to
accomplish.
MR.
COLEMAN:
Okay.
To
address
those
two
points,
the
first
one,
my
point
really
was
that
if
we
get
a
standard
deviation
at
a
single
concentration
and
multiply
it
by
a
scalar...
I
wasn't
really
questioning
the
origin,
but
if
you
just
multiply
it
by
a
scalar,
when
you
go
up
to
that
higher
concentration,
you
may
or
may
not
actually
have
characterized
the
uncertainty
at
that
higher
level.
So,
that
is
what
I
was
getting
at
there.

It
is
very
simple
to
compute
a
single
sample
standard
deviation
and
multiply
it
by
a
number.
That
is
a
very
direct
and
simple
approach.
Anyone
can
do
it,
but
it
doesn't
necessarily
tell
you
everything
you
need
to
know
about
the
method
at
the
low
end.
So,
that
is
the
point
I
was
trying
to
make
there.

And
I
do
want
to
commend
you,
Chuck,
on
raising
a
lot
of
issues
in
your
report
which
are
worthy
of
further
discussion.

MR.
TELLIARD:
Thank
you,
David.
AOAC'S
APPROACH
TO
DETECTION
AND
QUANTITATION
MR.
TELLIARD:
Our
next
speaker...
and
we're
trying
to
catch
up
here
a
little
bit
on
time,
so
we
have
a
panel
so
we
can
really
kick
it
around...
is
Wendy
Campbell.
Wendy
is
going
to
talk
about
AOAC's
approach
to
detection
and
quantitation.

Wendy
is
currently
a
program
coordinator
for
the
Official
Methods
Program
of
AOAC
International,
and
she
is
now
presently
being
wired.

MS.
CAMPBELL:
Is
that
working?
Okay.
Thank
you
for
that
introduction.
I'll
try
not
to
buzz
you
guys
too
much.
If
you'll
give
me
a
second,
I
want
to
make
sure
I
have
the
slides
in
the
right
orientation.
That
is
the
first.
Okay.

My
name
is
Wendy
Campbell,
and
I
am
the
program
coordinator
for
the
Official
Methods
Program.
I
believe
most
of
you
are
probably
aware
of
AOAC's
Official
Methods
Program.

Basically,
what
I
want
to
address
is
what
AOAC
does
to
validate
methods
and
how
this
applies
to
limits
of
detection
and
quantification.
Basically,
AOAC
validates
methods
via
the
collaborative
study,
and
we
do
chemistry
and
microbiology
methods.
AOAC
has
been
doing
this
for
over
100
years.

We
recently
have,
through
various
task
groups,
decided
to
report
the
results
of
the
interlaboratory
studies
in
order
to
support
acceptance
of
a
method,
and
included
in
the
method,
based
on
the
results
of
the
interlaboratory
study,
is
an
applicability
statement.
Basically,
in
a
nutshell,
the
applicability
statement
says
the
functional
levels
of
an
analyte
in
a
matrix,
basically,
the
working
range
of
the
method.

Most
chemistry
methods
don't
require
a
pre­
collaborative
study,
but
for
the
microbiologists
out
there,
AOAC
also
requires
a
pre­
collaborative
study
before
a
collaborative
study
is
performed
that
addresses
applicability
of
the
method
based
on
inclusivity
and
exclusivity.

Note
that
a
method,
when
it
is
approved
as
a
first
action
through
AOAC,
is
open
to
a
2­
year
comment
period,
and
analysts,
in
that
2­
year
period,
have
an
opportunity
to
send
us
comments
as
to
how
that
method
is
performing
in
their
laboratory.
I
think
that
is
very
important.

Basically,
the
purpose
of
a
collaborative
study
is
to
determine
estimates
of
the
attributes
of
the
method,
particularly
the
precision,
that
can
be
expected,
reliably
expected
when
a
method
is
used
in
actual
hands­
on
applications.
AOAC
uses
two
terms
to
define
this,
reproducibility
and
repeatability.

The
initial
collaborative
study,
just
to
give
you
an
idea
of
what
we
do,
is
based
on,
for
quantitative...
there
is
a
minimum
number
of
materials,
five.
There
are
exceptions.
If
it
is
a
single
analyte
for
a
single
matrix,
that
can
be
changed,
or
if
the
instrumentation
is
prohibitively
expensive.
There
is
a
minimum
number
of
laboratories.
We
recommend
between
10
and
15,
but
8
laboratories
have
to
report
valid
data.
And
a
minimum
number
of
replicates,
ordinarily
nine
replicates
or
split
levels.

Basically,
the
study
design
of
a
collaborative
study
varies,
depending
on
the
purpose
of
the
method
being
studied.
AOAC
doesn't
believe,
really,
in
a
one­
size­
fits­
all
needs.

The
collaborative
studies
are
used
to
provide
data
to
calculate
repeatability,
reproducibility,
relative
standard
deviation,
HORRAT
which
I'll
explain
later,
and
percent
recovery.
In
fact,
we
have
a
stand­
alone
diskette
that
will
actually
go
through
and
calculate
this
for
you
if
you
put
your
data
in
it.

If
the
LOQ
and
LOD
are
important,
limit
of
detection
and
quantification...
it
is
important
to
note
that
it
isn't
always.
If
it
is
a
non­
regulatory
method,
the
limits
of
detection
are
not...
they
are
important,
but
they
are
not
considered
a
critical,
but
the
study
design
of
a
method
where
LOD
or
LOQ
are
important
should
pay
special
attention
to
the
number
of
blanks
and
also
to
the
necessity
for
interpreting
false
positives
and
false
negatives.

I
want
to
put
up
an
example
of
an
interlaboratory
study
result
table
just
to
give
you
an
idea
of
what...
this
is
actually
what
is
reported
in
the
method.
And
just
a
quick
note.
How
this
relates
to
LOD
and
LOQ
is
that
the
LOD
and
LOQ
aren't
directly
addressed,
but
you
use
these
parameters
to
determine
how
reliably
this
is
going
to
work
in
your
laboratory
and
what
we
observed
during
the
interlaboratory
study.

The
interlaboratory
study
result
table
reports
several
parameters
for
a
collaborative
study.
It
reports
the
matrix.
Collaborative
studies
require
more
than
one
matrix
and
different
levels
of
the
analyte.
It
also
includes
the
number
of
laboratories
and
reports
the
outliers.
Reproducibility
and
repeatability
are
also
addressed
and
HORRAT.

The
matrix
should
be
representative
of
commodities
that
are
usually
analyzed,
and
I
realize
this
is
a
pollution
study,
but
this
happened
to
be
the
one
we
had
on
diskette.
A
blank
control
is
only
considered
as
a
material
if
it
is
used
to
determine
the
statistical
level
of
measurement
for
trace
analysis
which
is
near
the
limit
of
quantification.

The
mean
is
the
found
average
value
of
the
analyte
in
the
matrices
studied.
It
is
included
in
the
applicability
statement
of
the
method
and
is
used
to
estimate
the
measurement
range,
the
working
range,
of
a
method
for
that
analyte.

The
reproducibility
which
is
actually
s_R²
is
a
composite
measurement
of
the
variation
which
includes
both
between
laboratory
and
within
laboratory
variation.
It
basically
measures
how
well
an
analyst
in
one
laboratory
can
check
the
results
of
another
analyst
in
a
different
laboratory
using
the
same
test
material
and
method
under
different
conditions.

Repeatability
is
the
variation
between
replicate
determinations
by
the
same
analyst,
and
it
measures
how
well
an
analyst
can
check
himself
using
the
same
method
on
blind
replicates
of
the
same
material
or
split
levels
under
the
same
conditions.
RSD
r,
relative
standard
deviation,
is
reported
both
for
repeatability
and
reproducibility.
The
RSD
values
are
usually
independent
of
concentration
and
tend
to
be
the
most
useful
measures
of
precision
in
chemical
analytical
work.
They
can
also
indicate
the
limit
of
reliable
measurement
of
a
method.

I
want
to
pull
this
off,
because
one
thing
that
seems
to
be
of
great
interest
to
people
when
they
talk
about
limits
of
detection
or
quantification
for
AOAC
methods
is
the
value
we
use
called
the
HORRAT
value.
HORRAT
is
something
that...
is
a
value,
a
ratio,
that
Dr.
Horwitz
developed,
and
it
is
basically
the
ratio
of
the
reproducibility
relative
standard
deviation
to
the
predicted
relative
standard
deviation
calculated
using
the
Horwitz
equation,
and
I
put
the
equation
up
there.
It
is
used
as
a
guide
to
determine
the
precision
of
a
method.
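[A minimal sketch of the HORRAT ratio as described here. The Horwitz equation itself was on the slide rather than in the transcript; the commonly cited form is used below, with the concentration expressed as a dimensionless mass fraction (1 ppm = 1e-6). The example numbers are made up.]

```python
import math

def horwitz_prsd(mass_fraction):
    """Predicted reproducibility RSD (%) from the commonly cited Horwitz equation."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_r_percent, mass_fraction):
    """Ratio of observed reproducibility RSD to the Horwitz-predicted RSD."""
    return observed_rsd_r_percent / horwitz_prsd(mass_fraction)

# Example: an observed reproducibility RSD of 20% at 1 ppm.
print(round(horrat(20.0, 1e-6), 2))   # ~1.25, inside the 0.5-2 guideline band
```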

The
HORRAT
guidelines
that
are
used
to
determine
whether
or
not
a
method
is
acceptable...
it
is
only
one
parameter.
If
the
method
is
between
0.5
and
2,
method
reproducibility
is
normally
what
you
would
expect.
If
the
HORRAT
is
less
than
0.5,
it
might
be
in
question.
It
indicates
a
lack
of
independence.
If
the
HORRAT
is
greater
than
2,
the
reproducibility
is...
there's
a
problem
or
something
isn't
quite
right,
and
it
may
result
in
the
rejection
of
a
method.

There
are
some
limitations
of
HORRAT.
HORRAT
is
not
applicable
to
physical
properties,
viscosity,
density,
pH,
percent
moisture,
and
it
also
is
less
useful
in
extremely
low-level
determinations,
part­
per­
billion
or
lower.

The
last
value
that
is
reported
on
our
interlaboratory
study
result
table
is
percent
recovery.
There
are
two
methods
that
are
used
to
define
percent
recovery,
marginal
and
total.
The
marginal
recovery
is
typically...
shows
a
larger
variation
than
the
total
recovery.
The
true
or
assigned
value
is
known
only
in
cases
of
spiked
or
fortified
material,
certified
reference
materials,
or
by
analysis
of
different
reference
material.

The
applicability
statement
indicates
the
observed
characteristics
of
the
method
observed
in
that
collaborative
study.
It
is
used
to
give
a
reasonable
expectation
of
method
performance
and
scope,
but
it
is
important
to
note
that
all
methods
need
to
be
validated
in­
house
prior
to
reporting
results.

It
is
also
important
to
note
that
a
lot
of
official
methods
are
typically
used
at
the
expected
range
of
the
analyte,
and
it
basically
indicates
the
usefulness
of
the
method
as
observed
during
a
collaborative
study
for
a
typical
range
as
found
in
the
field.

The
benefits
of
a
validated
method
are
that
it
gives
a...
it
provides
methods
where
the
highest
degree
of
confidence
and
performance
is
required
to
generate
credible,
defensible,
and
reproducible
results,
and
it
may
provide
a
thorough,
consistent
information
of
applicability
to
an
analyst
by
reporting
the
results
of
an
interlaboratory
study
that
supports
the
acceptance
of
the
method.
Also,
it
keeps
in
mind
that
results...
the
collaborative
study
is
considered
complementary
to
results
validation,
and
that
is
basically
establish
to
the
proper...
establishes
the
proper
performance
of
a
method,
i.
e.,
the
method
is
providing
correct
results
in­
house
and
by
the
laboratory
performing
the
study.
Thank
you.

QUESTION
AND
ANSWER
SESSION
MR.
TELLIARD:
We
have
got
time
for
two
questions
if
they
are
easy.
Microphone,
please.
They
can't
record
the
thing
if
you
don't
use
the
microphone,
so
just
tell
them
who
you
are
and
what
you
want
and
we'll
work
it
out.

MR.
COOK:
Marcus
Cook,
CCI.
Can
we
get
a
copy
of
your
presentation?

MS.
CAMPBELL:
Yes,
the
copies
are
on
the
back
table.
There
are
handouts.

MR.
TELLIARD:
Roger
is
trying
to
take
them
all
now,
but
that
is
all
right.
Anyone
else
right
now?
(
No
response.)

MR.
TELLIARD:
Thank
you,
Wendy.
THE
CALCULATION
OF
DETECTION
LIMITS
USING
A
TWO­
COMPONENT
ERROR
MODEL
AND
LABORATORY
QUALITY
CONTROL
DATA
MR.
TELLIARD:
Trying
to
catch
up
a
little
bit
here.
Our
last
speaker
this
morning...
we
save
the
best
till
last
and
all
those
stories...
is
Ken
Osborn.
Ken
is
the
quality
assurance
officer
for
the
East
Bay
Municipal
Utility
District
in
California,
so
you
know
you'll
get
some
strange
stuff
here.
He
is
also
chair
of
the
Standard
Methods
Committee
for
the
Examination
of
Water
and
Wastewater
Data
Quality,
and
that
is
an
operating
group
that
publishes
that
chapter
in
Standard
Methods.

Ken
has
been
working
with
us
for
a
number
of
years
on
various
projects,
and
we
are
glad
to
have
him
here
today.

MR.
OSBORN:
Good
morning.
Can
you
hear
me?
Okay.
I'll
try
to
lower
it
down
a
little
bit.
Can
you
hear
me
now?
Now?
What
about
you,
Bob?
Okay,
thank
you.

I
am
going
to
talk
about
several
different
things
today,
some
of
my
own
personal
views,
the
current
approach
we
are
taking
with
standard
methods,
and
a
little
preview
about
what
we
are
doing
in
California,
because,
as
Bill
said,
we
really
do
strange
things
out
there,
and
I
thought
that
might
be
entertaining
or
amusing
for
you
who
are
not
from
California.

And
speaking
about
strange
things,
how
many
in
here
will
admit
to
being
statisticians?
Three.
My
condolences
to
your
mothers,
but
I
am
sure
they
are
proud
of
you.

To
make
sure
that
we
are
all
on
the
same
wavelength
and
that
we
are
talking
something
like
statistical
talk,
at
least
conceptually,
I
have
put
together
something
to
allow
us
to
visualize
what
I
am
talking
about.
So,
even
if
you
do
not
understand
the
words
that
are
coming
out
of
my
mouth...
and
I
recently
had
experience
with
this
when
I
was
in
Egypt
for
three
weeks
in
the
City
of
Alexandria
doing
some
training
on
QC
and
statistics,
and
I
used
this
particular
spreadsheet
that
I
am
going
to
show
you.
I
call
it
the
bee
swarm,
so
would
you
bring
up
the
bee
swarm,
please?

So
far,
no
bees.
If
any
of
you
have
ever
been
out
in
an
area
where
you
have
seen
a
collection
of
bees
swarming
through
the
air
and
have
ever
really
thought
about
it,
it
is
kind
of
like
a
collection
of
measurements.
It
can
be
quite
unpredictable,
but
when
you
look
at
it
a
little
bit
more
closely,
there
are
two
elements
of
all
bee
swarms
that
are
the
same
as
all
measurements,
whether
we
do
them
in
the
laboratory
or
whether
you
are
going
to
your
doctor
and
getting
measurements
there
or
whether
you
are
measuring
temperatures
outside
or
whatever,
and
these
two
properties,
if
you
remember
these
two
properties
and
only
these
two
properties,
you
can
probably
get
along
with
the
best
of
them.

Of
course,
if
the
spreadsheet
doesn't
come
up,
then
we'll
have
to
use
our
imaginations
even
more.
Ah,
thank
you.
Very
good.
Okay,
what
we
have
here
on
the
vertical
axis
is
the
probability
of
an
occurrence,
loosely
translated.
Don't
worry
about
the
exact
meaning
of
that.
On
the
horizontal
scale,
we
have
got
the
measurement
value.

Actually,
in
the
case
of
a
bee
swarm,
it
is
where
are
they,
their
position
relative
to
the
queen.
The
queen
is
typically
at
the
center
of
this
bee
swarm.
So,
let's
take
a
look
at
the
measurement.
Let's
change
the
average
measurement
and
see
what
happens
to
our
bee
swarm
as
we
do
so.

Now,
there
are
two
numbers
up
in
the
upper
left
hand.
You
see
mean
and
sigma.
Right
now,
they
are
set
arbitrarily
to
1
and
1,
and
you
will
see
this
line
on
the
far
left­
hand
side
of
the
chart.
We
are
going
to
do
some
things
with
that
line.

So,
click
on
the
pink
box
and
change
the
mean.
We
are
going
to
change
the
mean.
You
will
also
see
that
sigma
will
also
change.
Go
ahead.
This
will
be
repeated
three
times
just
in
case
you
miss
it
the
first
one
or
two
times.

We
see
the
bee
swarm
marching
across
the
screen
from
left
to
right
as
the
mean
changes.
So,
the
sigma
has
been
set
arbitrarily
to
50.
We'll
take
the
bee...
well,
try
it
again.

You'll
see
the
mean
goes
from
100,
200,
300,
so
on,
so
it
is
marching
across
the
screen
from
left
to
right.
That
is
what
happens
to
the
bee
swarm
when
we
change
the
mean.

Now,
let's
try
changing
sigma,
but
before
we
do
it,
any
hands?
Any
people
think
they
have
any
idea
what
is
going
to
happen
to
the
bee
swarm
when
we
change
sigma?
This
might
be
likened
to
we
go
from
a
cool
day
in
the
morning
and
then
the
day
warms
up.
What
are
the
bees
going
to
do
relative
to
the
center
of
the
hive?

Okay,
you
gave
me
a
visual
response.
I
saw
some
hands
going
out
like
this.
Let's
try
it
and
see
what
happens.

We
keep
the
queen
at
the
same
place.
We
change
sigma,
and
we
get
that.

Can
we
characterize
this
any
differently
so
we
get
even
more
information
out
of
this?
Right
now,
yes,
we
can
see
the
beehive
is
swarming
out,
it
is
ballooning,
it
is
contracting,
it
is
moving
to
the
right,
or
it
might
move
to
the
left.
Is
there
any
more
information
we
can
get?

What
if
we
took
the
bees
and
we
ranked
their
position
relative
to
the
queen
from
low
to
high?
I
call
that
my
red
curve,
so
let's
go
to
the
red
curve.
Well,
let's
see.
I
don't
know
that
mine
are
any
better,
but...
oh,
you
can't
see
on
here,
either.
Okay,
that
comes
from
the
two
hemispheres
of
my
brain
being
switched
relative
to
everybody
else's.

So,
now
what
we
have
got
up
here
is
a
red
curve.
We
are
going
to
try
the
same
thing,
only
now,
the
bees
are
ranked
in
order
from
those
that
are
furthest
from
the
queen
all
the
way
to
the
center,
then
furthest
on
the
other
side.
That
is
going
to
be
my
red
curve.
Now,
I
am
leaving
the
original
bee
swarm
there
so
you
can
see
how
one
superimposes
over
the
other.
So,
let's
do
it
first
for
the
average,
and
you
see
this
very
symmetrical
curve
marching
from
left
to
right,
and
the
center
of
the
red
curve
matches,
roughly,
to
the
center
of
this
distribution
of
blue
dots.
That
is
the
average,
the
mean.

What
would
happen
if
I
changed
the
standard
deviation
to
the
red
curve?
Any
ideas
on
that?
And
you
can't
just
go
like
this
anymore.
I
will
accept
a
visual
response.
You
can
do
it
with
your
hands.
Oh,
okay,
like...
is
that
your
answer?
That
is
your
answer.
Oh,
more
like
that.
Oh,
let's
see
what
happens.

So,
let's
change
sigma
and
see
what
happens
to
our
red
curve.
Yeah,
like
that.
There
are
only
two
things
you
can
do
with
the
data.
Okay?
You
can
translate
it
left
to
right.
That
is
changing
the
average.
You
can
rotate
it.
Okay?
That
is
changing
the
standard
deviation.
Or
at
least
in
this
presentation.

Now,
there
are
some
nice
things
about
this
particular
format,
and
that
is
you
can
do
it
in
an
Excel
spreadsheet.
Ever
try
to
get
a
bell­
shaped
curve
in
an
Excel
spreadsheet?
It
can
be
done.
It
is
a
lot
easier
just
to
take
the
data,
rank
it
from
low
to
high,
and
plot
it.

Then,
you
can
take
a
Gaussian
distribution,
and
you
can
take
that
curve
and
fit
it
over
the
other
and
see
how
they
fit.
Is
it
normal
or
is
it
not?
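[A minimal sketch of that ranking-and-overlay idea. The original was an Excel spreadsheet; the data here are simulated, and the plotting choices are illustrative assumptions.]

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Rank the measurements from low to high, plot them against cumulative probability,
# then overlay a Gaussian with the same mean and standard deviation to judge
# normality by eye.
data = np.random.normal(loc=100, scale=50, size=200)         # stand-in measurements
ranked = np.sort(data)
probs = (np.arange(1, ranked.size + 1) - 0.5) / ranked.size   # plotting positions

plt.plot(ranked, probs, "r.", label="ranked data")
grid = np.linspace(ranked.min(), ranked.max(), 300)
plt.plot(grid, norm.cdf(grid, data.mean(), data.std(ddof=1)), "b-", label="fitted Gaussian")
plt.xlabel("measurement value")
plt.ylabel("cumulative probability")
plt.legend()
plt.show()
```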

So,
let's
get
out
of
that
spreadsheet
and
into
the
next
one.
Now,
these
are
kind
of
my
ideas.
If
they
are
all
bad,
I
take
full
responsibility
for
them.

Now,
I
am
going
to
give
you
a
little
bit
of
a
preview,
and
this
is
some
of
the
stuff
we
are
doing
in
the
State
of
California
on
reporting
limits.
We
have
there
what
are
called
DLRs,
detection
limits
for
reporting,
and
this
is
with
drinking
water,
not
with
wastewater.

The
State
of
California
had
promulgated
these
in
what
we
call
Title
XXII
some
years
back,
and
the
way
they
came
up
with
them
is
they
got
a
number
of
laboratories
together...
I
think
it
was
four
or
five...
and
they
said
what
do
you
think,
and
they
said
well,
we
think
this,
and
that
became
the
DLRs.
That
is
probably
a
little
oversimplified
and
probably,
if
anybody
is
here
from
California,
I'll
probably
have
words...
they'll
probably
have
words
with
me
after
I
go
out
that
that
was
not
quite
what
happened.

However,
a
couple
of
years
ago,
the
State
saw
that
those
DLRs
weren't
necessarily
going
to
stand
up,
especially
when
EPA
published
a
new
list
of
MDLs,
and
they
might
have
to
go
to
those.
So,
they
sent
to
certified
laboratories
throughout
California
who
were
involved
in
drinking
water
a
list
of
these
new
proposed
DLRs
and
asked
for
comments,
and
they
got
a
lot
of
them.

As
a
consequence
of
that,
the
State
asked
a
number
of
laboratories
to
come
and
meet
with
them
and
let's
work
this
out
on
a
more
rigorous
basis
than
our
just
saying
well,
these
look
like
good
numbers
and
this
is
what
we
are
going
to
use.
So,
we
took
a
number
of
approaches,
and
one
of
them
was
to
take
a
look
at
a
modeling
approach
where
we
could
find
out
something
about
quantification.
If
we
were
to
give
this
sample
to
100
laboratories,
would
80
percent
of
them,
90
percent
of
them,
75
percent
of
them,
whatever,
be
able
to
come
up
with
a
consensus­
based
result
that
we
were
able
to
agree
within,
say,
plus
or
minus
10
percent
or
20
percent,
you
know,
like
the
LOQ
or
the
PQL.
Could
we
come
up
with
some
basis
like
that?

More
than
that,
could
we
do
it
cost
effectively?
Yes,
we
could
do
studies
and
we
could
send
everything
out
to
all
the
laboratories
and
test
it
all
and
then
get
these
numbers
back
and
see
what
we
would
agree
on,
but
it
seemed
awfully
expensive.

So,
our
first
cut
was
to
look
at
some
models,
and
we
looked
at
the
model
developed
by
Rocke
and
Lorenzato,
and
I'll
be
talking
a
little
bit
about
that.

Then,
after
we
came
up
with
some
numbers
based
on
the
model,
then
we
went
out
and
did
some
testing.
So,
you
get
the
second
spreadsheet,
I
made
a
macro.

Now,
this
is
a
spreadsheet
I
originally
put
together
to
explain
to
myself
some
differences
between
MDLs
and
LOQs
and
that
kind
of
thing
and
what
happened
when
you
tried
different
things,
and
it
is
not
a
perfect
model,
and
I
don't...
don't
use
this
at
home.
Okay?
But
there
are
some
conceptual
things
that
I
think
we
can
get
out
of
this.

First
of
all,
if
you
look
up
on
there,
there
is
a
blue
curve.
That
blue
curve
is
the
relationship
between
sigma
and
concentration
using
the
Rocke
and
Lorenzato
model.
The
curve
that
is
in
yellow
with
the
coral
boxes
on
it
is
the
relationship
between
the
relative
standard
deviation
and
concentration,
and
you
see
as
sigma
goes
up,
the
relative
standard
deviation
goes
down.
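[A minimal sketch of a two-component error model of the kind attributed here to Rocke and Lorenzato. The parameter values and the simplified variance form are illustrative assumptions, not the values used in the California work.]

```python
import math

s0 = 0.5          # assumed standard deviation at zero concentration
rsd_inf = 0.05    # assumed asymptotic relative standard deviation at high concentration

def sigma(conc):
    """Constant component at zero combined with a component proportional to concentration."""
    return math.sqrt(s0 ** 2 + (rsd_inf * conc) ** 2)

for c in (1, 5, 10, 50, 100):
    print(c, round(sigma(c), 3), f"{sigma(c) / c:.1%}")
# sigma rises with concentration while the relative standard deviation falls toward
# the 5% asymptote -- the shape of the blue and yellow curves described above.
```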

Now,
this
is
a
generalized
kind
of
picture,
and
it
isn't
always
the
way
that
things
work.
Sometimes,
it
is
flat.
For
example,
if
you
get
on
your
bathroom
scales,
you
are
not
likely
to
find
this
relationship
between
standard
deviation
and
your
weight.
Okay?
You
may
find
that
it
is
very
flat,
that
you
get
an
error
of
about
1
pound
regardless
of
how
much
you
weigh.
Okay?
But
a
lot
of
things
do
seem
to
follow
this
particular
model,
so
it
is
a
very
general
model.

Then,
there
is
a
green
box
up
there.
That
represents
the
method
detection
limit.
There
is
a
red
box
up
there.
That
represents
a
limit
of
quantification
or
some
sort
of
a
quantification
level.

Now,
what
I
am
going
to
do
is
change
sigma
and
see
what
happens
to
our
green
box
and
our
red
box.
So,
change
sigma.
You
will
notice
that
the
blue
curve
goes
up
as
the
standard
deviation
at
zero
increases.
You
would
expect
that
to
happen.

Try
it
again.
Okay,
the
green
box
and
the
red
box
shift
to
the
right,
as
you
would
expect
would
happen.
Our
variance
goes
up,
then
our
detection
limit
is
going
to
go
up,
our
quantification
limit
is
going
to
go
up.
But
you
will
notice
they
track
together.
That
is
because
the
only
thing
I
am
changing
here
is
the
standard
deviation
of
zero.
What
if
I
change
the
method
relative
standard
deviation?
Now,
the
method
relative
standard
deviation
has
no
effect
at
zero.
It
is
what
happens
at
that
upper
region.
So,
why
don't
we
change
the
relative
standard
deviation
and
see
what
happens?

In
this
case,
the
method
detection
limit
doesn't
change,
but
the
quantification
limit
does.
Now,
you'll
also
notice
another
box
in
there,
that
yellow
box.
It
is
kind
of
drifting
to
the
right
a
little
bit,
not
as
rapidly
as
the
red
box
is.
That
is
the
RLDL,
and
that
is
a
detection
limit
that
is
calculated
in
a
fashion
which
I
will
get
to
in
my
presentation.
So,
let's
get
out
of
that
and
go
to
the
power
point.

By
the
way,
that
is
a
little
overview
of
what
we
are
doing
in
the
State
of
California.
Just
before
I
went
to
Egypt,
we
had
a
lot
of
data
that
came
back
from
the
laboratories.
We
started
to
evaluate
it.
The
early
returns
are
that
there
is
agreement
with
the
data
that
we
are
getting
back
from
the
laboratories
and
our
model
predictions,
with
a
few
excursions.
So,
now
when
I
get
some
time
again,
I
am
going
to
look
at
it,
although
my
immediate
boss,
the
one
who
is
paying
me,
has
pointed
out
that
I
need
to
do
some
things
within
the
laboratory
as
opposed
to
outside
of
the
laboratory.

So,
those
of
you
who
would
like
to
see
more
on
this,
I
invite
you
to
send
email
to
my
boss.
This
is
not
junk
mail.
This
is
honest
response
from
you,
unsolicited
from
me.
His
email
address
is
bellgas@ebmud.com.
And
this
is
strictly
from
the
heart,
and
I
expect
to
see
a
lot
of
responses.

Okay, calculation of detection limits using the two-component error model and quality control data in the laboratory.
This
is
the
approach
we
are
currently
taking
with
standard
methods,
although
I
think
there
is
probably
room
for
some
changes
in
the
future.
I
base
this,
in
part,
on
discussions
with
people
in
laboratories
who
say
yes,
we
want
it
to
be
scientifically
defensible.
We
want
it
to
be
able
to
stand
up
in
court.
We
want
it
to
be
reliable.
We
want
it
to
be
simple.
And
cheap.

I
am
not
so
sure
it
is
possible
to
put
all
those
together,
but
I
think
that
we
can
take
a
look
at
what
we
are
doing
and
make
some
changes
and
some
modifications.
Maybe
there
are
certain
ways
of
doing
it
under
certain
conditions,
other
ways
of
doing
it
other
ways.

For
example,
initial
demonstration
of
efficiency.
How
many
have
done
an
initial
demonstration
of
efficiency
on
an
analytical
method
within
the
last
two
years?
Okay,
a
few
of
you
but
not
too
many.
I
would
expect
maybe
10
percent
or
fewer.

For
those
kinds
of
situations,
I
think
we
need
a
very,
very
good
measure
of
what
we
are
capable
of
doing.
On
an
ongoing
basis,
however,
I
think
we
need
to
reconfirm
to
ourselves
that
yes,
we
are
in
the
ball
park.
Okay?
That
is
what
I
mean
when
I
say
two
different
approaches.

So,
maybe
that
very
simple
approach
that
the
bench
chemist
loves
and
adores,
the
ad
hoc
approach
where,
hey,
give
me
a
formula,
I
don't
want
to
be
bothered
about
thinking,
that
maybe
can
be
applied
to
the
day­
to­
day
kind
of
thing.
On
the
other
hand,
the
kind
of
thing
where
yes,
this
is
what
we
are
capable
of
doing,
this
is
what
we
have
demonstrated
up
front,
I
think,
since
you
are
not
doing
a
lot
of
that,
maybe
that
takes
a
more
rigorous
approach.

I
am
not
quite
sure
how
these
are
all
going
to
work
out
together
in
the
end,
but
it
is
something
I
have
just
started
thinking
about,
especially
with
my
experience
in
Egypt
where
I
have
seen
they
really
want
to
do
things,
but
they
also
want
something
that
is
doable.

So, I gave you a little bit of statistics already. You have heard a lot about what a detection limit is, and a limit may be hard to define, because there are so many different ones. Why this model? I am going to talk about three phases of detection and give you an application example.

We have seen the Gaussian distribution, seen it a lot. Next slide.

It has got certain properties. It has got a mean, it has got a standard deviation, it has got symmetry. It has got a lot of things that we like.

Mainly, one of the things we really like about it is that, going back to the bee swarm, you are most likely to find bees closest to the queen, less likely to find them far away, but there is a finite probability that you will find one bee way, way out there, just like with measurements. And it is because of that particular property of measurements that we have things like detection limits and quantification limits.

If we didn't, if all the bees were stacked on top of one another, we would never need... we wouldn't be here. Next slide.

So, cumulative probabilities, standard deviation from the average... next slide... and you have seen this. That was in the bee swarm. Next slide.

Now, the three terms for detection. I like the IUPAC approach, but there are reasons to like the MDL approach of EPA. Okay? There are a lot of times when the regulations, for example, will specify to use the EPA MDL. Also, we have a lot of experience with the EPA MDL in the laboratories. The chemists are very, very familiar with it. So, whether you like it or not, it is something that is there, people are using it, that is the way they are working.

Can we do it any differently? Will it resolve some of the issues that we have? I think so. Next slide.

These are detection limit equations. The critical value is equal to some scalar, a t value with n-1 degrees of freedom and an alpha level of probability, times the standard deviation at zero. The MDL looks fairly similar. It looks like a critical level. It sort of is, but it sort of isn't. And that is the last equation. It is the standard deviation at the MDL which, if it is the same, of course, as the standard deviation of your method blanks at zero, then it will be the same as an LC. The one in the middle, the LD, that is the IUPAC detection level. And we add onto the LC another amount to give it a beta level of error, a Type II error. Next slide.
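For readers following along without the slide, one way to write the three equations just described, using s_0 for the standard deviation at zero and s_MDL for the standard deviation of replicate spikes near the MDL (notation assumed here, not taken from the slide):

```latex
% Critical level: controls the Type I (false positive) error only
L_C = t_{1-\alpha,\,n-1}\; s_0
% IUPAC detection limit: adds an allowance for the Type II (beta) error
L_D = L_C + t_{1-\beta,\,n-1}\; s_{L_D}
% EPA MDL: same form as L_C, but the standard deviation comes from
% replicates spiked near the MDL rather than from blanks
\mathrm{MDL} = t_{1-\alpha,\,n-1}\; s_{\mathrm{MDL}}
```

If s_MDL happens to equal s_0, the MDL and the LC coincide, which is the point made above.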
Now, maybe I'll go through these rather rapidly, because Dr. Currie has already shown these, but the critical level says that the probability of a measurement exceeding LC... that is in the yellow box down there... when the true value is zero is equal to alpha. So, this is a way of measuring our Type I error. Next slide.

Graphically, using that different approach to a Gaussian curve, the purple line up there represents LC. The curve represents all the measurements distributed around zero. So, you can see that 99 percent of those measurements are below our critical value when the true value would be zero. Okay? If we really had zero in there.

So, the critical level gives us some criteria for saying yeah, we are up above it, we have only got a 1 percent chance that we really were down at the center of that distribution. That bee that is up there maybe really doesn't belong to that distribution for this particular queen. It belongs with another queen. Next slide.

This is for the IUPAC detection limit. Again, a similar kind of thing. Next.

IUPAC detection limit, we have got two distributions now. The first one down below is for the critical value. The one up above, that is a distribution around the detection limit, and you see the purple band here is for LD. The distribution is centered at LD, and you will see that alpha percent of it is below the blue line, representing the critical level.

So, if you are at that concentration, what are your odds that you will mistake a measurement for below the LC? It's beta. This little tail right up there that I can't quite reach. Next slide. And next slide.

This is the MDL. Now, the MDL is a distribution around itself. Okay? And we take a look. 99 percent of the measurements are above zero. So, we only have a 1 percent chance, if we have exceeded the MDL or we are at the MDL, that we will be below... that we have actually got zero, we have got nothing in the sample.

On the other hand, if you have a concentration that is just a little bit below the MDL, 50 percent of the time, you are going to be above it, and 50 percent of the time, you are going to be below it. So, you have got a 50 percent chance that you will not see something that is at the MDL. Okay? So, this is what people mean when they say oh, it will check for the Type I error but not the Type II error. And maybe that is okay for certain circumstances. Next slide.

The equations again. Next.

So, why is calculating a detection limit a moving target? It is a moving target, because the standard deviation is frequently a function of the concentration, and if you happen to be off somewhat in selecting a given concentration for your MDL study, and then the next time you pick it out, you are up here a little bit, maybe the concentrations aren't that far apart, but maybe the standard deviation is changing, and what you think is your standard deviation at that concentration isn't.
So, you pick a concentration here, get a standard deviation, get an MDL over here, and here is your standard deviation. Okay? Now, if you were to pick that one, then, you know, we could get up here real fast. So, how do we do this? Next slide.

Dr. David Rocke and Stefan Lorenzato came up with a model for doing this, and there are a couple of papers that have been published on this. This is a reformulation of the equation that they have in their paper, but the standard deviation at some concentration is equal to a function of the standard deviation of the blank, the concentration you are looking for, and then a complex term that really represents the relative standard deviation in that linear portion of the standard deviation versus concentration curve. Next slide.
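A compact way to write the reformulated relationship being described, assuming the hybrid form in which a constant blank component and a constant relative standard deviation add in quadrature (my reconstruction, not necessarily the exact expression on the slide):

```latex
\sigma(x) \approx \sqrt{\sigma_0^{2} + (\mathrm{RSD}\cdot x)^{2}}
```

Here sigma_0 is the standard deviation of the method blanks, x is the concentration of interest, and RSD is the relative standard deviation in the linear portion of the standard deviation versus concentration curve. At zero the expression reduces to sigma_0, and at high concentration it approaches a constant relative standard deviation, which is the behavior described in the talk.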

So, we really have two equations that we have to solve. One, there is a defining equation for the detection limit, whether it is the MDL, limit of detection, whatever. Okay? You have got to solve for that one.

But then, you have got another equation that you have got to satisfy, and that is the relationship between the standard deviation and the concentration. So, how do we do that?

Well, one way is to make an initial stab at it. We all have some idea of what our detection limits are in the laboratory, so you say okay, this is my detection limit. Plug it into equation number one and get a standard deviation at that detection limit. Then take that standard deviation at that detection limit, plug it into equation two, come up with your new detection limit which, typically, will be a little bit different. Then take that new detection limit, plug it back into equation one, and recalculate. Go back and forth and back and forth and back and forth until, eventually, you get a number that settles down and doesn't change that much. Next slide.
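A minimal sketch of that back-and-forth, assuming the hybrid standard-deviation model above and a defining equation of the MDL type with a Student's t multiplier (the function and variable names, and the 3.1 default, are illustrative assumptions):

```python
import math

def iterate_detection_limit(sigma0, rsd, t_mult=3.1, initial_dl=100.0, tol=1e-9, max_iter=200):
    """Alternate between the two equations until the detection limit settles down.

    Equation 1 (standard deviation model): sigma(x) = sqrt(sigma0**2 + (rsd * x)**2)
    Equation 2 (defining equation):        DL = t_mult * sigma(DL)
    """
    dl = initial_dl
    for _ in range(max_iter):
        sigma_at_dl = math.sqrt(sigma0 ** 2 + (rsd * dl) ** 2)  # equation 1
        new_dl = t_mult * sigma_at_dl                           # equation 2
        if abs(new_dl - dl) < tol:                              # settled down?
            break
        dl = new_dl
    return dl

# Starting well above or well below the answer converges to the same value,
# as described in the talk (blank sd = 1, relative sd = 9 percent).
print(iterate_detection_limit(sigma0=1.0, rsd=0.09, initial_dl=100.0))
print(iterate_detection_limit(sigma0=1.0, rsd=0.09, initial_dl=1.0))
```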

Here is an example of that. Took a situation where the standard deviation of the method blanks is 1, relative standard deviation is 9 percent, 0.09. Plugged in an initial value of 100 into my equation number one. Then went back and forth and back and forth and back and forth. The curve goes down. Eventually tends to settle out.

Now, if this is really working, I should be able to go in the other direction, well below my method detection limit, and plug that in. Did an initial estimate of 1, and the curve goes up and, again, asymptotically approaches the same value.

Now, you can do this for a whole lot of values in a model, and it works until you get to what I call the exploding RSD. Okay? And the exploding RSD is where you suddenly start to try to divide by zero. You are taking the difference of two terms, and when the relative standard deviation of your method gets up to around 30 percent, it won't work.

Now, that is 30 percent if you are using the multiplier of 3.1. Why 30 percent? Well, the reciprocal of 3.1 is about 30 percent. Okay? So, you have got your scalar factor in one term, and you have got your relative standard deviation in the other term, and when you start to subtract those and they get closer and closer together, things just won't solve. But below that 30 percent, it converges. Next slide.
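A little algebra, assuming the same hybrid model, shows where that breakpoint comes from. Substituting the defining equation DL = 3.1 sigma(DL) into sigma(x)^2 = sigma_0^2 + (RSD x)^2 and solving gives

```latex
\mathrm{DL}^{2}\left[\,1 - (3.1\,\mathrm{RSD})^{2}\right] = (3.1\,\sigma_0)^{2}
```

so the bracketed difference of two terms goes to zero, and the solution blows up, as RSD approaches 1/3.1 (roughly 0.32), which matches the 30 percent neighborhood described here.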
Here is an example calculation where we took quality control data, matrix spikes, matrix spike duplicates. You take the logs of those, take the differences, take the standard deviation of those, divide it by 1.4, the square root of 2. Why do you do that?

Well, you have got two sources of variance. Variances are additive. You have seen that in Youden pairs. Okay? So, you have got two sources of variances. You divide by 2. You are taking the square root of your variance to get your standard deviation. That is why the square root of 2. Okay? Standard deviation by the square root of 2.

Plug that value in for your relative standard deviation into the two equations. Come up with an MDL of 5.1 with, in this case, two iterations. Next slide.
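A minimal sketch of that bookkeeping, assuming paired matrix spike and matrix spike duplicate results in consistent units; the pair values below are made up purely to show the mechanics, not the data behind the 5.1 figure:

```python
import math
import statistics

def rsd_from_ms_msd(pairs):
    """Estimate the upper-region relative standard deviation from MS/MSD pairs.

    Take the log of each result, difference each pair, take the standard deviation
    of those differences, and divide by sqrt(2), because each difference carries
    two sources of variance and variances are additive (as with Youden pairs).
    """
    log_diffs = [math.log(ms) - math.log(msd) for ms, msd in pairs]
    return statistics.stdev(log_diffs) / math.sqrt(2)

# Hypothetical MS/MSD pairs, just to exercise the function.
pairs = [(98.0, 105.0), (110.0, 101.0), (95.0, 99.0), (102.0, 96.0), (100.0, 108.0)]
print(rsd_from_ms_msd(pairs))  # this value is what goes in for RSD in the two equations
```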

This is what happens with the MDL and relative standard deviation. It asymptotically approaches infinity as you get out to the far right-hand side. But if you are with methods that have got below 10 percent relative standard deviation, actually, it is not that big a factor. It doesn't change your MDL by that much. Next slide.

It turns out there is a simultaneous equation solution. You can take those two equations, one and two, you can put them together, and you can come up with a single solution to it. And if you want to know more about this, I suppose you can send me email, and I would be happy to respond to you.
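For completeness, here is a sketch of what that single-step solution presumably looks like under the hybrid model used in the sketches above; it reproduces the value the iteration settles on:

```python
import math

def closed_form_detection_limit(sigma0, rsd, t_mult=3.1):
    """Solve DL = t_mult * sqrt(sigma0**2 + (rsd * DL)**2) in one step.

    Only defined while t_mult * rsd < 1; past that point the denominator
    vanishes, which is the exploding-RSD behavior near 30 percent.
    """
    denom = 1.0 - (t_mult * rsd) ** 2
    if denom <= 0:
        raise ValueError("relative standard deviation too large for this multiplier")
    return t_mult * sigma0 / math.sqrt(denom)

print(closed_form_detection_limit(sigma0=1.0, rsd=0.09))  # matches the iterated value above
```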

Thank you.

MR. TELLIARD: Thanks, Ken.
PANEL DISCUSSION

MR. TELLIARD: Could we have all the speakers come up and take a hot seat? And we'll open the forum. You can ask them whatever you want. The one thing you guys have to do is move those microphones around, because I understand they can't hear over there without them.

Any questions? Oh, my God, they are not moving.

Unknown Participant: My question is the very important issue of this.

DR. CURRIE: There are some issues there and some issues about the blanks which, of course, I gave a major focus. I just wanted to tell you a real story which is interesting to me and maybe gives us some idea. As a chemist, I like to think about physics and chemistry, and the physics and chemistry of the blank can be a fascinating area to study per se.

But the little anecdote I would like to tell you, which I learned about a month ago, illustrates the issue of matrix interactions and, in fact, the perturbation of the matrix on the apparatus creating a blank change, and the particular issue I am going to talk about is where you have a natural material or a vegetative material or something and you are combusting it in, typically, a quartz tube.

The quartz has impurities in it. Some materials react with the quartz, some don't. So, if you happen to have some alkaline matrices, they will release impurities differently than other types of matrices.

So, it is just a small real example where if you think about the details of the measurement process, you can find out how the blank can come about and also how to control it.

I wanted to also relate a comment on the matter of less-than values. I wanted to show you one example but also tell you that at NIST... used to be NBS... in order to get anything approved for publication, you must always quote an uncertainty regardless of how large or small the value is, whether it is above or below some critical value.

The illustration I wanted to give, harkening again to IAEA data, old data but still good data from radioactivity in sea water. These are real results reported from an interlaboratory study of zirconium-niobium-95, which I think has about a 75-day half life or something on that order. I had to put them into two categories, values and non-values, and it is interesting to ask well, what is the mean? Well, how do you do it?

So, uncertainties really need to be present if you hope to do anything about interpreting the data on a global scale. I think that is all I will say at this point and let the panel go on.

MR. TELLIARD: Thank you. Questions?

MS. PROCTOR: Debbie Proctor from Pace, Quality Assurance Officer.
MR. TELLIARD: Hold on. We couldn't hear you. Try again.

MS. PROCTOR: I am Debbie Proctor, Quality Assurance Officer for Pace.

MR. TELLIARD: Sorry about that. There you go.

MS. PROCTOR: Debbie Proctor, Quality Assurance Officer for Pace Analytical Services.

I did have a question. I didn't hear much about outlier tests. We currently follow the 40 CFR MDL protocol, and occasionally, if we do more than seven replicates, we are currently using the Grubbs test for outliers, and I just question, is that the best test to apply? And pretty much to anyone on the panel.

MR. BRITTON: I think there are a whole host of Grubbs tests, so you really have to be more specific as to what you mean.

MS. PROCTOR: I can look to see what is in our SOP. Right now, all it says is just...

MR. BRITTON: I would follow... when you are dealing with relatively limited data, as you usually are in a laboratory study or in an MDL study, I think that you have to limit yourself to tests like the Grubbs test, if I understand what you mean by that, being what is frequently used in the AOAC interlaboratory study standards and ASTM interlaboratory study standards to identify outliers. You are very limited in your choices of outlier tests under those conditions.

If you have got more data, then you have the freedom to estimate the standard deviation robustly, and if you can do that, then you should certainly apply that technique. The robust estimates of standard deviation will tend to estimate the true standard deviation of the underlying, if you will, good data without being unduly influenced by the presence of any outliers, and then you take that standard deviation and apply it back against the data in order to identify the outliers.

If there is sufficient data, and that is generally 20 observations that are, you know, related to the same material and concentration and analyte and what not, then, certainly, robust estimation of the standard deviation and then taking that standard deviation estimate back and applying it against the data to identify outliers is probably the best way to go.

MS. PROCTOR: Okay, thank you.

MR. BRITTON: And Grubbs test is fine if you don't have 20 or more observations to use. The robust estimation techniques that I would recommend, I think there are a lot of them in SAS and other sources, but they can be as simple as estimating the center of the underlying distribution with the median and estimating the variability of the underlying good data using something like the interquartile distance or, you know, distance measurements like that, that tend to be fairly free of influence from outliers. The MAD is another one, the median absolute deviation.
Those are good, robust estimates of standard deviation when you have sufficient data.
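A minimal sketch of the kind of screening being described, assuming enough uncensored replicates are in hand; the 1.4826 factor rescales the MAD to be comparable to a normal standard deviation, and the three-pseudo-sigma cut is just one conventional choice, not something prescribed by the panel:

```python
import statistics

def robust_outlier_screen(values, cut=3.0):
    """Flag values far from the median, using the MAD as a robust spread estimate."""
    center = statistics.median(values)
    mad = statistics.median([abs(v - center) for v in values])
    pseudo_sigma = 1.4826 * mad  # approximates the standard deviation for normal data
    return [v for v in values if abs(v - center) > cut * pseudo_sigma]

# Hypothetical replicate results with one suspiciously high value.
replicates = [4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 5.0, 9.7, 5.2, 4.9]
print(robust_outlier_screen(replicates))  # -> [9.7]
```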

MS. PROCTOR: Thank you.

MR. TELLIARD: Thank you.

MR. COLEMAN: I would like to add just a little bit to that. Median absolute deviation from the median is a very good estimate. I use that. Also biweight. Some of you may not be familiar with it, but biweight is a robust estimate that is available in SAS and other packages.

The other thing that I know, Paul, you are well familiar with, that I think is important for outlier identification and rejection, is on the laboratory level, and that is also embedded in the ASTM precision and bias study protocol: that you do lab ranking to identify laboratories that always report high or always report low, and you have the possibility of rejecting entire labs on that basis.

So, sometimes, it is an individual value; sometimes, it is an entire laboratory that is giving you an outlier.

MR. BRITTON: Yeah, as an aside, I think that lab ranking, I agree, in many applications, does have good relevancy to identification of outlier laboratories, but I have seen instances where the methods are extremely good and the variabilities extremely low where relatively unimportant, if you will, laboratory biases end up looking like significant biases. So, I think it has to be applied carefully.

MR. TELLIARD: Anyone else? Going to be quick?

MS. GOODMAN: I don't know. I have two questions, actually. The first one is for Ken.

I have to give my not-a-statistician disclaimer first, so if this was totally obvious from your talk, I apologize. We have had some discussion in ASTM D-19 about how to use quality control data, routine quality control data, for purposes of coming up with detection limits, and I wasn't quite clear on how do you take an MS and MSD that are generally in the upper part of the curve and extrapolate those downward, you know, when you don't know the model or the point at which the deflection is going to happen?

MR. OSBORN: Well, actually, the intention was to select data that is in the upper part of the curve. When you plot sigma as a function of concentration, you find that your... if you were to take the slope of the data and you go along, eventually, you would find a point where you have got a slope that is constant. Take the relative standard deviation. That will approach the slope in the upper end, but not in the lower end, because you are taking the ratio of two numbers as opposed to the ratio of differences of numbers.
So, you go up on that curve far enough so that your relative standard deviation is a reasonable approximation of the slope, and then that relative standard deviation is what goes back into that Rocke and Lorenzato equation, that big complex term that was in there.

And I really didn't take any time to explain how that would be used, but you have got three points in the curve. You have got your standard deviation at zero. You have got the standard deviation that you are trying to calculate at a given concentration, and you don't know what that concentration is. And you have got the relative standard deviation. So, you have got two knowns, and you have got two unknowns. Okay? So, you need two equations, and that is why, in that first cut, it was an iterative process.

I don't know whether this answers the question, but to... part of it is it is intentional. You do want it in that upper region.

Now, the other part of it is if you are using MS-MSD, the concentrations are all over the place. Okay? So, you just can't take recovery values. You just can't take results. You have to take the differences, and it is the standard deviation of the differences, not of the absolute values. Okay?

MS. GOODMAN: Okay. The second question was for Wendy. Your discussion of the Horwitz equation: Horwitz's original paper was based on a lot of trace metals studies, basically. I was just curious, looking at that as a way to tackle the question of interlab differences, has that ever been updated to incorporate organics methods and other similar methods? Because it is an empirical equation.

MS. CAMPBELL: Well, currently, Dr. Horwitz... it is a work in process, but what he does is he gathers data from multiple interlaboratory studies and then comes up with these curves and equations, and it has been applicable to almost... the only limitations we have found so far have been in the physical testing.

I don't know if he has, per se, done one on organics, but he has done them on food matrices.

MS. GOODMAN: So, is there a current definitive published equation, or is it a work in progress that you have to talk to him about?

MS. CAMPBELL: Yeah, what I can do is give you something.

MR. TELLIARD: Actually, there is a paper that he did showing some organics in it. I can't remember the publication, but I saw it at the last AOAC...

MS. CAMPBELL: Yes, it is in here. It is in the quality assurance principles. He put some equations in there.

MR. TELLIARD: Yeah.
MS. GOODMAN: Okay.
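For readers unfamiliar with the equation under discussion, the form most often quoted from Horwitz's compilations of interlaboratory data relates the expected among-laboratory relative standard deviation to concentration expressed as a mass fraction:

```latex
\mathrm{RSD}_R(\%) \approx 2^{\,1 - 0.5\log_{10} C} = 2\,C^{-0.1505}
```

where C is the analyte concentration as a dimensionless mass fraction (1 ppm corresponds to C = 10^-6). This is offered only as background; as noted above, the equation is empirical and continues to be refined.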

MR. OSBORN: If any of you want further information on how to use the QC data, the part that I went through very rapidly, there is a paper that Dr. Rocke and I published, and I can send that electronically to anyone who sends me an email request for one. Okay? Or leave your card up here with your email address on it, and I can send you something.

MR. TELLIARD: Good.

MR. KOORSE: Steve Koorse again. I invite any of the panelists to respond, offer your observations about the following paradox which I perceive to be perhaps at the root of the controversy over detection and quantification levels.

When you look at the Agency's practices for validating test methods to be used in the regulatory arena, one thing that is glaring is that when the Agency does a full-blown collaborative interlaboratory study, it does not start out by saying the standard we will use for either approving or disapproving this new method is that if it exceeds a precision of X, it is too variable, and if it doesn't, then it is acceptable.

Rather, the interlaboratory study is performed, and the Agency, without any standard... and I don't mean to be critical. I am trying to just raise a very important issue. They do the study. They look at the data, and, invariably, they approve the study. There have been a couple that have been bounced out of the process. The Ames test, for example, was bounced out, and we are not quite sure exactly why the variability of the Ames test was too great, whereas the variability of, let's say, a PCB method which is quite high was not too great.

So, in the validation process, we end up with approved test methods.

Now, in the regulatory arena, the Agency's position, typically, is whatever the variability of that test method is... and you can pretty much determine that variability based on the collaborative study results. In the old days, we used to have nice regression equations which allowed you to do it at any concentration.

Now, we have got sort of an averaging which makes it more difficult, but you can determine the variability, but the Agency's position, basically, is that the variability should not be taken into account... the analytical variability should not be taken into account in the derivation of permit limits, at least not water quality-based limits.

However, when you get down to quantitation and detection levels, the Agency says at that point, the variability is too high. So, now we have sort of a flip-flop. The Agency has determined that the variability is too high to be used in the regulatory process, and the Agency says we do not expect those data points to be used in the regulatory process. Okay?

The question becomes when you look at the precision for the various test methods at the quantification level, assuming you can determine... and, of course, as David pointed out earlier, with the IQE, you know exactly what it is, because you set the number, the quantification level, at a particular precision level. But if you look at the typical quantification levels for the test methods that have been approved by the Agency, you look at the variability, you look at the precision for those, the precision is all over the board, yet the Agency says that this is the appropriate level for this method with this precision, and for any other methods, there is a different precision.

So, it raises the question, how much variability is too much? In some cases, the Agency says this; in other cases, they say no, it is this much. Or in other cases, it says no, we'll only tolerate this much.

And it creates a paradox. On the one hand, the Agency is saying variability should not be taken into account. Then it says it should be, but then it says it should be when it is this big, other times when it is this big.

How do we deal with all of that?

MR. TELLIARD: Well, first of all, it is not a question for these gentlemen and ladies. I am sorry. I almost forgot.

The Agency makes that decision based on a number of issues. A good example, I think, of that is the information collection rule that was done on Cryptosporidium. The Congress of the United States decided we would do a study, we would establish a standard.

We did not have a method. We used a method that gave us, roughly, 6 to 8 percent recovery of Cryptosporidium, and that was on a good day. And we knew that, and we said we march forward, because we had a deadline, because the Congress of the United States said you will do it.

Since then, we have come up with a method that gives us, roughly, 50 to 70 percent recovery of oocysts. In the interim, yes, we used that method, and yes, we gathered data, and yes, we made some policy decisions based on that, because that is all we had. Now, we are going to make better policy decisions, because we have better methods.

But a lot of these methods, as you know, are driven by court orders, they are driven by Congressional mandates, and it is not necessarily at the Agency's discretion to take all the time in the world that it wants; nor, in many instances, as EPRI and other people in the room know, have we even had the budget to do the studies. Outside people have committed actual funds. The pulp and paper industry committed a triggadollar to help generate the data base for their rule, because we didn't have the funding.

So, a lot of this question of how much variability we can stand is going to be independent... it is going to be independent of the rule, and it is going to depend in greater part on what the Agency has been mandated to do.

So, how much can we live with? The gentleman from, I think, Region VI asked, where do you get to that point? And the answer is it is going to depend on the method, it is going to depend on the day, and it is going to depend on the situation.
So, we would love to have every method with a standard deviation of less than 10. Okay? We would love to have a lot of things, but what we can live with and what we can't live with will be decided by the Agency and certainly not by these people. That is why we have lawyers, I think.

Henry, you had a question.

MR. KAHN: Yeah, I had a question, Bill, if I could direct the discussion back to something that I perceive to be within the purview of the panel, Steve.

I think Dr. Currie's comments about the blank and calibration error are particularly important and relevant here, and I would be interested in having Dr. Currie or any other panel members address those matters a little more. In particular, in talking about the blank, in talking with chemists that we deal with, I am usually told that it is not practical to get true blank replicates. This has to do, I guess, with the sort of automated censoring in the equipment.

MR. TELLIARD: In some of the instruments, yes.

MR. KAHN: Some of the instruments. So, I was wondering if you could, Dr. Currie, elaborate a little more on that issue and also address calibration error which, I think... which you alluded to and which I think has gotten short shrift in these discussions.

Thanks.

DR. CURRIE: Thank you for the added time. I will do a poor job at both, but let me... I am going to run again, because I have illustrations.

MR. TELLIARD: You need to sit down, so you can get the microphone. They can't hear you on the other side.

DR. CURRIE: Oh, okay. Sorry. I will tell you about it, and then I'll... perhaps I...

MR. TELLIARD: Just give it to me.

DR. CURRIE: Yeah, yeah, that would be great.

MR. TELLIARD: That will be easier.

DR. CURRIE: Excellent. I have to give you the right ones. This will be the first. That for now.

This is a topic I didn't bring up, per se, but I think it is very relevant. The type of question you are asking, in my presentation of the blank today, I think, can be extremely important and, in fact, limiting in many cases. Also, it can be extremely difficult and hence, in a sense, costly.
I am giving you some data from our laboratory of several years ago, showing measurement of radiocarbon, C-14, by what used to be the conventional method, measuring the beta particles, counting the rate of decay. That is the decay method. Shows the conventional sample size.

Then, below that is AMS, accelerator mass spectrometry, which was a revolution in the measurement process, and I advocate more measurement process revolutions to get over a number of these problems. And the simple little issue I wanted to show here is that if we, as scientists and statisticians and attorneys and all that, can try to address the question of what is limiting of the measurement process, that can help us a great deal.

So, in the case of decay counting, we needed about 2.5 mg to get a signal that would give us 10 percent Poisson precision. The background was equivalent to about 5 mg. This is modern carbon, living carbon. And our blank was 40 µg. No problem. The blank is not an issue there.

Now we improve the measurement process by counting atoms rather than decaying particles. This is great for long-lived radioactivity. The signal limit is down by over a factor of 1000, and the background is even further down, background equivalent, but now our blank, though we have reduced it here to 15 µg, it is the limiting factor.

So, the little illustration I give here is to... it can be useful to ask, for the measurement processes you are interested in, which of these is the major limitation.

Now, we can go on to the second. I am going to leave the blank now. I just wanted to raise this issue of perspective, what is limiting.

I was hoping someone would ask about calibration and that aspect. This is something that I created out of, I think, a correct model for taking into account the calibration error. If you have a linear calibration curve, Y = B + AX, with B the blank intercept, A the calibration factor, and X the concentration, you invert that and calculate concentration: the estimated concentration is X = (Y - B)/A, your total signal minus the estimated blank, divided by the estimated calibration factor.

If the estimated blank and the estimated calibration factor are, in fact, random, like measured each time, then you get a non-linear relationship, and you get, at zero concentration, a distribution that has long tails, peaked in the center, kurtotic. At higher concentrations, it becomes asymmetric.

In fact, this is for something that I published showing how the critical value is linked to the alpha and beta errors. The simple issue is that if the calibration uncertainty is non-trivial compared to the other uncertainties, you get some interesting, non-normal distributions.
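Written out, the inversion being described is the following, with hats marking the estimated blank and calibration factor (notation assumed here):

```latex
Y = B + A\,X
\qquad\Longrightarrow\qquad
\hat{X} = \frac{Y - \hat{B}}{\hat{A}}
```

Because the estimated calibration factor sits in the denominator, the estimated concentration is a ratio of random quantities; that is why its distribution is heavy-tailed near zero concentration and skewed at higher concentrations whenever the calibration uncertainty is not negligible compared to the other error components.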

That is more than you wanted to hear, probably, but did I partly address your question?

MR. KAHN: Partly, but it's not more than I want. So...

DR. CURRIE: Touché. Maybe David could do it right.
MR. FOREMAN: I am Bill Foreman. I am with the U.S. Geological Survey's National Water Quality lab.

I wanted to reinforce some of the things that were said today. We have been using kind of a long-term continuous approach to determine method detection levels and then set reporting levels or the less-than value, and I wanted to mention a couple of things.

One is that with regards to blanks, I think those are really critical, and I think it is something we have missed, and it has not really been positioned very well within the EPA procedure, especially in terms of doing blank correction or blank offset in terms of setting the MDL. So, I think that is really important.

We have been collecting uncensored blank data for some of our inorganic methods and have been able to compute MDLs based on that and can compare them and see how they compare with spike-based MDLs. In some cases, they agree well. In other cases, they don't. Usually, the indication is maybe the blank values are better than the spike, and that is because of this change in standard deviation with concentration that Ken and others have tried to address.

In terms of setting the reporting level or the quantitation limit, as Paul referred to it, I think it is very important that we consider not using that as a censoring limit, because the MDL represents... I mean... I am sorry... we have to consider if this quantitation limit is a censoring limit, we really are going to censor out a lot of values if the actual concentration is at the quantitation limit or level.

So, I think it is important to keep that in mind, and that is what we have tried to do.

Another feature, though, is that in all the plots I saw today and all the representations, the distribution is always centered on 100 percent mean or median recovery. Typically, especially for organic methods but even some inorganic methods, the distribution is not centered at 100 percent.

So, you have to keep that in mind as well when you set your reporting level. It is not just two times the critical level. It may be four times if you have a 50 percent recovery. So, it is important to keep that in mind.
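One way to read that arithmetic, assuming the reporting level is meant to describe the true, recovery-corrected concentration: a factor-of-two margin above the critical level, divided by a 50 percent mean recovery, becomes a factor of four,

```latex
\frac{2\,L_C}{0.5} = 4\,L_C .
```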

With regard to outlier testing, we now have a lot of experience doing this kind of... this LTMDL type of determination. We get a lot of data, because we are collecting more than just seven replicates, and we are doing it over time, so, occasionally, you get outliers. So, we are using a more robust approach like Paul mentioned, using the pseudo sigma and replacing the standard deviation in the determination, and we think that helps, at least from a practical standpoint, for a large-scale implementation like we are doing.

But a bigger issue for us, I think, is the fact that for multi-analyte kinds of methods... and for most of the methods that we run, they are not EPA methods; they are USGS methods, so we don't already know what the MDL is, because EPA or somebody else hasn't previously determined it for us. That is, likewise, the case for our new methods that we are developing, and in some cases, we may have 40 to 80 elements or analytes or more that we are dealing with.
It is very difficult to try to prepare a solution with all your analytes in there at the right concentration that is 1 to 5 times the MDL. It is not very practical, and that is what we have found out.

So, this year, we have started evaluating the approach that was given... the multiconcentration approach that was advocated by Gibbons and co-workers, including David Coleman, and that Ken has now suggested with the Rocke and Lorenzato method. So, we are going to see how that works for us, but from a practical standpoint, this can be very difficult.

I had a handout, a single-page handout, of a poster that I presented at the SETAC meeting and also at the Pittsburgh conference as a talk recently. I don't know if there are any left. They were on the table out there, but if anybody would like a copy of what we are doing... and we also have a web site on some of the stuff we are doing... please see me.

Thank you.

MR. TELLIARD: Thank you. I always call it the Rocky and Bullwinkle to keep it in the right perspective. That way, I don't forget.

MR. BLYE: Steven Blye with Environmental Standards.

I apologize if I am asking a question that might have been addressed in the opening remarks, but I followed the outline, and I was late.

My first question is, as a result of the settlement agreement with the Agency to address method detection limits and ML, could you provide a kind of status update on where the Agency is heading with that and what the specific actions are?

Then, the second question would be, I am not an attorney, but with what I understand, it seems like the 40 CFR Part 136 MDL definition is used unilaterally through all of the environmental regulatory programs. As part of that settlement, has there been any thought for the Office of Water to kind of get together with the other agencies and establish perhaps one uniform definition for MDL and quant limits?

MR. TELLIARD: Your first one is, there is a schedule. I have a copy for you which lays out... the final date is February 2004. I can check my retirement date, but I...

The other thing about... we originally started, again, something simple. We were going to amend the MDL procedure to make it a little more robust in the sense of confirming, since it was a calculated number, and Jim Lichtenburg and Jim Longbottom both agreed that we ought to do it. This is like ten years ago, and, you know, we'd get around to it type of thing. So, we were going to make this simple little change, and here we are.

So, the question is yeah, there are going to be some definitions changed. We are consolidating it in the sense that Water will do this. The Office of Solid Waste has kind of an MDL procedure, but it is not a requirement. It is kind of in and out. But, yeah, that is part of this whole process that we are going through, for which this is the starting point for some of it.

We have done some laboratory studies on low-level concentrations. Henry and Chuck have been working on that data, doing some data crunching. Budget is an issue in this particular program, and we are kind of at the mercy of when Chuck and Henry have time to do some of this, since we don't have a lot of contract funds. But that is the game plan, is to come up with kind of an approach.

Now, it may not be the only approach. This is not a one shoe fits all thing. I mean, we have heard here today there are a lot of different ways of skinning this animal, and we may choose to say skin it any way you want, you know, as long as you can document x, y, and z. And as I say, the Agency isn't here to reinvent the wheel if there is a wheel out there.

Lloyd pointed out also that, you know, there are a lot of things where we have been kind of parochial, looking in ASTM, AOAC, and Standard Methods. Maybe we need to look to the international scene more than we have to see how that all fits together with our package. So, yeah, that is the whole bite.

MR. BRITTON: Thanks, Bill. I have a question of David.

Recognizing the tendency of data reporters to use anything called the detection limit as a reporting limit, is it appropriate to use the IDE that way?

MR. COLEMAN: In terms of censoring below that value, I completely disagree with censoring, for the reasons expressed earlier, in that you are throwing away information, even at that very low level, information that you can use.

Now, if you have to make a binary decision and, as Steve, I believe, brought up earlier, no detectable amount and it has to be yes or no, then I think the IDE is applicable.

MR. BRITTON: So, you would condone use of the IDE as a reporting limit under some circumstances?

MR. COLEMAN: Not if it is used for censoring, but if it is used for making a binary decision.

MR. BRITTON: Okay. I also wanted to caution that the IDE requires an interlaboratory study, and we have to recognize that a lot of these interlaboratory studies are done once and once only. Generally, that is when the method is brand new. In other words, there is no such thing as experienced users of the method.

So, the interlaboratory study is done with a body of the best available, you know, users, and they are all inexperienced, and that tends to be, because of the expense of interlaboratory studies, the last time that you have the data required to estimate IDE and IQE.
The problem is... and I can show an example of this in the 600 series methods when EPA did interlaboratory studies on those methods when they were brand new in the late '70s, and the data were highly variable. However, later on, when people really became experienced with those methods, as exampled by the data that they were reporting in performance evaluation studies, for example, you saw considerably less variability in the data from exactly the same methods, and I think it was primarily based on experience: the users really knew how to apply the method and were generating outlier data less frequently.

This would be a really serious thing if, you know, the IQE and IDE were estimated from data on a new method using inexperienced users and really wouldn't be applicable, perhaps, a year or two or three down the line.

MR. COLEMAN: Just to comment briefly on that, I think that is a serious issue, not just for IDE and IQE but for just about any method. It is true that some other detection and quantitation limits make what I consider compromises so that they are easy to recalculate. You only need to get seven samples.

In the view of the ASTM task group, that compromise wasn't worth it, but that was a judgment. Precision and bias studies have had a place, and I think we would all recognize that. Unfortunately, they are expensive and time-consuming. So, it is a legitimate issue.

MR. FLORES: Ray Flores in Region VI again.

This is a question for David about the IQE approach. It seems to me like an MDL determination using all the calibration points at some concentration about what you would normally expect for an MDL, it is kind of an attractive approach. I am a lab auditor, so I am always searching for ways to manage the error that the lab has got to live with.

So, my question is, when you are doing this IQE determination within a lab, intralab, are these concentrations of these contaminants going to go through the entire analytical process? I am asking, are they going to go through the digestion, whatever prep steps are involved?

MR. COLEMAN: Generally, they would go through all the steps, but I don't think a general answer will necessarily answer your question. So, I suggest we talk privately and more specifically later.

MR. FLORES: Okay. Another question for Ken. I understand what you were doing. It is kind of an iterative approach to determining an MDL, and I don't mean to put you on the spot, but can you compare that to the F test in 40 CFR 136 Appendix B?

MR. OSBORN: No.

MR. FLORES: Oh, okay.

MR. COLEMAN: Let me just mention... I don't want to take too much time, but partly for clarity, the Rocke and Lorenzato model, a version of that, is embedded within IQE in particular. It is also allowed within IDE where it is called... in IQE anyway, it is called the hybrid model, and the F test is sort of the lowest level test one would perform to see if you have equivalent, not identical but equivalent, standard deviations.

The Rocke and Lorenzato hybrid approach is an explicit modeling approach where you say we assume that the standard deviation changes with concentration, and actually built into the IQE is a formal test to see if that is true, and if it is true, you go ahead and model.

MR. KOORSE: Let me quickly remind Paul that, again, as an attorney, I have an obligation to my clients to make sure that until a test method has gone through the rigor and the educational process and the learning curve is at the appropriate point where better controls are in place, I have got to make sure that people are not penalized unfairly. So, you have to strike a balance.

Yes, you may need to reconsider. It may need to be a separate approach for determining performance based on actual performance information, and perhaps the DMRQA program could be set up to establish some kind of a confirmation process, but we have to strike a balance whereby the due process of folks is not exceeded by the need to promote the Agency's program. Both interests are important, and we have got to strike a balance.

MR. BRITTON: Yeah, I would say that it is a chicken or the egg process, you know. I mean, you have to decide whether you are willing to wait a couple of years until the method has really been used and there is a body of people experienced with its use so you can really see what it is capable of, or whether, you know, you have to do these studies on brand new methods and then, really, nobody is willing to put the effort into refining those estimates down the road.

MR. TELLIARD: Well, the other thing, too, is if we can ever get performance-based in one form or another, we would allow people to upgrade these methods as they do get better. I think a good example is when we started out with dioxin, you know, 10 parts per quadrillion. Everybody died. You know, people are going oh, that is our highest calibration point. Now, we are down in the... which is wonderful, you know, and great.

Makes your life miserable, but for the analytical chemist, you know, everything is going to get better, and that is the purpose of it. For the regulatory community, they are probably a little skittish, but that is our job.

But we agree, there has got to be a way to... and, you know, like the old story was in ASTM and some of the other consensus groups where, routinely, methods come up for review to be either codified or dumped, and I think the Agency should try to do that. We have tried it twice now, and the world comes to an end, because somebody calls in and says my daddy has been using that method for 48 years, and now you want to take it away, boy?

We have never been able to revoke a method, so it's... but a lot of these should come up for reconsideration and, again, to see if the detection is better, the instrumentation is improving, and I think if we can get some sort of... like was covered in streamlining, where you are allowed to make these changes, it would cover that issue.
It is now 20 after 12:00. These people will be here until Thursday of next week unless you want to call it a day.

I would like to thank the speakers this morning. If the good Lord is willing and the river doesn't rise, same time next year, generally the same location. Don't know if it is this hotel or the one across the water, but in the Norfolk-Portsmouth-Hampton Roads area.

I would like to thank Jan Kourmadas for doing all the minor things like getting the hotel and a bed for you to sleep in and all that other good stuff, Marion Kelly from my staff for working on the arrangements, Ellyn Hagy for helping to find out that the papers got where they were supposed to, Dale Rushneck who had the small task of putting together the program, the speakers who have come... I thought we were really fortunate this year... our co-sponsors, Battelle-Duxbury or wherever they may be seen, and you, the attendees. Thank you for coming. I really appreciate it, and, hopefully, we'll see you here next year for the 25th. Thank you for your attention.

(WHEREUPON, the Meeting was adjourned at 12:22 p.m.)

CAPTION

The meeting in the matter, on the date, and at the time and place set out on the title page hereof. It was requested that the meeting be taken by the reporter and that the same be reduced to typewritten form.
