
Interim Guidance for Judging the Scientific Quality of Intentional Exposure Human Studies of Pesticide Toxicity

March 17, 2006

U.S. Environmental Protection Agency
Office of Pesticide Programs
Health Effects Division

PREFACE
This guidance provides a general framework for reviewing a human toxicity study for scientific merit and for deciding how human data fit into the overall toxicity profile of a pesticide using a weight of evidence approach. The Office of Pesticide Programs (OPP) has not previously issued guidance for the review of human toxicity studies. This initial effort should therefore be considered interim guidance for evaluating the scientific merit of already completed human toxicity studies considered for use in assessing potential pesticide health risks; it will be updated as experience is gained in evaluating human studies. The generic guidance provided here is based on informal guidance developed by the Office of Pesticide Programs' Health Effects Division in 2001 to evaluate human toxicity studies submitted in support of cholinesterase-inhibiting pesticides.

Supplements will be developed as needed on other types of intentional exposure human studies, such as dermal absorption studies, biomonitoring studies, etc.

This document does not attempt to address ethical issues related to the conduct of tests of pesticide toxicity on human subjects. Coinciding with the implementation (April 7, 2006) of the Rule on Protections for Subjects in Human Research, EPA has developed separate guidance on the review of the ethical conduct of intentional human exposure studies. In accordance with the Human Studies Rule, whenever EPA conducts a scientific review of any such study, the Agency will also perform an ethical review of the study, and both the scientific and ethical reviews will be available to risk assessors and risk managers before EPA decides to rely on such a study in any risk assessment or risk management action.
1. INTRODUCTION

The goal of this document is to provide a general framework for Health Effects Division scientists in EPA's Office of Pesticide Programs (OPP) to judge the scientific adequacy and usefulness of intentional human dosing toxicity studies in assessing human health hazards and risks.
The guidance is divided into two parts. First, it discusses key factors the reviewing toxicologist should consider in evaluating and documenting individual human studies. Second, it describes a weight of evidence process that scientists should use in judging the reliability of the body of human data and in deciding how the human data fit into the overall body of data relating to the toxicity of a pesticide.

The guidance is designed to promote clarity, consistency, and transparency in the conclusions reached regarding the adequacy and usefulness of human studies in hazard and dose-response assessment.

The topics described below should not be regarded as a checklist of requisite conditions, but rather as factors a scientist should think about when evaluating a study (as described in Section 2). Furthermore, the weight of evidence approach described in Section 3 is not a decision tree; rather, it highlights information to be considered and questions to be addressed, and it requires that the logic behind any judgements made be transparent.

The guidance is intended to be an analytical tool for evaluating human toxicity studies; it does not directly address how to conduct a human study.

2. EVALUATION OF HUMAN STUDIES

The quality and reliability of human studies of pesticides should be evaluated according to sound scientific principles. The reviewer should pay particular attention to the following factors in reaching conclusions regarding the utility of such studies.

2.1. Study Design

The reviewer should use sound scientific judgement to evaluate the study design, which should have been described clearly and in sufficient detail to permit effective review and evaluation.[1] Seeking advice from other toxicologists, or from experts in other disciplines such as statistics or epidemiology, is advisable if the reviewer lacks knowledge in certain technical areas. Specific parameters to be considered in the study design include the following:

[1] The report may contain statements indicating whether the study was conducted under Good Clinical Practice (GCP) regulations (i.e., 21 CFR 314.126, 21 CFR 50, or 40 CFR 26). Some older human studies might not have such statements; however, this does not necessarily mean that those studies did not follow GCP. Such human toxicity studies may still have value for hazard assessment.

2.1.1. Test Material

The test material for human pesticide toxicity studies is usually the technical grade of the active ingredient or, occasionally, a formulated product. If the test material is the technical grade of the active ingredient, it is important to report not only the concentration of the active ingredient, but also the identity and concentration of impurities associated with the manufacture of the technical material, and any other ingredient chemicals, such as diluents or stabilizers, added to the technical material. If the test material is a formulation, identification of all inert ingredients is important for determining whether components other than the active ingredient may contribute to or influence the reported effects (this information may not be reported in the study, but will be available in EPA's files under the Registration Number). Such information is also important for judging whether the active ingredient in a formulation was administered at a concentration sufficient to produce a response. The reviewer should also consider potential effects of a vehicle, if one is used in the study. Information should be included on whether potential vehicle-induced effects were evaluated with a vehicle control group. If not, the reviewer should seek out information in the published literature from other human studies and animal testing on the toxicity of the vehicle.

2.1.2. Subject Selection
Studies should use healthy adult volunteers. Although it is preferable to collect data on both sexes, this may not be an issue if other information indicates that males and females are likely to respond similarly to the test material. The reviewer should note whether the study identified persons whose work history involved exposure to other agents that may confound study results (for example, pesticide applicators or agricultural workers). The reviewer should indicate whether the investigators evaluated the health status of subjects by physical examinations, clinical tests, and documented health histories to identify individuals with medically significant conditions which might compromise the study. The reviewer should also indicate whether researchers attempted to identify confounding factors such as alcohol consumption, smoking, pharmaceutical drug use, or other exposures. Lastly, the reviewer should note any information gathered during the health screen of each subject that may explain unusual results.

2.1.3. Treatment and Control Groups

Of critical importance to the study is the inclusion of appropriate control groups to ensure that observed responses are associated with treatment and not due to biases or factors unrelated to treatment. The reviewer should indicate how subjects were assigned to treatment groups, in other words, whether assignment was random or based on attempts to match for confounding variables (e.g., age, sex, smoking). The reviewer should indicate the types and adequacy of control groups used in the study, including whether concurrent untreated or placebo controls were used, and whether there were baseline measurements of subjects prior to initiation of dosing. The reviewer should note whether the investigators, subjects, or both were unaware (blind) as to the nature of the treatment that subjects received.

2.1.4. Number of Subjects per Group

The number of subjects in each group should be large enough to allow adequate statistical power for a reliable analysis of the response. The number of subjects in a human study is usually based on considerations of the magnitude of the treatment effect and the number of dose groups evaluated, and is sometimes determined by the number of subjects available. There are no strict criteria for the number of subjects, so this is determined on a case-by-case basis. Studies with few subjects may still be of value in a weight of evidence analysis in identifying doses that produce an effect. Consultation with a statistician familiar with the design of human studies is recommended.
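The power consideration above can be sketched with a simple normal-approximation sample-size calculation for a paired (pretreatment versus treatment) design. This is an illustrative sketch only, not part of the guidance; the function name, effect size, and standard deviation below are hypothetical, and a statistician should be consulted for any real design.

```python
import math
from statistics import NormalDist

def paired_sample_size(effect, sd_diff, alpha=0.05, power=0.80):
    """Approximate subjects needed for a paired pre/post design to detect
    a mean change `effect`, given standard deviation `sd_diff` of the
    within-subject differences (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # desired statistical power
    return math.ceil(((z_alpha + z_beta) * sd_diff / effect) ** 2)

# Hypothetical example: detect a 10% mean cholinesterase depression when
# within-subject differences have a standard deviation of 15%.
n = paired_sample_size(effect=10.0, sd_diff=15.0)
```

Under these assumed numbers the sketch yields 18 subjects; halving the variability or doubling the detectable effect sharply reduces the required group size, which is one reason a small study may still be informative for large effects.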

2.1.5. Dose Selection

The reviewer should briefly describe the rationale and approach used for dose selection. Depending on the objective of the study, there are various potentially valid dosing designs. For example, some studies may employ multiple doses while others use a single dose; a multiple rising-dose design provides an assessment of effects, if any, at each dose level before proceeding to the next higher dose level.

2.1.6. Dose Preparation and Administration

The reviewer should determine the adequacy of administration of the test material. How was the concentration of the active material determined? For multiple-dose studies, were dosing solutions prepared each day? A description of how and when the test material was administered should be provided; for example, when were doses administered in relation to food consumption? The reviewer should evaluate whether the method of administration is appropriate to ensure absorption of the test material and should determine whether the administered doses were correctly calculated. The duration of dosing may range from a single dose on one day to multiple doses over many weeks, depending on the objective of the study. Although researchers often employ multiple dose levels to establish a point of departure, such as a No Observed Adverse Effect Level (NOAEL), or to establish differences in interspecies sensitivity, a single-dose human study in conjunction with animal studies may be useful in establishing a point of departure.
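Checking that administered doses were correctly calculated is simple arithmetic, sketched below. The function name and the example numbers are hypothetical, chosen only to show the check.

```python
def dose_mg_per_kg(concentration_mg_per_ml, volume_ml, body_weight_kg):
    """Achieved dose in mg/kg from the dosing-solution concentration,
    the volume administered, and the subject's body weight."""
    return concentration_mg_per_ml * volume_ml / body_weight_kg

# Hypothetical check: a 2 mg/mL solution, 35 mL administered to a 70 kg
# subject, should deliver a nominal 1.0 mg/kg dose.
achieved = dose_mg_per_kg(2.0, 35.0, 70.0)
```

Comparing the achieved value against the nominal dose reported in the study is a quick way to catch calculation errors in the dosing records.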

2.1.7.1. Sample Handling

Given the potential for the instability or reversibility of some toxic effects (for example, acetylcholinesterase inhibition by carbamates), the reviewer needs to pay close attention to the biological sample collection, processing, storage, and assay procedures described in the study report. Use of improper or sub-optimal procedures can result in data inaccuracies or increase data variability. Proper handling of biological samples in human studies will vary case by case, and may be informed by studies on laboratory animals, as well as by other sources such as the published literature on a specific toxicity.

2.1.8. Clinical Observations and Symptoms

Clinical signs or symptoms are important parameters to be considered in assessing hazard. Based, in part, on responses observed in animal dosing studies, the human subjects are likely to have been observed for clinical signs following administration of the test material. Clinical data should be tabulated by the number of individuals per dose group exhibiting signs of a particular type, as well as the intensity, time of onset, and duration of the responses, in order to gain an understanding of a dose-response relationship, if possible. Baseline data for clinical observations should be reported, particularly for the more quantifiable measures (e.g., heart rate, blood pressure). Adverse clinical signs noted during the exposure period can be used as evidence for dose-related toxicity and can play an important role in determining a point of departure.

2.1.9. Clinical Chemistry

Clinical chemistry tests, including hematology and urinalysis, can serve to monitor the safety and well-being of test subjects, as well as provide valuable information regarding systemic effects of the test material. The magnitude of clinical chemistry parameters should be compared with pretreatment and control values for detection of biologically relevant shifts. When using pretreatment control data, it must be kept in mind that normal values in hematologic and clinical chemistry measurements depend on the specific methods used to generate the data. Therefore, only values produced by identical methods in the same period of time from the same laboratory are valid in such comparisons. Sometimes measurements of the test substance in urine or feces are included in the study report. This information is helpful in understanding the absorption and elimination of the test material and can serve as confirmation of the administration of test material to each subject.

2.2. Evaluation of Data
The findings from a human study must be interpreted in the context of the study conduct and quality (as discussed above, e.g., appropriate subject selection, number of subjects, dosing regimen, sample collection/preparation, methodology), as well as their statistical and biological significance.

2.2.1. Statistical Analysis

All statistical methods used in the study should be identified, described, and referenced. Careful scrutiny of the author's statistical methods and results, and, when necessary, re-analysis of the author's data, should be done to corroborate reported results. Treatment and pretreatment values for each subject should be compared statistically, along with comparisons against the concurrent control and/or placebo group and against the treatment groups as a whole. The pattern of individual responses (pretreatment versus treatment results) should be addressed, as well as the mean responses of subjects in the treatment group. It is important to keep in mind that a statistically significant response may or may not be biologically important; for example, a response may be statistically significant yet be within the range of normal human response. When outliers are removed for statistical reasons, the reasons for removing them should be specified. The selection of a significance level is a judgement choice based on consideration of the potential for false positive and false negative outcomes. Typical significance levels are 5% (p ≤ 0.05) or 1% (p ≤ 0.01). In the case of non-positive results, it is important that a statistical analysis be performed to determine the power of the test to detect a positive response, considering such factors as the number of subjects, variability in baseline values, and characteristics of the response being analyzed. For example, data on cholinesterase inhibition may be analyzed for the power of the study to support a negative finding, whereas data on signs and symptoms are less amenable to standard tests for power and should be subjected to alternative statistical analyses depending on the nature of the response being analyzed or questions raised concerning the data. Reviewers are encouraged to seek advice from statisticians regarding the appropriateness and accuracy of the statistical analysis.
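The pretreatment-versus-treatment comparison described above can be illustrated with a minimal paired analysis using only the Python standard library. The data here are invented for illustration; a real review would use the study's reported values and an appropriate statistical package.

```python
from statistics import mean, stdev

# Hypothetical cholinesterase activity (% of laboratory normal) for six
# subjects, measured before and after treatment.
pretreatment = [100, 95, 102, 98, 105, 100]
treatment    = [ 88, 85,  90, 86,  95,  88]

# Per-subject differences: the paired design controls for between-subject
# variability by comparing each subject with their own baseline.
diffs = [pre - post for pre, post in zip(pretreatment, treatment)]

n = len(diffs)
mean_change = mean(diffs)
se = stdev(diffs) / n ** 0.5          # standard error of the mean change
t_statistic = mean_change / se        # paired t statistic, df = n - 1

# Two-sided critical value for df = 5 at alpha = 0.05 is about 2.571, so
# a t statistic above that suggests a treatment-related shift.
significant = t_statistic > 2.571
```

Note that this sketch addresses only statistical significance; as the guidance stresses, whether an 11-point mean depression is biologically important is a separate judgement.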

2.2.2. Presentation and Discussion of Results

In reaching a conclusion as to whether the data allow one to identify endpoints, points of departure, and other important information (e.g., time of peak effects, duration and reversibility of effect, time to reach steady state) for use in risk assessment, the reviewer must look at the biological and statistical significance of the response and at the pattern of effects. The reviewer should prepare a series of tables, plots, or charts of the responses and provide a discussion of the data that addresses the following questions: What effects (if any) were observed, and at what dose(s)? Was there a dose-response? What were the magnitude or severity of the response, the onset and duration of the effects, and the incidence (i.e., how many subjects responded)? Such comparisons of the patterns or trends in the response data aid in the interpretation of the biological relevance, significance, and overall usefulness of the study. In tabulating the response data, the reviewer should include both the individual data for each subject and the mean values. Furthermore, if data are available from multiple studies or from acute and subchronic dosing phases of a study, these multiple data sets should be tabulated in a comparative manner in order to evaluate consistency and response trends across different studies and across different durations of exposure.
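The tabulation described above can be sketched as a small grouping routine; the record format and dose values here are hypothetical, chosen only to show individual values reported alongside group means.

```python
from collections import defaultdict
from statistics import mean

def tabulate_by_dose(records):
    """Group individual responses by dose so each subject's value can be
    reported alongside the group mean (records: (subject, dose, response))."""
    groups = defaultdict(list)
    for subject, dose, response in records:
        groups[dose].append((subject, response))
    return {
        dose: {"individual": rows, "mean": round(mean(r for _, r in rows), 2)}
        for dose, rows in sorted(groups.items())
    }

# Hypothetical acute-phase data: response by subject at two dose levels.
records = [
    ("S1", 0.0, 100), ("S2", 0.0, 98),
    ("S3", 0.5, 90),  ("S4", 0.5, 92),
]
table = tabulate_by_dose(records)
```

Running the same routine over a subchronic data set and placing the two tables side by side is one way to make the cross-duration comparison the guidance asks for.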

Responses may or may not be statistically significant. Even if an effect is not statistically significant, if the pattern of onset, duration, and nature of effects appears consistent with what is generally understood about the toxic properties of the test material and its class, then this provides greater confidence that the effects are treatment related. For example, if unusual time-course data are encountered, the reviewer should determine whether the results can be reconciled with what is understood about the metabolism and toxicology of the test agent. In studies where no effects are found, the reviewer needs to note the quality of the study and how well the study design allowed for the potential to detect an effect (e.g., the power of the study), and therefore how much confidence one can place in the lack of an observed effect.

2.2.3. Summary

In preparing the Data Evaluation Record, the reviewer should provide a summary that highlights the key conclusions and recommendations regarding whether a point of departure or other information about the biological and kinetic characteristics of the test material can be identified from the study. The summary should also clearly lay out the rationale supporting the conclusions, as well as the confidence in the identified value(s), by describing the strengths and weaknesses of the study.

It should be recognized that the results of human studies will span a broad continuum that cannot be easily captured in guidance, ranging from a seriously flawed study that cannot be used to a robust study that provides a wealth of information that could be incorporated in the risk assessment. Evidence that falls within this continuum may still have some utility in the risk assessment and will require case-by-case judgements.

3. WEIGHT OF EVIDENCE ANALYSIS

The intent of a weight of evidence analysis with respect to the human toxicity of pesticides is to determine how the human data should be integrated into the overall toxicity database of a pesticide, including information on kinetics and mode of action as well as toxicity studies. The analysis is a two-step process. First, the body of human data should be evaluated for scientific quality and weighed in the context of scientific reliability and relevance; then the strengths and weaknesses of the key data identified from all available human and animal studies should be considered together in an integrative manner. This is followed by a narrative description which includes the rationale used for selecting the critical study or studies used to establish reference values. The weight of evidence narrative should clearly convey to an individual unfamiliar with the toxicology of a chemical the logic and decision path followed in reaching decisions on the selection of critical studies for use in establishing a point of departure, such as a NOAEL or Benchmark Dose, in evaluating interspecies differences in toxicological response, or on other toxicological information for use in risk assessments.

Previous sections of this document highlighted factors that should be addressed in preparing Data Evaluation Records for studies on a chemical of interest. The weight of evidence analysis should be prepared by the risk assessment team prior to the selection of key studies for assessment of health risks, and should be included in the risk assessment or made available as a stand-alone document to the Health Effects Division's Risk Assessment Review Committee or any other decision-making body that needs to know the basis for the points of departure chosen for risk assessment.

The weight of evidence analysis involves judgments about the conduct of studies and the quality of the dose-response data. In examining the body of evidence, the consistency of responses among different human studies (if several studies are available) and the consistency between human and experimental animal results should be considered, as well as the impact of data deficiencies and limitations. One should not look only at toxicity studies but should consider all available information, including epidemiology studies, incident data, in vitro data, and comparative animal versus human data on absorption, metabolism, distribution, and excretion. For example, if animal studies indicate that there is a sex difference in response to the test material, the gender issue should be addressed in the review of the human study. As another example, if there is an indication that humans are more or less sensitive than animals, this species difference should be addressed.

In conducting a weight of evidence analysis, the factors to consider in integrating human data into the totality of the toxicity database will vary from study to study. Data that either support or reduce confidence in the use of human studies should be presented. Factors that increase confidence in the use of human data to establish a point of departure, for example, might include (1) consistent observations in multiple independent human studies; (2) results consistent with animal data; (3) demonstration of a dose-response relationship; and (4) clinical signs correlated with a sensitive subclinical response. Examples of factors that might diminish the usefulness of a study and reduce the confidence for establishing a reference dose include (1) poor study design; (2) inconsistent results within the study and among different human studies; and (3) inconsistent results among human and animal studies. The reviewer should identify the strengths and weaknesses of the human studies. No single factor is determinative, and all information should be judged in context. For example, the availability of dose-response information from a human study may enhance its usefulness, but the absence of such data would not necessarily prevent consideration of the study in establishing a point of departure. Data from human studies, although presumed to be more relevant to the assessment of human hazard and risk, might be deemed unreliable because of limitations in study design or results (e.g., small number of subjects, high variability in the data, uncertainties regarding the methodology). In such a case, data from animal studies would be used exclusively for selection of points of departure.[2] Conversely, if a human study is robust (e.g., adequate number of subjects, sound methodologies used, and a consistent pattern in responses), the data may support use of the human data even if the results are inconsistent with those from animal testing.

[2] Special mention should be made of the situation where humans appear to be more sensitive than laboratory animals to a pesticide. In this case, even weak human data should not be disregarded if they suggest that use of the animal data would not be health protective.
Even though a human study might not be considered sufficiently robust to serve as the critical study for assessment of hazard or risk, the study might be useful for a more limited purpose. Depending on the quality of the human study, the human data may be used to corroborate the endpoints selected from key animal studies, or to identify adjustments that need to be made to uncertainty factors, particularly the interspecies uncertainty factor. In some cases, the human data may add credence to data from animal studies indicating that a steady state is attained after a defined period of dosing. The weight of evidence narrative should present all information from a human study that may supplement information from animal studies, even if the human study is not selected as the basis for a point of departure for estimating risk.

4. CONCLUSIONS

This document outlines the factors that should be considered in evaluating an intentional exposure study performed with human subjects. In assessing the scientific acceptability of a human study, the reviewer must answer several questions:

•	Is the study sound: was it well-designed and properly conducted, and did it answer the research question conclusively?

•	If the study is sound, how does it fit with other available published or unpublished human studies and with available incident data? Are the human data consistent with the toxicology database in laboratory animals?

•	How should the human study be used in risk assessment? Is the study under consideration sufficiently robust that it defines a point of departure from which risk can be estimated? Can the human data be used as the basis for developing an alternative interspecies uncertainty factor?

The answers to these questions should be derived from a consideration of all available human and laboratory animal evidence, as described above, and should be presented in a narrative which clearly articulates the strengths and limitations of the human studies and indicates the appropriate role for the study in the hazard and dose-response assessment. A detailed rationale should be provided for all conclusions.
REFERENCES

Code of Federal Regulations (1997) Title 40, Vol. 1, Parts 1-49.

Code of Federal Regulations (1998) Title 21, Vol. 1, Parts 1-99.

Code of Federal Regulations (1998) Title 21, Vol. 5, Parts 300-499.

Food and Drug Administration (1993) Redbook II, Chapter VI: Human Studies, 176-190.

Wilson BW, Adele. (1997). Monitoring the pesticide-exposed worker. In: Occupational Medicine: State of the Art Reviews, Vol. 12, No. 2, April-June, pp. 3487-363. Hanley & Belfus, Inc., Philadelphia, PA.
