SUPPORTING STATEMENT FOR INFORMATION COLLECTION REQUEST FOR
WILLINGNESS TO PAY SURVEY FOR § 316(b) PHASE III COOLING WATER
INTAKE STRUCTURES: INSTRUMENT, PRE-TEST, AND IMPLEMENTATION
TABLE OF CONTENTS

PART A OF THE SUPPORTING STATEMENT
1. Identification of the Information Collection
   1(a) Title of the Information Collection
   1(b) Short Characterization (Abstract)
2. Need For and Use of the Collection
   2(a) Need/Authority for the Collection
   2(b) Practical Utility/Users of the Data
3. Non-duplication, Consultations, and Other Collection Criteria
   3(a) Non-duplication
   3(b) Public Notice Required Prior to ICR Submission to OMB
   3(c) Consultations
   3(d) Effects of Less Frequent Collection
   3(e) General Guidelines
   3(f) Confidentiality
   3(g) Sensitive Questions
4. The Respondents and the Information Requested
   4(a) Respondent
   4(b) Information Requested
      (I) Data items, including record keeping requirements
      (II) Respondent activities
5. The Information Collected - Agency Activities, Collection Methodology, and Information Management
   5(a) Agency Activities
   5(b) Collection Methodology and Information Management
   5(c) Small Entity Flexibility
   5(d) Collection Schedule
6. Estimating Respondent Burden and Cost of Collection
   6(a) Estimating Respondent Burden
   6(b) Estimating Respondent Costs
   6(c) Estimating Agency Burden and Costs
   6(d) Respondent Universe and Total Burden Costs
   6(e) Bottom Line Burden Hours and Costs
   6(f) Reasons for Change in Burden
   6(g) Burden Statement

PART B OF THE SUPPORTING STATEMENT
1. Survey Objectives, Key Variables, and Other Preliminaries
   1(a) Survey Objectives
   1(b) Key Variables
   1(c) Statistical Approach
   1(d) Feasibility
2. Survey Design
   2(a) Target Population and Coverage
   2(b) Sampling Design
      (I) Sampling Frames
      (II) Sample Sizes
      (III) Stratification Variables
      (IV) Sampling Method
      (V) Multi-Stage Sampling
   2(c) Precision Requirements
      (I) Precision Targets
      (II) Non-Sampling Errors
   2(d) Questionnaire Design
3. Pretests and Pilot Tests
4. Collection Methods and Follow-up
   4(a) Collection Methods
   4(b) Survey Response and Follow-up
5. Analyzing and Reporting Survey Results
   5(a) Data Preparation
   5(b) Analysis
   5(c) Reporting Results
REFERENCES

ATTACHMENTS
Attachment 1: Informational Slide Show
Attachment 2: Slide Show Script
Attachment 3: Full Text of Stated Preference Survey Component
Attachment 4: Federal Register Notice
Attachment 5: Description of Statistical Survey Design

LIST OF TABLES
Table A1: Reviewers
Table A2: Geographic Stratification Design
Table A3: Schedule for Survey Implementation
Table A4: Total Estimated Bottom Line Burden and Cost Summary
Table B1: Confidence Intervals for Binary Survey Variables, by Region
Table B2: Illustration of an External Scope Test
PART A OF THE SUPPORTING STATEMENT

1. Identification of the Information Collection

1(a) Title of the Information Collection

Willingness to Pay Survey for Section 316(b) Phase III Cooling Water Intake Structures: Instrument, Pre-test, and Implementation

1(b) Short Characterization (Abstract)

The U.S. Environmental Protection Agency (EPA) is in the process of developing new regulations to provide national performance standards for controlling impacts from cooling water intake structures (CWIS) at Phase III facilities under section 316(b) of the Clean Water Act (CWA). The Phase III regulation under CWA section 316(b) applies to facilities that withdraw water for cooling purposes from rivers, streams, lakes, reservoirs, estuaries, oceans, or other waters of the United States, and that are either existing electrical generators with cooling water intake structures designed to withdraw 50 million gallons of water per day (MGD) or less, or existing manufacturing and industrial facilities. The regulation also establishes section 316(b) requirements for new offshore oil and gas extraction facilities. EPA has previously published final section 316(b) regulations that address new facilities (Phase I) on December 18, 2001 (66 FR 65256) and existing large power producers (Phase II) on July 9, 2004 (69 FR 41576). See 40 CFR Part 125, Subparts I and J, respectively.

As required under Executive Order 12866, EPA is conducting economic impact and cost-benefit analyses for the section 316(b) regulation for Phase III facilities. Cost-benefit analysis should include a comprehensive estimate of total social benefits, including both use and non-use values. "Non-use values, like use values, have their basis in the theory of individual preferences and the measurement of welfare changes. According to theory, use and non-use values are additive" (Freeman, 2003). It is generally accepted in the economic literature that non-use values may be substantial in some cases. Additionally, when small per capita non-use values are held by a substantial fraction of the population, they can be very large in the aggregate. Therefore, failure to recognize such values may lead to improper inferences regarding policy benefits. As stated by Freeman (2003, p. 138), "[i]f non-use values are large, ignoring them in natural resource policymaking could lead to serious errors and resource misallocations." With regard to non-use values, the literature also advises against the use of ad hoc, expert assumptions regarding the magnitude of these values, particularly assumptions that non-use values are trivial. Because values are inherently subjective, Bateman et al. (2002, p. 75) emphasize that "it would be wrong for experts to assume that one resource is a perfectly good substitute for another" [and hence that non-use values are trivial or small]. As further noted by Bateman et al. (2002, p. 75), "there are, therefore, no easy rules for determining at the outset" whether non-use values are likely to be significant or non-significant. Based on this clear guidance, EPA believes that empirical analysis should be used to determine the magnitude of non-use values for the section 316(b) Phase III rulemaking.

Consideration of potential non-use values is particularly important in the case of the section 316(b) regulation because nearly all (96 percent) of impingement and entrainment losses at CWIS consist of either forage species or non-landed recreational and commercial species that do not have direct uses. Although individuals do not use these resources directly, they may nevertheless be affected by changes in resource status or quality, such that they would be willing to pay to maintain these resources. Although economic theory clearly allows for the possibility of significant non-use values in this case, it does not shed light on whether such values are large or small in the aggregate. Whether the associated nonmarket values, including non-use values, are significant is an empirical question.

Many public comments on the proposed section 316(b) regulation for Phase II facilities and the Phase II Notice of Data Availability (NODA) suggested that a properly designed and conducted stated preference, or contingent valuation (CV), survey would be the most appropriate and acceptable method to estimate the total (including use and non-use) benefits of the 316(b) rule (U.S. EPA, 2004a). Stated preference surveys use carefully designed questions to elicit respondents' willingness to pay (WTP) for particular ecological improvements, based on their responses to either discrete choice or open-ended questions regarding hypothetical resource improvements or programs. Such improvements may include increased protection of aquatic habitats or species with particular attributes. Stated preference survey methodology is the generally accepted means to estimate total (including use and non-use) resource values. Moreover, stated preference survey methods "are likely to offer the only feasible approaches to estimating non-use values" (Freeman, 2003, p. 154). Although the peer-reviewed literature is highly skeptical of stated preference surveys that attempt to decompose use and non-use values, there is general conceptual acceptance that the total values of non-users can be used to approximate non-use values for all individuals (Cummings and Harrison 1995; Johnston et al. 2003a).

To assess the public policy significance of the ecological gains from the section 316(b) regulation for Phase III facilities, EPA requests approval from the Office of Management and Budget to conduct a stated preference study to measure the total benefits of reduced fish losses at CWIS due to the regulation. The study would focus on a broad range of aquatic species, including forage fish and a variety of fish species harvested by commercial and recreational fishermen. The results of the survey would be used to estimate the total benefits of the proposed 316(b) regulation, but would also be of interest to economists and policy makers studying changes in fish populations and aquatic habitat improvements, since past studies have focused only on a few select fish species such as salmon and striped bass.

Survey subjects will be randomly selected from a representative national panel of respondents maintained by Knowledge Networks, an online survey company. Subjects will be asked to complete a web-based questionnaire. Participation in the survey is voluntary. EPA intends to administer the survey to 5,000 persons, including 500 persons who will take part in an initial survey pilot, 3,900 persons who will participate in the main survey effort, and 600 persons who will take part in nonresponse follow-up interviews. EPA chose a web-based survey format because it is the most cost-effective method available to conduct a large, statistically based survey covering a wide geographic region in a relatively short time frame. As highlighted by Dillman (2000), internet surveys are increasingly used within a wide range of research contexts, based on the many potential advantages of web-based survey methods. Among these are the ability to provide significant amounts of information in ways impossible in telephone or mail survey instruments, combined with an ability to reach a broader, nationwide audience more cost-effectively than is typically possible using in-person survey methods. Moreover, in recent years clear guidance has emerged regarding the appropriate design and use of such survey approaches (Dillman 2000). Web-based surveys are increasingly used in stated preference research, with recent examples by Hoehn et al. (2004), Li et al. (2005), and Berrens et al. (2004).

To avoid potential sampling biases associated with the web-based survey methodology, the survey sample will be stratified by geographic region and, within each region, by demographic variables including age, education, Hispanic ethnicity, race, gender, and household income. The number of respondents in each demographic stratification group will be inversely proportional to the historical response rates of individuals in that group for similar types of surveys. By oversampling groups that tend to have lower response and consistency rates, the demographic characteristics of respondents who provide valid completed surveys will more closely mirror U.S. Census Bureau demographic benchmarks. Thus, this design will reduce error due to non-coverage of non-telephone households in the original Knowledge Networks recruitment process, and will also reduce bias due to nonresponse and other non-sampling errors. This bias may occur due to panel attrition at several stages, including recruitment, maintenance, and final survey implementation. EPA will undertake two sets of activities to reduce the potential for nonresponse bias: (1) use of 600 nonresponse follow-up data collection interviews to increase the overall response rate; and (2) statistical correction for unobserved heterogeneity that might be present in the data from respondents interviewed for this study.
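The inverse-proportional allocation described above can be sketched as follows. This is a minimal illustration, not EPA's implementation; the stratum names, population shares, and response rates are hypothetical.

```python
# Allocate survey invitations across demographic strata so that the expected
# number of completed surveys per stratum matches its share of the target
# population. Inviting (needed completes / historical response rate) people
# in each stratum oversamples groups with historically low response rates.

def allocate_invitations(target_completes, pop_shares, response_rates):
    """pop_shares and response_rates are dicts keyed by stratum name."""
    invitations = {}
    for group, share in pop_shares.items():
        completes_needed = target_completes * share
        invitations[group] = round(completes_needed / response_rates[group])
    return invitations

# Hypothetical example: two education strata with different historical rates.
plan = allocate_invitations(
    target_completes=1000,
    pop_shares={"no_college": 0.5, "college": 0.5},
    response_rates={"no_college": 0.25, "college": 0.50},
)
print(plan)  # {'no_college': 2000, 'college': 1000}
```

Each stratum is expected to yield 500 completes, so the pool of valid responses, rather than the pool of invitations, mirrors the population benchmarks.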

To assist in the development of this stated preference survey, EPA has previously requested and obtained approval from the Office of Management and Budget to conduct a series of twelve focus groups with a total of 96 respondents. These focus groups were conducted following standard, accepted practices in the stated preference literature, as outlined by Mitchell and Carson (1989), Desvousges et al. (1984), Desvousges and Smith (1988), and Johnston et al. (1995). Some of the focus groups incorporated individual cognitive interviews, as detailed by Kaplowicz et al. (2004). The focus groups and cognitive interviews allowed EPA to better understand the public's perceptions and attitudes concerning fishery resources, to frame and define survey questions, to pretest draft survey questions, to test for and reduce potential biases that may be associated with stated preference methodology, and to ensure that both researchers and respondents have similar interpretations of survey language and scenarios. In particular, cognitive interviews allowed in-depth exploration of the cognitive processes used by respondents to answer survey questions, without the potential for interpersonal dynamics to sway respondents' comments (Kaplowicz et al. 2004). Detailed documentation for all focus groups, which were conducted by EPA under ICR #2155.01, can be found in the docket for EPA ICR #2155.02 (Besedin et al., 2005).

The total national burden estimate for all components of the survey is 3,383 hours. The burden estimate is based on administration of survey questionnaires to 5,000 respondents. EPA assumes an average burden of 41 minutes per respondent, based on burden estimates of 40 minutes per respondent for completion of the pretest and main survey, and 45 minutes per respondent for completion of nonresponse follow-up interviews. Given an average wage rate of $17.71, the total respondent cost is $59,919.
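The totals above follow directly from the respondent counts and per-respondent times stated in this section (4,400 pretest and main-survey respondents, 600 follow-up respondents); a quick arithmetic check:

```python
# Burden: 4,400 pretest/main-survey respondents at 40 minutes each,
# plus 600 nonresponse follow-up respondents at 45 minutes each.
hours = (4400 * 40 + 600 * 45) / 60
print(round(hours))   # 3383  (total burden hours)

# Cost: total burden hours valued at the average wage rate of $17.71/hour.
cost = hours * 17.71
print(round(cost))    # 59919 (total respondent cost, in dollars)
```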

2. Need For and Use of the Collection

2(a) Need/Authority for the Collection
The project is being undertaken pursuant to Section 104 of the Clean Water Act, which deals with research. Section 104 of the Clean Water Act authorizes and directs the EPA Administrator to conduct research into a number of subject areas related to water quality, water pollution, and water pollution prevention and abatement. This section also authorizes the EPA Administrator to conduct research into methods of analyzing the costs and benefits of programs carried out under the Clean Water Act.

This research project is exploring how public values for fishery resources (including use and non-use values) are affected by fish losses from impingement and entrainment at CWIS. Understanding total public values, including non-use values for fishery resources lost to impingement and entrainment, is necessary to determine the full range of benefits associated with reductions in impingement and entrainment losses. Because non-use values may be substantial in some cases, failure to recognize such values may lead to improper inferences regarding policy benefits (Freeman 2003). Although the findings from this study will primarily be used by EPA to improve estimates of the economic benefits of the section 316(b) regulation for Phase III facilities, as required under Executive Order 12866, these findings are also expected to be useful to state and local regulatory agencies dealing with fishery resources and fish habitat, and to the research community.

2(b) Practical Utility/Users of the Data
EPA plans to use the results of the survey to improve estimates of the economic benefits of the section 316(b) regulation for Phase III facilities. Specifically, the Agency will use the survey results to estimate total values for preventing losses of fish through impingement and entrainment at CWIS, following standard practices outlined in the literature (Freeman 2003; Bennett and Blamey 2001; Louviere et al. 2000; U.S. EPA 2000). State and local agencies that deal with fishery resources may also use the results of this survey to assist in activities such as permit writing. Furthermore, the academic community, particularly researchers studying changes in fish populations and aquatic habitat improvements, may find the results of this study to be of substantial academic interest.

3. Non-duplication, Consultations, and Other Collection Criteria

3(a) Non-duplication
EPA has not identified any previous studies that would allow estimation of the social benefits of nationwide changes in populations of fish species (including forage, recreational, and commercial species) affected by the proposed 316(b) regulation. Moreover, many studies that provide values for arguably similar resources reflect research conducted over a decade ago, using methods that have since been significantly improved in the literature. Although many previous studies have estimated the value of changes in catch rates or populations of selected (often high-value) recreational and commercial species, or of changes in water quality that affect fish, no studies have specifically valued changes in forage fish populations. For example, Olsen et al. (1991) conducted a survey of Pacific Northwest residents, including both anglers and non-anglers, to determine their willingness to pay (WTP) for a doubling of the size of the Columbia River Basin salmon and steelhead runs. EPA's proposed survey approach differs from this study and others like it (such as Cameron and Huppert, 1989) in that it would include respondents from various geographic regions throughout the United States and would provide values that encompass a variety of forage, recreational, and commercial species, instead of valuing a few high-value recreational species in one specific geographical area.

Some previous studies do provide values for changes in water quality that affect all fish species, including forage fish. However, the resulting values for water quality improvements from these studies also include values for changes in other factors such as water clarity, pollution levels, and species diversity. For example, Magat et al. (2000) asked North Carolina and Colorado residents to value an increase in water quality under which fresh water in their states would become safer for swimming, fish could be eaten safely, and the number and diversity of plants and aquatic organisms would increase. EPA's proposed survey approach differs from this study and other similar studies (such as Mitchell and Carson, 1981; or Whitehead and Groothuis, 1992) in that it will value only changes in fish populations, not changes in general water quality that affect a variety of ecological services provided by a water body. Hence, it will represent the only study available in the literature allowing the evaluation of economic values associated with changes in large-scale fish populations (without changes in other environmental quality characteristics), where these changes involve significant increases or decreases in forage fish.

3(b) Public Notice Required Prior to ICR Submission to OMB
In accordance with the Paperwork Reduction Act (44 U.S.C. 3501 et seq.), EPA published a notice in the Federal Register on June 9, 2005, announcing that the survey questionnaire and sampling methodology were available for comment. A copy of the second Federal Register notice is attached at the end of this document (EPA ICR #2155.02). EPA received a number of comments on the proposed information collection, which are summarized in the following paragraphs.

Several commenters stated that the survey would provide inaccurate estimates of WTP to prevent fish losses. One commenter argued that the presentation materials prepared by EPA contain inaccurate statements and invalid comparisons that will lead respondents to believe that the benefits of increased regulation of cooling water intakes would be substantially greater than is actually the case. A different commenter argued the opposite: that the proposed contingent valuation survey is biased against protecting ecosystems and will drastically undervalue non-use benefits. EPA agrees that certain details of the survey and presentation materials provided in the supporting documentation for the June 9 Federal Register notice required revision. Since issuing the June 9 Federal Register notice, EPA has conducted four focus groups and two cognitive interview sessions to improve the Agency's understanding of the public's perceptions and attitudes concerning fishery resources, to pretest draft survey questions and presentation materials, to test for and reduce potential biases that may be associated with stated preference methodology, and to ensure that both researchers and respondents have similar interpretations of survey language and scenarios. As a result of this extensive pre-testing, a number of revisions have been made to the survey that have significantly improved its reliability and reduced its potential for bias.

One commenter argued that the survey does not sufficiently emphasize the uncertainty in the estimates of the fish losses that would be prevented by the proposed policies. EPA points out that debriefing sessions during focus groups and cognitive interviews showed that respondents clearly understood that the ecological changes described in the survey were uncertain. Furthermore, respondents were comfortable making decisions in the presence of this uncertainty. Additionally, EPA has modified the survey and slide show to emphasize the uncertainty of the expected environmental changes. For example, the following statement has been added to the survey: "You will be shown different policy options, with different effects on fish. This is because scientists are still working to determine what the exact effect on fish will be, so it is important to know how you would react to a wide range of possible outcomes. Common sense indicates that preventing the loss of fish eggs and young fish will mean more adult fish in future years, but at this point there is still significant uncertainty regarding the exact size of these future effects." EPA also added debriefing questions to the survey instrument that are designed to identify respondents whose responses are based on an incorrect interpretation of the environmental changes described in the survey, including the uncertainty of the expected changes.

The same commenter also argued that the benefits of regulating cooling water intake structures for recreationally and commercially significant species are already reflected in EPA's Regulatory Impact Analysis (RIA) for the proposed rule for Phase III facilities. EPA disagrees with the commenter's statement. As stated by Freeman, "Non-use values, like use values, have their basis in the theory of individual preferences and the measurement of welfare changes. According to theory, use and non-use values are additive" (Freeman, 2003). EPA notes that the non-use benefits of the proposed regulation are a completely separate category of value from the commercial and recreational welfare effects of the proposed rule. Furthermore, results from the focus groups and cognitive interviews indicated that some individuals hold substantial non-use values for protecting fish species with direct uses, in addition to any use values that they have for those species.

The commenter also argued that the survey is unnecessary because in independently conducted verbal protocol interviews with 15 individuals, respondents did not demonstrate meaningful values for marginal changes in forage fish populations. EPA notes that this finding contradicts the results of the EPA focus groups and cognitive interviews conducted under ICR #2155.01. These focus groups and interviews provided strong evidence that many individuals value forage fish and are willing to pay to prevent losses of all fish species, including forage fish. Participants in the EPA focus groups and interviews cited a variety of motivations for preventing fish losses, including the satisfaction of knowing that the fish exist, the desire to bequeath healthy fish populations to future generations, and the desire to protect the functioning of aquatic ecosystems. Furthermore, EPA notes that the economic literature advises against the use of ad hoc, expert assumptions regarding the magnitude of non-use values, particularly assumptions that non-use values are trivial. Because values are inherently subjective, Bateman et al. (2002, p. 75) emphasize that "it would be wrong for experts to assume that one resource is a perfectly good substitute for another" [and hence that non-use values are trivial or small]. As further noted by Bateman et al. (2002, p. 75), "there are, therefore, no easy rules for determining at the outset" whether non-use values are likely to be significant or non-significant. Based on this clear guidance, EPA believes that empirical analysis should be used to determine the magnitude of non-use values in the 316(b) regulation for Phase III facilities.

The commenter argued that because the survey materials are based on unreliable population and fish loss data, the survey results will also be unreliable. In response, EPA first notes that the survey materials are based on the best biological and engineering data available. Second, even if these data are uncertain, the stated preference survey will still produce results that are meaningful in the context of the scenarios presented to respondents. Different versions of the survey will show a range of different baseline and resource improvement levels, where these levels are chosen to almost certainly bound actual levels. Different respondents will be asked to make choices over all possible policy scenarios in which impingement and entrainment reductions range from 0% (no policy) to 98%. Given that there will almost certainly be some biological uncertainty regarding the specifics of the actual baselines and improvements, the resulting valuation estimates will allow flexibility in estimating WTP for a wide range of different circumstances.

Finally, the commenter expressed the opinion that the survey should be peer reviewed by an independent panel of experts. EPA agrees with the commenter's opinion, and the Agency plans to convene two peer-review panels. The first panel will review the results of the focus groups, the instrument and the planned survey sampling design, and the proposed willingness to pay estimation methodology before the survey is fielded. The second peer review panel will review the entire survey process, including EPA's final estimated results for the 316(b) Phase III rulemaking, after the survey is completed.

Another commenter argued that neither the Clean Water Act nor section 316(b) requires or allows EPA to use monetized cost-benefit analysis to determine regulatory standards for cooling water intake structures. EPA is conducting this survey because of Executive Order 12866, "Regulatory Planning and Review," which requires Federal agencies to conduct economic impact and cost-benefit analyses for all major rules. Furthermore, cost-benefit analysis requires a comprehensive estimate of total social benefits, including both use and non-use values. The current information collection would provide valuable information that includes the non-use benefits of the 316(b) regulation for Phase III facilities,[1] thus enabling the Agency to perform cost-benefit analysis for the regulation and to satisfy the requirements of Executive Order 12866.

In addition to the comments received in response to the June 9 Federal Register notice, a number of the comments received by EPA in response to the previous focus group information collection request (EPA ICR #2155.01) are relevant to the current ICR. Most of these comments addressed the more general topic of resource valuation and stated preference surveys. Many of the claims made in the submitted comments represent empirical questions that were appropriately addressed within the survey design process. Some commenters argued that non-use benefits in the Phase III policy context are likely to be trivial. EPA points out that (1) these claims are unsubstantiated by the empirical literature, and (2) the magnitude of non-use benefits in the Phase III context is an empirical issue that can only be addressed with appropriate research methods. Even if non-use values for fish are small on a per-capita basis, they may be large in the aggregate.

Commenters also questioned the reliability of the empirical estimates of the total benefits of the Phase III regulation, and thus the practical value of the information provided by the focus groups and stated preference survey. However, EPA believes, following guidance in the literature, that stated preference methods are capable of measuring total values (including use and non-use) of fish affected by impingement and entrainment, if surveys and approaches are appropriately designed. EPA's own guidance explicitly permits the use of stated preference methods for use and non-use value estimation (US EPA 2000), following prior findings of the NOAA Blue Ribbon Panel on Contingent Valuation (Arrow et al. 1993). EPA recognizes the difficulties faced in the design of appropriate stated preference survey instruments [focus groups], but does not consider these difficulties to be insurmountable. As stated by Freeman (2003, p. 154), stated preference survey methods "are likely to offer the only feasible approaches to estimating non-use values." Furthermore, although the peer-reviewed literature is highly skeptical of stated preference surveys that attempt to decompose use and non-use values, there is general conceptual acceptance that the total values of non-users can be used to approximate non-use values for all individuals (Cummings and Harrison 1995; Johnston et al. 2003a).

[1] Note that, by definition, total values for non-users are non-use values.
Finally, one commenter noted that if EPA is to consider non-use value effects on the benefit side of the regulation, then the Agency should also consider non-market and non-use value effects on the cost side of the regulation, such as values for effects on mining, the efficiency of energy production, and the use of depletable resources. While EPA acknowledges the theoretical validity of this point, EPA knows of no precedents to guide the design of a survey on non-use costs, nor does EPA expect economic impacts as significant as closures that would warrant further consideration of non-use costs. The same commenter also argued that experimental economics may offer a superior method for studying non-use values. EPA points out that there is no evidence in the literature suggesting that experimental economics methods provide a suitable alternative for the measurement of non-use values, or a substitute for the qualitative evidence that may be gathered from appropriately moderated focus group sessions.

For a more detailed discussion of the issues raised by commenters on the Federal Register notice published June 9, 2005, and of issues raised by commenters on the previous focus group ICR, see EPA's response to public comments on the Federal Register notice published on November 23, 2004 (69 FR 68140).

3(c) Consultations

During development of the survey through focus groups already approved by OMB, EPA's Office of Water has engaged in consultations with focus group and cognitive interview participants, as well as internal Agency reviewers. Following standard guidance from the economic literature (e.g., Arrow et al. 1993; Mitchell and Carson 1989; Johnston et al. 1995; Desvousges et al. 1984; Desvousges and Smith 1988; Kaplowicz et al. 2004; Opaluch et al. 1993), participants in focus groups and cognitive interviews primarily provided input on the survey questionnaire design and content, including the introductory materials, the survey format, the sequence and content of questions, and relationships between questionnaire design and respondents' interpretation and cognitive processing of survey questions. Cognitive interviews (Kaplowicz et al. 2004) have also been conducted during focus group sessions to assess respondents' cognitive processing of stated preference survey questions, and to ensure that questions are answered in a way that corresponds to neoclassical requirements for value estimation.

Internal Agency reviewers provided and will continue to provide input on the content and format of the focus groups, questionnaire design issues, and issues related to survey sampling design and methodology. To date, this has included comments on the appropriateness of the survey design for measuring fish resources affected by cooling water intake structures (CWIS) for Phase III facilities, and on the appropriate correspondence between proposed survey approaches and the economic theory underlying benefit-cost analysis.

Table A1: Reviewers

  Reviewer                              Organization            Telephone Number
  Kelly Maguire                         U.S. EPA, NCEE          202-566-2273
  Nicole Owens                          U.S. EPA, NCEE          202-566-2302
  Natalie Simon                         U.S. EPA, NCEE          202-566-2347
  Christopher Miller                    U.S. Forestry Service   801-517-1034
  Michael Dennis (survey statistician)  Knowledge Networks      650-289-2160
Additionally, as part of the final survey, EPA plans to conduct a survey pilot on a sample of 500 individuals. After this pilot has been conducted, EPA will solicit additional input from Agency and academic reviewers on the preliminary performance of the survey questionnaire and sampling design. Based on the results of the survey pilot and comments received from these reviewers, EPA may make necessary changes to the questionnaire or sampling methodology before implementing the survey with 3,900 individuals in the main sample and 600 individuals in the non-response follow-up sample.

3(d) Effects of Less Frequent Collection

The survey is a one-time activity. Therefore, less frequent collection is not practical.

3(e) General Guidelines

The survey will not violate any of the general guidelines described in 5 CFR 1320.5 or in EPA's ICR handbook.
3(f) Confidentiality

All responses to the survey will be kept strictly anonymous. To ensure that the final survey sample includes a representative and diverse population of individuals, the survey questionnaire will elicit basic demographic information, such as age, household size, employment status, and income. However, the survey questionnaire will not ask respondents for personal identifying information, such as names, phone numbers, or addresses. Prior to taking the survey, respondents will be informed that their responses will be held strictly anonymous.

The survey data will be made public only after it has been thoroughly vetted to ensure that all potentially identifying information has been removed.

3(g) Sensitive Questions

The survey questionnaire will not include any sensitive questions pertaining to private or personal information, such as sexual behavior or religious beliefs.

4. The Respondents and the Information Requested

4(a) Respondents

EPA will recruit survey participants through Knowledge Networks, a national consumer information firm. The firm will recruit participants by selecting a stratified random sample of 4,400 adults from a panel of individuals who have previously indicated their willingness to participate in internet-based surveys. The sample selection process will be designed to result in a set of participants who are representative of the adult U.S. population.

The individuals in the Knowledge Networks panel are recruited through random digit dialing. They represent the broad diversity and key demographic dimensions of the U.S. population. The panel tracks closely to the U.S. population on age, race, Hispanic ethnicity, geographical region, employment status, and other demographic elements. The differences that do exist are small and can be corrected statistically in the survey data (i.e., by nonresponse adjustments). Knowledge Networks provides all panel members with free internet access, and in return, panel members are asked to participate in internet surveys three to four times a month.
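The nonresponse adjustments mentioned above are typically implemented as post-stratification weights that align the sample's demographic shares with known population benchmarks. The sketch below illustrates the basic mechanics under hypothetical cell counts and population shares; it is not EPA's or Knowledge Networks' actual adjustment procedure, and the cell definitions are illustrative only.

```python
# Minimal post-stratification sketch: reweight respondents so that weighted
# demographic shares match known population benchmarks. All cell names and
# numbers are hypothetical illustrations, not EPA or Knowledge Networks data.

def poststratification_weights(sample_counts, population_shares):
    """Weight for each cell = population share / observed sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

# Hypothetical age-group cells: the sample slightly over-represents ages 30-59.
sample_counts = {"age_18_29": 900, "age_30_59": 2300, "age_60_plus": 700}
population_shares = {"age_18_29": 0.25, "age_30_59": 0.55, "age_60_plus": 0.20}

weights = poststratification_weights(sample_counts, population_shares)
for cell in sorted(weights):
    print(f"{cell}: weight = {weights[cell]:.3f}")
```

Applying these weights, each cell's weighted count equals its population share times the sample size, which is the sense in which small panel imbalances "can be corrected statistically."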
Table A2 shows the stratification design for the geographic regions covered by the sample for this survey.

Table A2: Geographic Stratification Design

  Region              States Included                                          Sample Size (a)   Percentage of Sample
  Northeast           CT, DC, DE, MA, MD, ME, NH, NJ, NY, RI, VT               665               17%
  Southeast           AL, AR, FL, GA, KY, LA, MS, NC, OK, SC, TN, TX, VA, WV   1,305             33%
  Great Lakes         IA, IL, IN, MI, MN, MO, OH, PA, WI                       984               25%
  Inland Region       CO, ID, KS, MT, NE, ND, SD, UT, WY                       206               5%
  Pacific Mountain    AZ, CA, NM, NV, OR, WA                                   739               19%
  Total, All Regions  All states except AK and HI                              3,900             100%

(a) Sample sizes presented in this table include only the 3,900 individuals asked to participate in the final survey. An additional 500 individuals will be included in a pre-test of the survey, and an additional 600 individuals will be included in the nonresponse follow-up interviews.
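The percentage column in Table A2 follows directly from each region's sample size relative to the 3,900-person main-sample target. A few lines of arithmetic reproduce those rounded shares (region counts are taken from the table itself):

```python
# Recompute each region's rounded share of the 3,900-person main sample
# using the sample sizes reported in Table A2.
MAIN_SAMPLE = 3_900

region_sample = {
    "Northeast": 665,
    "Southeast": 1_305,
    "Great Lakes": 984,
    "Inland Region": 206,
    "Pacific Mountain": 739,
}

percent = {r: round(100 * n / MAIN_SAMPLE) for r, n in region_sample.items()}
for region, pct in percent.items():
    print(f"{region}: {region_sample[region]} respondents ({pct}%)")
```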

In addition to the 4,400 respondents recruited for the web-based survey panel, an additional 600 individuals will be recruited for nonresponse follow-up interviews. These individuals will include randomly sampled respondents who chose not to participate at some stage of the panel recruitment and sampling process, i.e., respondents who were contacted but did not join the Knowledge Networks web panel, who joined the panel but did not connect their WebTVs, who connected but did not complete the first survey, who completed the first survey but did not complete any following surveys, and who completed following surveys but did not complete the 316(b) survey.

More detail on planned sampling methods and the statistical design of the survey can be found in Part B of this supporting statement.

4(b) Information Requested

(I) Data items, including record keeping requirements.

Individuals who agree to participate in the web-based survey will be provided with a link to online versions of the supporting materials and survey. The supporting materials, including the PowerPoint presentation and the slide show script, are provided in Attachments 1 and 2. The full text of the stated preference component of the survey is provided in Attachment 3. EPA has determined that all questions in the survey are necessary to achieve the goal of this information collection, i.e., to collect data that can be used to support an analysis of the total benefits of the 316(b) regulation.

The following is an outline of the major sections of the survey.

Concern for Policy Issues. The first survey question asks respondents to rank the general importance of a range of public policy issues, including both environmental and non-environmental issues. This question is designed to highlight the many important policy issues and substitute concerns to which respondents might choose to direct funds, rather than spending these funds to prevent fish losses. Such questions are commonly used in introductory sections of stated preference surveys (e.g., Mitchell and Carson 1984), in order to place respondents in a mindset in which they are cognizant of substitute goods and policy issues to which they might direct their scarce household budgets.

Testing for Appropriate Understanding of Introductory Materials. Questions 2 and 3 are designed to assess respondents' understanding of the material provided in the introductory slide show presentation. These questions will allow EPA to identify those respondents who do not, based on their answers to these questions, provide evidence that they have understood key areas of the introductory information. Question 2 addresses the relative importance of CWIS losses on fish populations, a topic addressed clearly by the introductory materials. Question 3 addresses the general purpose of the survey, an issue also clearly addressed by the introductory materials. Incorrect answers to these questions will indicate respondents with an inadequate understanding of the provided introductory materials, and will allow such responses to be either removed or isolated statistically in the final analysis. These questions are included following the guidance of the NOAA Blue Ribbon Panel (Arrow et al. 1993), to assess whether respondents understand the survey materials as presented.

Voting for Regulations to Prevent Fish Losses. Questions 4, 5, and 6 are "choice experiment" or "choice modeling" questions (Adamowicz et al. 1998; Bennett and Blamey 2001), and ask respondents to choose how they would vote if presented with two hypothetical regulatory options (and a third choice to reject both options). Each of the multi-attribute options is characterized by [a] changes in annual impingement and entrainment losses of fish and other organisms, [b] effects on long-term fish populations, [c] effects on recreational and commercial catch, and [d] an unavoidable cost of living increase for the respondent's household. Following standard choice experiment methods, respondents choose the regulatory option that they prefer. Respondents always have the option to vote for neither option, providing the status quo option necessary for appropriate welfare estimation (Adamowicz et al. 1998). Advantages of choice experiments, and the many examples of the use of such approaches in the literature, are discussed in later sections of this ICR. Following standard approaches (Opaluch et al. 1993, 1999; Johnston et al. 2002a, 2003b), respondents are instructed to answer each of the three choice questions independently, and not to "add up or compare programs across different pages." This instruction is included to avoid biases associated with sequence aggregation effects (Mitchell and Carson 1989). EPA will also randomize the order in which the policy option attributes are presented across respondents, allowing a statistical test of potential ordering effects.
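Randomizing attribute presentation order across respondents can be sketched in a few lines. The attribute labels below are illustrative paraphrases of the four attributes [a] through [d], not the survey's exact wording, and the seeding scheme is an assumption for the sketch:

```python
import random

# Illustrative labels for the four policy-option attributes [a]-[d].
ATTRIBUTES = [
    "impingement_entrainment_losses",
    "long_term_fish_populations",
    "recreational_commercial_catch",
    "household_cost_of_living",
]

def attribute_order_for(respondent_id, seed=316):
    """Return a reproducible random attribute ordering for one respondent.

    Deriving the RNG seed from the respondent ID keeps the ordering stable
    across the three choice questions that respondent sees, while varying
    across respondents, which is what permits a test for ordering effects.
    """
    rng = random.Random(seed * 1_000_003 + respondent_id)
    order = ATTRIBUTES[:]
    rng.shuffle(order)
    return order

# Different respondents generally see different orderings.
print(attribute_order_for(1))
print(attribute_order_for(2))
```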

Reasons for Voting. Questions 7 through 13 are follow-up questions to the prior voting questions. They ask respondents how certain they feel about their answers; whether their responses are based on the resource changes reflected in the survey; whether they are aware that the ecological outcomes of the 316(b) regulations are subject to considerable uncertainty; what factors affected their choices; and why they voted for or against the regulatory programs. Question 7 assesses the certainty that respondents feel in their choice experiment responses, following the methods of Champ et al. (2004) and others. Responses to such questions have been used in the literature to successfully control for hypothetical bias. Questions 8-10 are designed to identify respondents whose responses are based on an incorrect interpretation of the resource changes described in the survey, or respondents who ignored information presented in the survey and answered questions based on their general convictions and principles. Questions 9 and 11-13 are also designed to allow EPA to identify those respondents whose answers to questions 4-6 reflect symbolic or warm glow biases (Arrow et al. 1993; Mitchell and Carson 1989), and respondents who did not consider their budget constraint when answering the choice questions.

Affiliations and Recreational Experience. Questions 14 and 15 ask respondents whether they are members of environmental, sporting, or industry groups, and whether there are any reasons that they might be directly affected by regulations that affect fish abundance. Questions 16, 17, and 18 ask respondents whether they participate in specific types of water-related recreational activities, and whether those activities would be affected by changes in fish abundance.
Question 19 asks respondents how far they have to travel to reach the nearest water body that supports fish or shellfish. These questions can be used to identify non-users of the fishery resource, thereby allowing the estimation of non-user values for I&E reductions. Examples of this approach to the estimation of non-user values are provided by Johnston et al. (2005), Whitehead et al. (1995), Croke et al. (1986), Olsen et al. (1991), Cronin (1982), Whitehead and Groothuis (1992), and Mitchell and Carson (1981).

Demographics. Questions 20-29 ask respondents to provide basic demographic information, including age, gender, highest level of education, household size, household composition, location, employment status, and household income.

Comments. The survey offers respondents a chance to comment on the survey.

(II) Respondent activities

EPA expects individuals to engage in the following activities during their participation in the contingent valuation survey:

- Review the background information provided in the online supporting materials.
- Complete the online survey questionnaire.

A typical subject from the Knowledge Networks panel who is recruited for the survey will spend approximately 15 minutes reviewing the online background materials. The participant will then spend 25 minutes completing the online survey questionnaire. These estimates are derived from focus groups and cognitive interviews in which respondents completed both tasks.

The survey administration format will be designed to ensure that individuals who agree to participate must review all background information before completing the survey.

EPA expects individuals recruited for nonresponse follow-up interviews to engage in the following activities:

- Respond to telephone recruitment.
- Review the background information provided in the online supporting materials.
- Complete the online survey questionnaire.

A typical individual who is recruited for the nonresponse follow-up survey will spend approximately five minutes responding to telephone recruitment, 15 minutes reviewing the online background materials, and 25 minutes completing the online survey questionnaire. The survey administration format will be designed to ensure that individuals who agree to participate in the follow-up interviews must review all background information before completing the survey.

5. The Information Collected: Agency Activities, Collection Methodology, and Information Management

5(a) Agency Activities

The survey is being developed, conducted, and analyzed by Abt Associates Inc. and Knowledge Networks, and is funded by EPA contracts No. 68-C99-239 and No. 68-W-01-039, which provide funds for the purpose of analyzing the economic benefits of the proposed rule for Phase III facilities subject to the section 316(b) regulation. Agency activities associated with the survey consist of the following:

- Pre-test the survey on a sample of 500 individuals.
- Modify the survey questionnaire and sampling design, if necessary.
- Conduct the survey.
- Conduct nonresponse follow-up interviews.
- Analyze the survey results.

Although not covered under this ICR, EPA will use the survey results to estimate the social value of changes in impingement and entrainment losses and in populations of forage, recreational, and commercial species of fish, as part of the Agency's analysis of the benefits of the 316(b) rule for Phase III facilities.

5(b) Collection Methodology and Information Management

To pretest the survey questionnaire, EPA has conducted a series of 12 focus groups, including some using cognitive interview methodologies (covered under EPA ICR #2155.01). Focus groups have provided valuable feedback that allowed EPA to iteratively edit and refine the questionnaire, and to eliminate or improve imprecise, confusing, and redundant questions. Focus groups and cognitive interviews were conducted following standard approaches in the literature, as outlined by Desvousges et al. (1984), Desvousges and Smith (1988), Johnston et al. (1995), Schkade and Payne (1994), Kaplowicz et al. (2004), and Opaluch et al. (1993).

EPA plans to implement the proposed survey as an internet-based, choice experiment questionnaire. Data quality will be monitored by checking submitted surveys for completeness and consistency, and by asking respondents to assess the accuracy of their own responses. Data quality will also be assessed using responses to questions 2, 3, and 7 through 13, the questions designed to assess the presence or absence of potential response biases, as discussed above.

Responses to the survey will be stored in an electronic database. This database will be used to generate a data set for a regression model of total values for reductions in fish impingement and entrainment by section 316(b) Phase III facilities.
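Choice-experiment responses of this kind are commonly analyzed with a conditional (multinomial) logit model, in which each option's utility is a linear function of its attributes and willingness to pay is recovered from the ratio of an attribute coefficient to the cost coefficient. The sketch below illustrates those mechanics with assumed, purely illustrative coefficients and a hypothetical choice set; it is not EPA's estimation procedure, and actual coefficients would be estimated from the survey responses.

```python
import math

# Illustrative conditional logit: utility of each policy option is linear in
# its attributes. Coefficients are assumed for this sketch only.
BETA = {
    "ie_reduction_pct": 0.020,   # utility per % reduction in I&E losses
    "cost_per_year": -0.015,     # utility per dollar of cost-of-living increase
}

def utility(option):
    return sum(BETA[k] * option[k] for k in BETA)

def choice_probabilities(options):
    """Conditional logit probabilities over one choice set."""
    exp_u = [math.exp(utility(o)) for o in options]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# A hypothetical choice set: two regulatory options plus the status quo.
options = [
    {"ie_reduction_pct": 60, "cost_per_year": 40},   # Option A
    {"ie_reduction_pct": 90, "cost_per_year": 80},   # Option B
    {"ie_reduction_pct": 0,  "cost_per_year": 0},    # Neither (status quo)
]

probs = choice_probabilities(options)
print("Choice probabilities:", [round(p, 3) for p in probs])

# Marginal WTP for one attribute = -(attribute coefficient / cost coefficient).
wtp_per_point = -BETA["ie_reduction_pct"] / BETA["cost_per_year"]
print(f"Implied WTP per percentage point of I&E reduction: ${wtp_per_point:.2f}")
```

The status quo ("neither") alternative appears in the option set exactly as described above, which is what anchors the welfare estimates to a zero-cost baseline.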

To protect the confidentiality of survey respondents, the survey data will be released only after they have been thoroughly vetted to ensure that all potentially identifying information has been removed; the general public will not otherwise be given access to the survey data.

5(c) Small Entity Flexibility

This survey will be administered to individuals, not businesses. Thus, no small entities will be affected by this information collection.

5(d) Collection Schedule

The schedule for implementation of the survey will be as follows:

Table A3: Schedule for Survey Implementation

  Activity                                                        Duration of Each Activity   Total Elapsed Time Following OMB Approval
  Make questionnaire available to pilot survey subjects           7 days                      10 days
  Tabulate pilot survey responses; solicit input from Agency
  and academic reviewers on performance of survey; and make
  necessary changes to survey questionnaire or sampling
  methodology                                                     7 days                      17 days
  Concurrent: make questionnaire available to survey subjects
  (14 days); conduct nonresponse follow-up interviews (21 days)   21 days                     38 days
  Tabulate survey responses and analyze survey data               14 days                     52 days
6. Estimating Respondent Burden and Cost of Collection

6(a) Estimating Respondent Burden

Subjects who participate in the survey and follow-up interviews will expend time on several activities. Based on administration of the main survey to 4,400 respondents and follow-up interviews to 600 respondents, the national burden estimate for all respondents is 3,383 hours.

Based on pretests conducted in focus groups, EPA estimates that, on average, each respondent to the main survey will spend 15 minutes reviewing the provided background materials and 25 minutes completing the survey questionnaire. Thus, the average burden per respondent is 40 minutes (0.67 hours) for these 4,400 respondents.

EPA estimates that, on average, each follow-up interview respondent will spend 5 minutes responding to phone recruitment, as well as 15 minutes reviewing the provided background materials and 25 minutes completing the survey questionnaire. The average burden per respondent is 45 minutes (0.75 hours) for these 600 respondents.

These burden estimates reflect a one-time expenditure in a single year.
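The 3,383-hour national burden estimate follows directly from the per-respondent figures above; the short calculation below reproduces it:

```python
# Reproduce the national burden estimate from the per-respondent figures
# stated in section 6(a).
main_respondents = 4_400
followup_respondents = 600

main_burden_hours = 40 / 60        # 15 min materials + 25 min questionnaire
followup_burden_hours = 45 / 60    # the above plus 5 min telephone recruitment

total_hours = (main_respondents * main_burden_hours
               + followup_respondents * followup_burden_hours)
print(f"Total respondent burden: {total_hours:,.0f} hours")  # prints 3,383
```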

6(b) Estimating Respondent Costs

According to the Bureau of Labor Statistics, the average hourly wage for private sector workers in the United States is $17.71 (2004$). Assuming an average per-respondent burden of 0.67 hours for respondents to the pretest and main survey and 0.75 hours for respondents to the nonresponse follow-up interviews, and an average hourly wage of $17.71, the average cost per respondent is $11.81 and $13.28 for respondents to the pretest/main survey and nonresponse follow-up interviews, respectively. Since 4,400 individuals are expected to take the pretest/main survey and 600 individuals are expected to participate in the nonresponse follow-up interviews, the total average cost for all respondents is $11.98 per respondent. This cost does not take into account an incentive payment of approximately $5 to $10 per respondent for completing the survey, or the compensation that respondents who are panel members receive in the form of free internet access provided by Knowledge Networks. Internet access is provided to make it easy for respondents to complete surveys, and the incentive payment is provided to encourage respondents to complete this survey, which is somewhat longer than the surveys Knowledge Networks participants are typically asked to complete.

EPA does not anticipate any capital or operation and maintenance costs for respondents.
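The per-respondent and average cost figures in this subsection follow from the wage and burden assumptions stated above; the calculation below reproduces them before rounding:

```python
# Reproduce the respondent cost figures in section 6(b) from the BLS wage
# and the per-respondent burden estimates.
WAGE = 17.71                      # average private-sector hourly wage, 2004$

main_cost = (40 / 60) * WAGE      # pretest/main survey respondents
followup_cost = (45 / 60) * WAGE  # nonresponse follow-up respondents

total_cost = 4_400 * main_cost + 600 * followup_cost
average_cost = total_cost / 5_000

print(f"Cost per main-survey respondent:  ${main_cost:.2f}")      # $11.81
print(f"Cost per follow-up respondent:    ${followup_cost:.2f}")  # $13.28
print(f"Average cost over 5,000 respondents: ${average_cost:.2f}")  # $11.98
```

The resulting total, approximately $59,919, also matches the total respondent cost reported in section 6(d).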

6(c) Estimating Agency Burden and Costs

This project is being undertaken by Abt Associates Inc. and Knowledge Networks with funding of $287,114 from EPA contracts No. 68-C99-239 and No. 68-W-01-039, which provide funds for the purpose of analyzing the economic benefits of the proposed rule for Phase III facilities subject to the section 316(b) regulation. The total cost of this project includes labor costs, as well as costs related to the use of the Knowledge Networks panel. Abt Associates Inc. and Knowledge Networks staff are expected to spend 1,143 hours pre-testing the survey questionnaire and sampling methodology, conducting the survey, and tabulating and analyzing the survey results. The cost of this contractor time is $117,856, and the estimated direct panel cost is $169,258. In addition to the effort expended by EPA's contractors, EPA staff are expected to spend 650 hours managing and reviewing this project and contributing to the analysis. The cost of this EPA staff time is $19,614. Thus, total agency and contractor burden is 1,793 hours, with a total cost of $306,728.
6(d) Respondent Universe and Total Burden Costs

EPA expects the total cost for survey respondents to be $59,919 (2004$), based on a total burden estimate of 3,383 hours and an hourly wage of $17.71.

6(e) Bottom Line Burden Hours and Costs

The following table presents EPA's estimate of the total bottom line burden and costs of this information collection:

Table A4: Total Estimated Bottom Line Burden and Cost Summary

  Affected Individuals    Total Burden   Total Cost (2004$)
  Survey respondents      3,383 hours    $59,919
  EPA staff               650 hours      $19,614
  EPA's contractors (a)   1,143 hours    $287,114
  Total Burden and Cost   5,176 hours    $366,647

(a) The total cost listed for EPA's contractors includes an incentive fee of approximately $10 for each survey respondent. Respondents also receive additional compensation from Knowledge Networks in the form of free internet access.

6(f) Reasons for Change in Burden

The survey is a one-time data collection activity.

6(g) Burden Statement

EPA estimates that the public reporting and record keeping burden associated with the survey will average 0.68 hours per respondent (i.e., a total of 3,383 hours of burden divided among 5,000 survey respondents). Burden means the total time, effort, or financial resources expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal agency. This includes the time needed to review instructions; develop, acquire, install, and utilize technology and systems for the purposes of collecting, validating, and verifying information, processing and maintaining information, and disclosing and providing information; adjust the existing ways to comply with any previously applicable instructions and requirements; train personnel to be able to respond to a collection of information; search data sources; complete and review the collection of information; and transmit or otherwise disclose the information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB control numbers for EPA's regulations are listed in 40 CFR part 9 and 48 CFR chapter 15.

To comment on the Agency's need for this information, the accuracy of the provided burden estimates, or any suggested methods for minimizing respondent burden, including the use of automated collection techniques, EPA has established a public docket for this ICR under Docket ID No. OW-2005-0006, which is available for public viewing at the Water Docket in the EPA Docket Center (EPA/DC), EPA West, Room B102, 1301 Constitution Ave., NW, Washington, DC. The EPA Docket Center Public Reading Room is open from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number for the Reading Room is (202) 566-1744, and the telephone number for the Water Docket is (202) 566-2426. An electronic version of the public docket is available through EPA Dockets (EDOCKET) at http://www.epa.gov/edocket. Use EDOCKET to submit or view public comments, to access the index listing of the contents of the public docket, and to access those documents in the public docket that are available electronically. Once in the system, select "search," then key in the docket ID number identified above. Also, you can send comments to the Office of Information and Regulatory Affairs, Office of Management and Budget, 725 17th Street, NW, Washington, DC 20503, Attention: Desk Officer for EPA. Please include the EPA Docket ID number (OW-2005-0006) in any correspondence.
24
PART
B
OF
THE
SUPPORTING
STATEMENT
1.
Survey
Objectives,
Key
Variables,
and
Other
Preliminaries
1(
a)
Survey
Objectives
The
overall
goal
of
this
survey
is
to
explore
how
the
public
values
(
including
use
and
non­
use
values)
for
fish
and
aquatic
organisms
are
affected
by
impingement
and
entrainment
at
cooling
water
intake
structures
(
CWIS)
located
at
Phase
III
316(
b)
facilities,
as
reflected
in
individuals'
willingness
to
pay
for
programs
that
would
prevent
such
losses.
EPA
has
designed
the
survey
to
provide
data
to
support
the
following
specific
objectives:

•
To
estimate
the
use
and
non­
use
values
that
individuals
place
on
preventing
losses
of
fish
and
other
aquatic
organisms
caused
by
CWIS
at
Phase
III
316(
b)
facilities.

•
To
understand
how
much
individuals
value
preventing
fish
losses,
increasing
fish
populations,
and
increasing
commercial
and
recreational
catch
rates.

•
To
understand
how
such
values
depend
on
the
current
baseline
level
of
fish
populations
and
fish
losses,
the
scope
of
the
change
in
those
measures,
and
the
certainty
level
of
the
predictions.

•
To
understand
how
such
values
vary
with
respect
to
individuals'
economic
and
demographic
characteristics.

Understanding
total public values, including non-use values for fish resources lost to impingement and entrainment, is necessary
to
determine
the
full
range
of
benefits
associated
with
reductions
in
impingement
and
entrainment
losses
at
Phase
III
facilities.
Because
non­
use
values
may
be
substantial
in
some
cases,
failure
to
recognize
such
values
may
lead
to
improper
inferences
regarding
policy
benefits
(
Freeman
2003).

1(
b)
Key
Variables
The
key
elicitation
questions
in
the
survey
ask
respondents
whether
or
not
they
would
vote
for
policies
that
would
increase
their
cost
of
living,
in
exchange
for
specified
changes
in
[
a]

impingement
and
entrainment
losses
of
fish,
[
b]
long­
term
fish
populations,
and
[
c]
recreational
and
commercial
catch.
More
specifically,
the
choice
experiment
framework
allows
respondents
to
view
pairs
of
multi­
attribute
policies
associated
with
the
reduction
of
I&
E
losses.

Respondents
are
asked
to
choose
the
program
that
they
would
prefer,
or
to
choose
to
reject
both
policies.
This
follows
well­
established
choice
experiment
methodology
and
format
(
Adamowicz
et
al.
1998;
Louviere
et
al.
2000;
Bennett
and
Blamey
2001;
Bateman
et
al.
2002).
Important
variables
in
the
analysis
of
the
choice
questions
are
how
the
respondent
votes,
the
amount
of
the
cost
of
living
increase,
the
number
of
fish
losses
that
are
prevented,
the
change
in
long­
term
fish
populations,
and
the
change
in
commercial
and
recreational
catch.
Other
important
variables
include
whether
or
not
the
respondent
is
a
user
of
the
affected
aquatic
resources,
other
recreational
activities
in
which
the
respondent
participates,
household
income,
and
other
respondent
demographics.

1(
c)
Statistical
Approach
EPA
believes
that
a
statistical
survey
approach
is
necessary
to
ensure
that
inferences
and
analyses
based
on
the
resulting
data
are
as
statistically
unbiased
and
as
precise
as
practicable.
A
census
approach
is
impractical
because
contacting
all
households
in
the
U.
S.
would
require
an
enormous
expense.
On
the
other
hand,
an
anecdotal
approach
is
not
sufficiently
rigorous
to
provide
a
useful
estimate
of
the
total
value
of
fish.
Thus,
a
statistical
survey
is
the
most
reasonable
approach
to
satisfy
EPA's
analytic
needs
for
the
316(
b)
regulation
benefit
analysis.

To
support
implementation
of
the
survey,
EPA
has
retained
two
contractors.
Abt
Associates
Inc.
(
55
Wheeler
Street,
Cambridge,
MA
02138)
will
assist
in
questionnaire
design,

sampling
design,
and
analysis
of
the
survey
results.
Knowledge
Networks,
Inc.
(
1350
Willow
Road,
Suite
102,
Menlo
Park,
CA
94025)
will
assist
in
sampling
design,
recruitment
of
subjects,

and
implementation
of
the
survey.

1(
d)
Feasibility
The
survey
instrument
has
been
repeatedly
pre­
tested
during
a
series
of
twelve
focus
groups,
and
has
been
and
will
continue
to
be
subject
to
review by experts in academia and government,
so
EPA
does
not
anticipate
that
respondents
will
have
difficulty
interpreting
or
responding
to
any
of
the
survey
questions.
Additionally,
since
the
survey
will
be
administered
as
a
web­
based
survey,
it
will
be
easily
accessible
to
all
respondents.
Thus,
EPA
believes
that
respondents
will
not
face
any
obstacles
in
completing
the
survey,
and
that
the
survey
will
produce
useful
results.
EPA
has
dedicated
sufficient
funding
(
under
EPA
contracts
No.
68-C99-239 and 68-W-01-039)
to
design
and
implement
the
survey.
Given
the
timetable
outlined
in
Section
A.
5(
d)
of
this
document,
the
survey
results
will
be
available
for
timely
use
in
the
benefits
analysis
for
the
Phase
III
316(
b)
rule.

2.
Survey
Design
2(
a)
Target
Population
and
Coverage
The
target
population
for
this
survey
includes
individuals
from
continental
U.
S.

households
who
are
18
years
of
age
or
older.
The
sample
will
be
chosen
to
reflect
the
demographic
characteristics
of
the
general
U.
S.
population.

2(
b)
Sampling
Design
(
I)
Sampling
Frames
The
sampling
frame
for
this
survey
is
the
panel
of
individuals
previously
recruited
by
Knowledge
Networks
to
participate
in
online
surveys.
The
overall
sampling
frame
from
which
Knowledge
Networks
selects
these
individuals
is
the
set
of
all
individuals
in
continental
U.
S.

households
who
are
18
years
of
age
or
older
and
who
have
listed
phone
numbers.
Individuals
in
the
Knowledge
Networks
panel
are
recruited
using
a
list­
assisted
random
digit
dialing
telephone
methodology,
thus
providing
a
probability­
based
starting
sample
of
U.
S.
telephone
households.

The
panel
is
routinely
supplemented
to
account
for
panel
attrition.
Panel
sample
weights
are
adjusted
to
U.
S.
Census
demographic
benchmarks
to
reduce
error
due
to
non­
coverage
of
nontelephone
households
and
to
reduce
bias
due
to
nonresponse
and
other
non­
sampling
errors.
The
panel
closely
tracks
the
U.
S.
population
on
age,
race,
Hispanic
ethnicity,
geographical
region,

employment
status,
and
other
demographic
elements,
and
the
differences
that
do
exist
are
small
and
can
be
corrected
statistically
in
the
survey
data.
For
discussion
of
techniques
that
EPA
will
use
to
minimize
nonresponse
and
other
non­
sampling
errors
in
the
survey
sample,
refer
to
Section
2(
b)(
II),
below.
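The benchmark adjustment described above can be sketched as follows. This is a minimal illustration of post-stratification weighting, not the actual panel procedure; the demographic cells, benchmark shares, and cell counts are hypothetical.

```python
# Hypothetical sketch of post-stratification weighting: each respondent's
# weight is scaled so that weighted cell shares match Census benchmark shares.
census_share = {"18-34": 0.30, "35-54": 0.38, "55+": 0.32}   # benchmarks (invented)
sample_count = {"18-34": 900, "35-54": 1600, "55+": 1400}    # respondents per cell (invented)

n = sum(sample_count.values())
# weight for every respondent in a cell = benchmark share / observed share
weights = {cell: census_share[cell] / (sample_count[cell] / n)
           for cell in census_share}

# Weighted cell shares now reproduce the Census benchmarks:
for cell in census_share:
    weighted_share = weights[cell] * sample_count[cell] / n
    assert abs(weighted_share - census_share[cell]) < 1e-12
```

Under-represented cells (here "18-34", with weight above 1) are weighted up, which is how non-coverage and differential nonresponse are reduced statistically.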
(
II)
Sample
Sizes
The
intended
sample
size
for
the
survey
is
5,000
households,
including
500
households
for
the
pre­
test
and
600
households
for
the
non­
response
follow­
up
interviews.
This
sample
size
was
chosen
to
provide
statistically
robust
regression
results
while
minimizing
the
cost
and
burden
of
the
survey.
Given
this
sample
size,
the
level
of
precision
achieved
by
the
analysis
will
be
more
than
adequate
to
meet
the
analytic
needs
of
the
benefits
analysis
for
the
316(
b)

regulation.
For
further
discussion
of
the
level
of
precision
required
by
this
analysis,
see
Section
2(
c)(
I)
below.

(
III)
Stratification
Variables
The
survey
sample
will
be
selected
from
the
Knowledge
Networks
panel
using
a
stratified
selection
process.
The
panel
will
be
stratified
by
geographical
region,
and
within
each
region,
by
demographic
variables
including
age,
education,
Hispanic
ethnicity,
race,
gender,
and
household
income.
Two
factors
will
be
used
to
determine
the
number
of
respondents
included
in
the
sample
for
each
stratification
grouping.
First,
to
increase
the
demographic
similarities
between
members
of
the
sample
who
respond
to
the
survey
and
the
U.
S.
Census
population
benchmarks,
sample
sizes
for
each
demographic
stratification
group
will
take
into
account
each
group's
estimated
response
rate
to
Knowledge
Network's
initial
telephone
sample
of
U.
S.
phone
numbers.
Sample
sizes
will
be
larger
for
under­
represented
groups.
Second,
sample
sizes
for
each
demographic
stratification
group
will
factor
in
the
historic
response
rates
of
individuals
to
similar
types
of
surveys,
in
particular,
historic
tendencies
to
give
consistent
and
valid
answers
to
conjoint
and
valuation
questions.
These
estimates
for
providing
consistent/
valid
answers,

while
initially
based
on
past
experience,
will
be
refined
as
a
result
of
experience
gained
during
the
pretest.
Sample
sizes
will
be
larger
for
groups
that
tend
to
provide
inconsistent
or
invalid
answers.
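The allocation logic described in this subsection can be illustrated with a small sketch. The group names, target counts, and combined response/validity rates below are invented for illustration only.

```python
# Hypothetical sketch of stratum allocation: invitations per stratum are
# inflated by the expected rate of valid completed surveys, so that valid
# completes land near each group's target.
target_completes = {"group_a": 400, "group_b": 600}   # desired valid completes (invented)
expected_rate = {"group_a": 0.50, "group_b": 0.80}    # response x validity rate (invented)

invitations = {g: round(target_completes[g] / expected_rate[g])
               for g in target_completes}
# Lower-propensity group_a is oversampled (800 invitations) relative to
# group_b (750), even though group_b's target is larger.
```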

(
IV)
Sampling
Method
Using
the
stratification
design
discussed
above,
respondents
will
be
randomly
selected
from
the
Knowledge
Networks
panel
sampling
frame.
As
noted
in
the
previous
paragraph,
the
stratification
design
systematically
oversamples
households
from
certain
demographic
groups.

By
oversampling
groups
that
tend
to
have
lower
response
and
consistency
rates,
the
demographic
characteristics
of
respondents
who
provide
valid
completed
surveys
will
mirror
the
Census
demographic
benchmarks
more
closely.
Thus,
this
design
will
reduce
error
due
to
non­
coverage
of
non­
telephone
households
in
the
original
Knowledge
Networks
recruitment
process,
and
will
also
reduce
bias
due
to
nonresponse
and
other
non­
sampling
errors.

(
V)
Multi­
Stage
Sampling
Multi­
stage
sampling
will
not
be
necessary
for
this
survey.

2(
c)
Precision
Requirements
(
I)
Precision
Targets
Table
B1,
below,
shows
the
expected
level
of
precision
for
this
survey's
statistical
design,

at
both
regional
and
national
levels.
The
sample
design
effect
(
i.
e.,
the
ratio
of
the
design­
based
variance
estimate
divided
by
the
variance
estimate
that
would
have
been
obtained
from
a
simple
random
sample
of
the
same
size)
for
the
region­
level
estimates
is
1.3,
which
takes
into
account
unequal
probabilities
of
selection
arising
from
the
panel
recruitment
stage
and
probabilistic
selection
from
the
panel
for
the
survey.
The
national
sample
design
effect
is
also
assumed
to
be
1.3.
The
confidence
intervals
presented
in
Table
B1
take
into
account
the
reduction
in
the
effective
sample
size
resulting
from
these
sample
design
effects.

The
table
shows
90
percent
confidence
intervals
for
a
typical
binary
survey
variable
with
a
sample
mean
of
50
percent
(
for
example,
the
number
of
respondents
who
would
vote
for
a
particular
policy
option).
The
confidence
intervals
range
from
+/­
6.6
percentage
points
in
the
Inland
region,
which
has
a
sample
size
of
206,
to
+/­
2.7
percentage
points
in
the
Southeast
region,
which
has
a
sample
size
of
1,305.
For
the
3,900
respondents
in
the
total
national
main
survey
sample,
the
90
percent
confidence
interval
is
+/­
1.5
percentage
points.
Table
B1:
Confidence
Intervals
for
Binary
Survey
Variables,
by
Region
Region
Sample
Size
90
Percent
Confidence
Intervala
Northeast
665
+/­
3.7%

Southeast
1,305
+/­
2.7%

Great
Lakes
984
+/­
3.0%

Inland
206
+/­
6.6%

Pacific
Mountain
739
+/­
3.5%

All
Regions
3,900
+/­
1.5%

a
Represents
a
90
percent
confidence
interval
for
a
binary
survey
variable
(
for
example,
whether
or
not
respondents
would
visit
a
new
fishing
site)
with
a
sample
mean
of
0.5
(
or
50
percent).
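The intervals in Table B1 follow from the normal approximation for a binary proportion, with the variance inflated by the stated design effect of 1.3. A sketch of the calculation (exact rounding of the regional entries in the table may differ slightly):

```python
import math

# Half-width of a 90 percent confidence interval for a binary variable with
# sample mean p, inflated by the sample design effect.
def half_width(n, p=0.5, deff=1.3, z=1.645):  # z for a 90 percent interval
    # design effect multiplies the simple-random-sample variance p(1-p)/n
    return z * math.sqrt(deff * p * (1 - p) / n)

# National sample of 3,900: roughly +/- 1.5 percentage points
print(round(100 * half_width(3900), 1))  # -> 1.5
```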

(
II)
Non­
Sampling
Errors
One
issue
that
may
be
encountered
in
stated
preference
surveys
is
the
problem
of
protest
responses.
Protest
responses
are
responses
from
individuals
who
reject
the
survey
format
or
question
design,
even
though
they
may
value
the
resources
being
considered
(
Mitchell
and
Carson
1989).
For
example,
some
respondents
may
feel
that
any
amount
of
impingement
and
entrainment
is
unacceptable,
and
choose
not
to
respond
to
the
survey.
To
deal
with
this
issue,

EPA
has
included
several
questions,
including
an
open­
ended
comments
section,
to
help
identify
protest
responses.
The
use
of
such
methods
to
identify
protest
responses
is
well­
established
in
the
literature
(
Bateman
et
al.
2002).
Moreover,
many
researchers
(
e.
g.,
Bateman
et
al.
2002)

suggest
that
a
choice
experiment
format,
such
as
that
proposed
here,
may
ameliorate
such
responses.

A
different
type
of
non­
sampling
error,
which
may
result
from
use
of
panel
data,
is
nonresponse
bias.
The
Knowledge
Networks
web­
enabled
panel
involves
several
stages
of
recruitment,
maintenance,
and
survey
implementation.
Because
attrition
occurs
at
each
of
these
stages,
the
cumulative
response
rate
for
a
survey
using
the
panel
is
lower
than
would
be
obtained
in
a
simple
cross­
sectional
survey
using
random­
digit
dialing.
However,
approaches
to
minimize
nonresponse
bias
in
internet
surveys
are
established
in
the
literature
(
Dillman
2000).
Because
of
the
use
of
these
techniques
(
such
as
initial
RDD
probability
sampling,
careful
panel
management,
and
effective
survey
sampling
procedures),
the
representativeness
of
Knowledge
Networks'
survey
samples
is
generally
comparable
to
those
telephone
surveys
sponsored
by
various
Federal
agencies.
Nonetheless,
EPA
will
conduct
several
additional
activities
to
minimize
the
potential
for
nonresponse
bias
in
the
current
survey:

(
1)
EPA
will
use
a
stratified
survey
sample
to
mitigate
the
effect
of
nonresponse
bias.

(
2)
EPA
will
use
nonresponse
follow­
up
interviews
to
increase
the
overall
response
rate.

(
3)
EPA
will
correct
statistically
for
unobserved
heterogeneity
that
might
be
present
in
the
data
from
respondents
interviewed
for
this
study.
Stratified
Survey
Sample
EPA
will
use
a
stratified
survey
sample
to
mitigate
the
effect
of
nonresponse
bias.
By
varying
the
sample
sizes
of
various
demographic
groups,
this
survey
will
control
for
the
differential
propensity
of
individuals
in
different
demographic
groups
to
respond
to
the
original
Knowledge
Networks
recruitment
process
and
to
this
survey.

Nonresponse
Follow­
up
Interviews
In
order
to
minimize
nonresponse
bias
from
individuals
who
chose
not
to
respond
during
some
stage
of
panel
recruitment,
Knowledge
Networks
will
conduct
nonresponse
follow­
up
interviews
within
the
initial
sample
for
all
stages
of
panel
attrition.
Respondents
will
include
individuals
who
were
contacted
but
did
not
join
the
Knowledge
Networks
web
panel,
who
joined
the
panel
but
did
not
connect
their
WebTVs,
who
connected
but
did
not
complete
the
first
survey,
who
completed
the
first
survey
but
did
not
complete
any
following
surveys,
and
who
completed
following
surveys
but
did
not
complete
the
316(
b)
survey.
The
end
result
is
a
direct
measurement
of
nonresponse
bias
and
an
increase
in
the
effective
cumulative
response
rate
using
a
weighted
response
rate
formulation.
The
nonresponse
follow­
up
study
will
include
interviews
with
600
individuals,
using
a
self­
administered,
computer­
assisted
mode
of
data
collection
that
is
consistent
with
the
main
sample.
These
follow­
up
interviews
will
be
used
to
supplement
the
responses
from
the
main
study
data
set,
and
will
be
identified
by
sample
grouping
(
e.
g.,
panel
recruitment
non­
responder,
panel
drop­
off
case,
etc.).

Statistical
Correction
for
Unobserved
Heterogeneity
EPA
also
plans
to
use
a
statistical
technique
to
correct
for
unobserved
heterogeneity,

based
on
work
by
Heckman
(
Heckman,
1979).
This
correction
may
be
conducted
as a substitute for or a complement to
a
nonresponse
study
effort.
The
technique
will
provide
the
probabilities
of
participation
for
each
subpopulation
grouping
(
as
defined
by
Census
long
form
data,
voting
data,

and
other
data
sources)
included
in
the
initial
sample
frame.
Importantly,
these
probabilities
of
participation
will
demonstrate
the
extent
of
self­
selection
bias
in
the
actual
survey
sample.
The
selection
correction
technique
will
take
fully
into
account
each
stage
of
the
panel
recruitment
and
sampling
process:
that
is,
all
stages
in
between
construction
of
the
sample
frame
for
panel
recruitment
and
web
interviewing
of
the
panel
respondents.

The
selection
correction
technique
will
be
applied
to
the
key
estimate
of
interest
for
this
survey:
respondents'
WTP
for
environmental
changes
resulting
from
the
316(
b)
regulation,

including
reduction
in
fish
mortality
from
impingement
and
entrainment
and
expected
changes
in
fish
population
and
commercial
and
recreational
catch.
The
technique
will
be
used
to
produce
both
adjusted
and
unadjusted
estimates
of
WTP.
Because
the
technique
yields
regression
estimates
that
explain
survey
outcome
data
using
subpopulation
data
related
to
neighborhood
characteristics,
it
can
be
used
to
identify
the
sources
of
differences
between
the
adjusted
and
unadjusted
estimates.
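As a rough illustration (not the Agency's exact procedure), the first ingredient of a Heckman-type selection correction is the inverse Mills ratio computed from a participation probit index; in the second step this ratio enters the outcome (WTP) regression as an additional regressor to absorb self-selection bias. The index values below are hypothetical.

```python
import math

# Inverse Mills ratio lambda(z) = phi(z) / Phi(z), built from the standard
# normal pdf and cdf (cdf via the error function).
def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def inverse_mills(z):
    return norm_pdf(z) / norm_cdf(z)

# Respondents with a low predicted probability of participating (negative
# probit index) receive a large correction term; likely participants a small one.
print(round(inverse_mills(-1.0), 3), round(inverse_mills(1.0), 3))  # prints 1.525 0.288
```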

2(
d)
Questionnaire
Design
The
information
requested
by
the
survey
is
discussed
in
Section
4(
b)(
I)
of
Part
A
of
the
supporting
statement.
The
full
text
of
the
questionnaire
is
provided
in
Attachment
3.

The
following
bullets
discuss
EPA's
reasons
for
including
the
questions
in
the
survey:

•
Concern
for
Policy
Issues.
EPA
included
this
section
to
prepare
respondents
to
answer
the
stated
preference
questions
by
motivating
respondents
to
think
about
the
relative
importance
of
different
policy
issues.

•
Voting
for
Regulations
to
Prevent
Fish
Losses.
The
questions
in
this
section
are
the
key
part
of
the
survey.
Respondents'
choices
when
presented
with
specific
fish­
related
resource
changes
and
household
cost
increases
are
the
main
data
that
allow
estimation
of
willingness­
to­
pay
for
resource
changes.
The
questions
are
presented
in
a
choice
experiment
(
A,
B,
or
neither)
format
because
this
is
an
elicitation
format
that
has
been
successfully
used
by
a
number
of
previous
valuation
studies
(
Adamowicz
et
al.
1998;

Bateman
et
al.
2002;
Bennett
and
Blamey
2001;
Louviere
et
al.
2000;
Johnston
et
al.

2002a,
2005;
Opaluch
et
al.
1993).
Furthermore,
many
focus
group
participants
indicated
that
they
have
some
previous
experience
making
choices
within
a
framework
in
which
they
are
asked
to
vote
for
one
of
a
series
of
options,
and
are
comfortable
with
this
format.

•
Reasons
for
Voting.
These
questions
provide
information
that
will
be
used
to
determine
which
aspects
of
the
resource
change
were
important
to
respondents.
These
questions
also
allow
EPA
to
identify
protest
responses
and
responses
that
reflect
symbolic
or
warm
glow
biases,
following
Mitchell
and
Carson
(
1989).

•
Affiliations
and
Recreational
Experience.
These
questions
elicit
affiliation
and
recreational
experience
data
to
test
if
certain
respondent
characteristics
may
heavily
influence
responses
to
the
referendum
questions.
These
questions
will
also
allow
EPA
to
identify
resource
non­
users,
for
purposes
of
estimating
non­
user
WTP.

•
Demographics.
Responses
to
these
questions
will
be
used
to
estimate
the
influence
of
demographic
variables
on
respondents'
voting
choices,
and
ultimately,
their
WTP
to
prevent
I&
E
losses
of
fish.
This
information
will
allow
EPA
to
use
the
regression
results
to
estimate
WTP
for
populations
in
different
regions
affected
by
the
316(
b)
rule
for
Phase
III
facilities.

•
Comments.
This
section
is
primarily
intended
to
help
identify
protest
responses,
i.
e.

responses
from
individuals
who
rejected
the
format
of
the
survey
or
the
way
the
questions
were
phrased.

3.
Pretests
and
Pilot
Tests
EPA
has
conducted
extensive
pretests
of
the
survey
instrument
during
a
set
of
12
focus
groups
(
EPA
ICR
#
2155.01).
These
focus
groups
have
also
included
individual
cognitive
interviews
with
survey
respondents
(
Kaplowicz
et
al.
2004),
and
think­
aloud
or
verbal
protocol
analyses
(
Schkade
and
Payne
1994).
Individuals
in
these
focus
groups
completed
draft
survey
questionnaires
and
provided
comments
and
feedback
about
the
survey
format
and
content,
their
interpretations
of
the
questions,
and
other
issues
relevant
to
stated
preference
estimation.
Based
on
their
responses,
EPA
has
made
a
number
of
improvements
to
the
survey
questionnaire.

Particular
emphasis
in
these
survey
pretests
was
placed
on
testing
for
the
presence
of
potential
biases
associated
with
poorly­
designed
stated
preference
surveys,
including
hypothetical
bias,

strategic
bias,
symbolic
(
warm
glow)
bias,
framing
effects,
embedding
biases,
methodological
misspecification,
and
protest
responses
(
Mitchell
and
Carson
1989).
Focus
groups
and
cognitive
interviews
led
to
numerous
changes
to
ameliorate
and
minimize
these
biases
in
the
final
survey
instrument
and
introductory
materials.
Results
from
the
final
two
focus
groups
(
one
including
cognitive
interviews)
provided
convincing
evidence
that
the
great
majority
of
respondents
answer
the
stated
preference
survey
in
ways
appropriate
for
stated
preference
WTP
estimation,

and
that
their
responses
do
not
reflect
the
biases
noted
above
(
Besedin
et
al.,
2005;
see
docket
for
EPA
ICR
#
2155.02).
Moreover,
survey
pretests
have
demonstrated
the
ability
of
follow­
up
survey
questions
(
i.
e.,
after
the
primary
choice
questions)
to
clearly
identify
the
respondents whose answers to the choice questions display the influence of bias, or who do not understand the survey context or information.

Additionally,
as
part
of
the
current
information
collection,
EPA
plans
to
conduct
a
survey
pilot
on
a
sample
of
500
individuals.
After
this
pilot
has
been
conducted,
the
Agency
will
solicit
additional
input
from
Agency
and
academic
reviewers
on
the
preliminary
performance
of
the
survey
questionnaire
and
sampling
design.
Based
on
the
results
of
the
survey
pilot
and
comments
received
from
these
reviewers,
EPA
may
make
necessary
changes
to
the
questionnaire
or
sampling
methodology
before
implementing
the
survey
with
3,900
individuals.

4.
Collection
Methods
and
Follow­
up
4(
a)
Collection
Methods
The
survey
will
be
administered
as
an
online
survey
through
the
Knowledge
Networks
electronic
distribution
system.
Selected
Knowledge
Network
panel
members
will
be
sent
a
personalized
e­
mail
notifying
them
that
the
background
materials
and
survey
questionnaire
are
available
online.
The
email
notification
also
contains
a
hyperlink
to
the
survey.
Because
this
hyperlink
includes
a
unique
identifier,
there
is
no
possibility
that
any
respondent
could
submit
more
than
one
response.
Furthermore,
since
no
one
except
the
selected
panel
members
has
access
to
the
e­
mail
from
Knowledge
Networks,
panel
members
are
the
only
individuals
who
will
be
able
to
respond
to
the
survey.

Respondents
will
receive
an
incentive
fee
of
$5 to $10
for
completing
this
survey.
All
members
of
the
Knowledge
Network
panel
are
provided
with
home
internet
access
as
part
of
their
agreement
to
participate
in
the
panel.
4(
b)
Survey
Response
and
Follow­
up
The
target
response
rate
for
the
survey
is
80
percent.
This
response
rate
represents
the
fraction
of
the
Knowledge
Network
sample
members
that
are
expected
to
complete
the
survey.

However,
the
actual
response
rate
compared
to
the
general
population
is
lower,
given
that
the
Knowledge
Network
telephone
recruitment
process
has
a
36
percent
response
rate.
Knowledge
Network
panel
members
who
are
selected
for
the
survey
will
be
contacted
by
e­
mail
to
inform
them
that
the
survey
is
available
online.
To
improve
the
response
rate,
e­
mail
reminders
will
be
sent
to
panel
members
who
do
not
complete
the
survey
within
a
short
period
of
time.

Additionally,
an
incentive
fee
of
$5 to $10
will
be
provided
to
respondents
who
complete
the
survey.
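Ignoring intermediate panel-attrition stages, the two rates cited above imply a cumulative response rate of roughly 29 percent; a simplified two-stage illustration:

```python
# The cumulative response rate combines the stages multiplicatively
# (intermediate panel-attrition stages are omitted in this simplification).
recruitment_rate = 0.36   # RDD telephone recruitment into the panel
survey_rate = 0.80        # target completion rate among sampled panel members

cumulative_rate = recruitment_rate * survey_rate
print(f"{cumulative_rate:.1%}")   # prints 28.8%
```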

Although
the
cumulative
response
rate
for
this
survey
may
seem
low
compared
to
response
rates
typically
achieved
using
other
survey
methods
(
e.
g.,
phone
and
mail
surveys),
the
web­
based
format
facilitates
geographic
and
demographic
stratification,
which
will
substantially
improve
the
quality
of
the
survey.
As
noted
in
Section
2(
b),
the
stratification
design
systematically
oversamples
households
from
certain
demographic
groups
that
historically
have
lower
response
and
consistency
rates,
which
helps
to
ensure
that
the
demographic
characteristics
of
respondents
who
provide
valid
completed
surveys
will
mirror
the
Census
demographic
benchmarks
more
closely.
Also,
as
noted
in
Section
2(
c)(
II),
EPA
will
use
statistical
techniques
to
correct
for
unobserved
heterogeneity
in
the
survey
sample,
as
well
as
nonresponse
follow­
up
interviews
to
increase
the
effective
survey
response
rate.
Combined,
these
techniques
will
reduce
error
due
to
non­
coverage
of
non­
telephone
households
in
the
original
Knowledge
Networks
recruitment
process,
and
also
reduce
bias
due
to
nonresponse
and
other
non­
sampling
errors.
Furthermore,
practical
limitations
including
time
and
budget
constraints
prevented
use
of
other
survey
methodologies
for
this
survey.
For
an
example
of
a
similar
research
effort
that
successfully
used
this
web­
based
survey
approach
to
evaluate
WTP
for
changes
in
water
quality,

see
Viscusi
et
al.
(
2004).
5.
Analyzing
and
Reporting
Survey
Results
5(
a)
Data
Preparation
Since
the
survey
will
be
administered
as
an
online
survey,
survey
responses
will
automatically
be
entered
into
an
electronic
database
at
the
time
they
are
submitted.
After
all
responses
are
submitted,
the
database
contents
will
be
converted
into
a
format
suitable
for
use
with
a
statistical
analysis
software
package.
The
online
survey,
database
management,
and
data
set
conversion
will
be
conducted
by
Knowledge
Networks
and
Abt
Associates
Inc.

All
survey
responses
will
be
vetted
by
EPA
for
completeness.
Additionally,
respondents'

answers
to
the
choice
experiment
questions
will
be
tested
to
ensure
that
they
are
internally
consistent
with
respect
to
scope
and
other
expectations
of
neoclassical
preference
theory.

5(
b)
Analysis
Once
the
survey
data
has
been
converted
into
a
data
file,
it
will
be
analyzed
using
state-of-the-art
statistical
analysis
techniques.
The
following
section
discusses
the
model
that
will
be
used
to
analyze
the
stated
preference
data
from
the
survey.

Analysis
of
Stated
Preference
Data
The
model
for
analysis
of
stated
preference
data
is
grounded
in
the
standard
random
utility
model
of
Hanemann
(
1984)
and
McConnell
(
1990).
This
model
is
applied
extensively
within
stated
preference
research,
and
allows
well­
defined
welfare
measures
(
i.
e.,
willingness
to
pay)
to
be
derived
from
choice
experiment
models
(
Bennett
and
Blamey
2001;
Louviere
et
al.

2000).
Within
the
standard
random
utility
model
applied
to
choice
experiments,
hypothetical
policy
alternatives
are
described
in
terms
of
attributes
that
focus
groups
(
Johnston
et
al.
1995;

Adamowicz
et
al.
1998;
Opaluch
et
al.
1993)
reveal
as
relevant
to
respondents'
utility, or well-being. One of these attributes is a mandatory monetary cost to the respondent's household.

Applying
this
standard
model
to
choices
among
policies
to
reduce
entrainment
and
impingement
(
I&
E)
losses,
EPA
defines
a
standard
utility
function
Ui(.)
that
includes
attributes
of
an
I&
E
reduction
plan
and
the
net
cost
of
the
plan
to
the
respondent.
Following
standard
random
utility
theory,
utility
is
assumed
known
to
the
respondent,
but
stochastic
from
the
perspective
of
the
researcher,
such
that
(1) Ui(.) = U(Xi, D, Y - Fi) = v(Xi, D, Y - Fi) + εi

where:

Xi = a vector of variables describing attributes of I&E reduction plan i;
D = a vector characterizing demographic and other attributes of the respondent;
Y = disposable income of the respondent;
Fi = mandatory additional cost faced by the household under plan i;
v(.) = a function representing the empirically estimable component of utility;
εi = the stochastic or unobservable component of utility, modeled as an econometric error.

Econometrically,
a
model
of
such
a
preference
function
is
obtained
by
methods
designed
for
limited
dependent
variables,
because
researchers
only
observe
the
respondent's
choice
among
alternative
policy
options,
rather
than
observing
values
of
Ui(.)
directly
(
Maddala,
1983;

Hanemann,
1984).
Standard
random
utility
models
are
based
on
the
probability
that
a
respondent's
utility
from
a
policy
Plan
i,
Ui(.),
exceeds
the
utility
from
alternative
Plans
j,
Uj(.),

for
all
potential
plans
j ≠ i
considered
by
the
respondent.
More
specifically,
the
random
utility
model
presumes
that
the
respondent
assesses
the
utility
that
would
result
from
each
I&
E
reduction
plan
i,
and
chooses
the
plan
that
would
offer
the
highest
utility.

When
faced
with
k
distinct
plans
defined
by
their
attributes,
the
respondent
will
choose
plan
i
if
the
anticipated
utility
from
plan
i
exceeds
that
of
all
other
k­
1
plans.
Drawing
from
(
1),

the
respondent
will
choose
plan
i
if
(2) v(Xi, D, Y - Fi) + εi ≥ v(Xk, D, Y - Fk) + εk for all k ≠ i.

If the εi are
assumed
independently
and
identically
drawn
from
a
type
I
extreme
value
(
Gumbel)
distribution,
the
model
may
be
estimated
as
a
conditional
logit
model,
as
detailed
by
Maddala
(
1983),
Greene
(
2003)
and
others.
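Under the i.i.d. type I extreme value assumption, the choice probabilities take the familiar conditional-logit form P(i) = exp(vi) / Σk exp(vk). A minimal sketch, with invented systematic-utility values for the three options:

```python
import math

# Conditional-logit choice probabilities implied by i.i.d. Gumbel errors:
# P(i) = exp(v_i) / sum_k exp(v_k).
def choice_probabilities(v):
    expv = [math.exp(x) for x in v]
    total = sum(expv)
    return [e / total for e in expv]

v = [0.8, 0.3, 0.0]           # v(.) for Plan A, Plan B, Neither Plan (invented)
p = choice_probabilities(v)
assert abs(sum(p) - 1.0) < 1e-12
# The plan with the highest systematic utility is the most likely choice.
assert p[0] == max(p)
```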
This
model
is
most
commonly
used
when
the
respondent
considers
more
than
two
options
in
each
choice
set
(
e.
g.,
Plan
A,
Plan
B,
Neither
Plan),
and
results
in
an
econometric
(
empirical)
estimate
of
the
systematic
component
of
utility
v(.),
based
on
observed
choices
among
different
policy
plans.
Based
on
this
estimate,
one
may
calculate
welfare
measures
(
willingness
to
pay)
following
the
well­
known
methods
of
Hanemann
(
1984),
as
described
by
Freeman
(
2003)
and
others.
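For a linear specification of v(.), the Hanemann-style welfare calculation reduces to the ratio of an attribute's coefficient to the cost coefficient (the marginal utility of income). A sketch with hypothetical coefficient values, not estimates from this survey:

```python
# Sketch of the welfare calculation, assuming a linear systematic utility
# v = b1 * fish_saved - lam * cost, with invented coefficients.
b1 = 0.004    # utility per thousand fish losses prevented (illustrative)
lam = 0.02    # cost coefficient, i.e. marginal utility of income (illustrative)

# WTP for preventing 100 (thousand) fish losses:
wtp = b1 * 100 / lam
print(round(wtp, 6))    # -> 20.0, i.e. household WTP of $20 for that change
```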
Following
standard
choice
experiment
methods
(
Adamowicz
et
al.
1998;
Bennett
and
Blamey
2001),
each
respondent
will
consider
questions
including
three
potential
choice
options
(
i.
e.,
Plan
A,
Plan
B,
Neither Plan), choosing
the
option
that
provides
the
highest
utility
as
noted
above.
Following
clear
guidance
from
the
literature,
a
"
neither
plan"
or
status
quo
option
is
always
included
in
the
visible
choice
set,
to
ensure
that
WTP
measures
are
well­
defined
(
Louviere
et
al.
2000).

EPA
also
anticipates
that
respondents
will
consider
more
than
one
choice
question
within
the
same
survey,
to
increase
information
obtained
from
each
respondent.
This
is
standard
practice
within
choice
experiment
and
dichotomous
choice
contingent
valuation
surveys
(
Poe
et
al.
1997;
Layton
2000).
While
respondents
will
be
instructed
to
consider
each
choice
question
as
an
independent,
non­
additive
choice,
it
is
nonetheless
standard
practice
within
the
literature
to
allow
for
the
potential
of
correlation
among
questions
answered
within
a
single
survey,
by
a
single
respondent.
That
is,
responses
provided
by
individual
respondents
may
be
correlated
even
though
responses
across
different
respondents
are
considered
independent,
identically
distributed
(
Poe
et
al.
1997;
Layton
2000;
Train
1998).

There
are
a
variety
of
approaches
to
such
potential
correlation.
Following
standard
practice,
EPA
anticipates
the
estimation
of
a
variety
of
models
to
assess
their
performance.

Models
to
be
assessed
include
random
effects
and
random
parameters
(
mixed)
discrete
choice
models,
now
common
in
the
stated
preference
literature
(
Greene
2003;
McFadden
and
Train
2000;
Poe
et
al.
1997;
Layton
2000).
Within
such
models,
selected
elements
of the coefficient vector
are
assumed
normally
distributed
across
respondents,
often
with
free
correlation
allowed
among
parameters
(
Greene
2002).
If
only
the
model
intercept
is
assumed
to
include
a
random
component,
then
a
random
effects
model
results.
If
both
slope
and
intercept
parameters
may
vary
across
respondents,
then
a
random
parameters
model
is
estimated.
EPA
anticipates
that
such
models
will
be
estimated
using
standard
maximum
likelihood
for
mixed
conditional
logit,

as
described
by
Train
(
1998),
Greene
(
2002)
and
others.
Model
performance
for
alternative
specifications
of
mixed
logit
will
be
assessed
by
EPA
using
standard
statistical
measures
of
model
fit
and
convergence,
as
detailed
by
Greene
(
2002),
Greene
(
2003),
and
Train
(
1998).
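Estimation of such random-parameters models rests on simulated choice probabilities: the conditional-logit probability is averaged over draws of the normally distributed coefficient. A minimal two-alternative sketch with invented numbers:

```python
import math
import random

# Simulated mixed-logit probability: average the conditional-logit probability
# over draws beta ~ N(mu, sigma^2). All parameter values are invented.
random.seed(0)

def logit_prob(beta, x_chosen, x_other):
    e1 = math.exp(beta * x_chosen)
    e2 = math.exp(beta * x_other)
    return e1 / (e1 + e2)

def simulated_prob(mu, sigma, x_chosen, x_other, draws=5000):
    total = 0.0
    for _ in range(draws):
        beta = random.gauss(mu, sigma)
        total += logit_prob(beta, x_chosen, x_other)
    return total / draws

p = simulated_prob(mu=0.5, sigma=0.2, x_chosen=1.0, x_other=0.0)
assert 0.0 < p < 1.0
```

In practice the simulated probabilities enter a maximum simulated likelihood routine; this sketch shows only the inner averaging step.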
Advantages
of
Choice
Experiments
Choice
experiments
following
the
random
utility
model
outlined
above
are
favored
by
many
researchers
over
other
variants
of
stated
preference
methodology
(
Adamowicz
et
al.
1998;

Bennett
and
Blamey
2001),
and
may
be
viewed
as
a
"
natural
generalization
of
a
binary
discrete
choice
CV"
(
Bateman
et
al.
2002,
p.
271).
Advantages
of
choice
experiments
include
a
capacity
to
address
choices
over
a
wide
array
of
potential
policies,
grounding
in
well­
developed
random
utility
theory,
and
the
similarity
of
the
discrete
choice
context
to
familiar
referendum
or
voting
formats
(
Bennett
and
Blamey
2001).
Compared
to
other
types
of
stated
preference
valuation,

choice
experiments
are
better
able
to
measure
the
marginal
value
of
changes
in
the
characteristics
or
attributes
of
environmental
goods,
and
avoid
response
difficulties
and
biases
(
Bateman
et
al.

2002).
For
example,
choice
experiments
may
reduce
the
potential
for
'yea-saying'
and
symbolic
biases
(
Blamey
et
al.
1999;
Mitchell
and
Carson
1989),
as
many
pairs
of
multi­
attribute
policy
choices
(
e.
g.,
Plan
A,
Plan
B,
Neither)
will
offer
no
clearly
superior
choice
for
a
respondent
wishing
to
express
solely
symbolic
environmental
motivations.
For
similar
reasons
choice
experiments
may
ameliorate
protest
responses
(
Bateman
et
al.
2002).
An
additional
advantage
of
such
methods
is
that
they
permit
straightforward
assessments
of
the
impact
of
resource
scope
and
scale
on
respondents'
choices.
This
will
enable
EPA
to
easily
conduct
scope
tests
and
other
assessments
of
the
validity
of
survey
responses
(
Bateman
et
al.
2002,
p.
296­
342).
Finally,
such
methods
are
well­
established
in
the
stated
preference
literature
(
Bennett
and
Blamey
2001).

Additional
details
of
choice
experiment
methodology
(
also
called
choice
modeling)
are
provided
by
Bennett
and
Blamey
(
2001),
Adamowicz
et
al.
(
1998),
Louviere
et
al.
(
2000)
and
many
other
sources
in
the
literature.

An
additional
advantage
of
choice
experiments
in
the
present
application
is
that
they
are
commonly
applied
to
assess
WTP
for
ecological
resource
improvements
of
a
type
quite
similar
to
those
at
issue
in
the
316(
b)
policy
case.
Examples
of
the
application
of
choice
experiments
to
estimate
WTP
associated
with
changes
in
aquatic
life
and
habitat
include
Hoehn
et
al.
(
2004),

Johnston
et
al.
(
2002b),
and
Opaluch
et
al.
(
1999),
among
others.
EPA
has
drawn
upon
these
and
other
examples
of
successful
choice
experiment
design
to
provide
a
basis
for
survey
design
in
the
present
case.

A
final
and
key
advantage
of
choice
experiments
in
the
present
application
is
the
ability
to
estimate
respondents'
WTP
for
a
wide
range
of
different
potential
outcomes
of
316(
b)
policies,

differentiated
by
their
attributes.
The
proposed
choice
experiment
survey
versions
will
allow
different
respondents
to
choose
among
a
wide
variety
of
hypothetical
policy
options,
some
with
larger
and
others
with
very
small
changes
in
the
presented
attributes
(
annual
fish
losses,
long­
term
fish
populations,
recreational
and
commercial
catch,
household
cost).
That
is,
because
the
survey
is
to
be
implemented
as
a
choice
experiment
survey,
levels
of
attributes
in
choice
scenarios
will
vary
across
respondents
(
Louviere
et
al.
2000).
The
experimental
design
will
also
explicitly
allow
for
variation
in
baseline
population
and
harvest
levels,
following
standard
practice
in
the
literature
(
Louviere
et
al.
2000;
Bateman
et
al.
2002).

Aside
from
providing
the
capacity
to
estimate
WTP
for
a
wide
range
of
policy
outcomes,

this approach
also
frees
EPA
from
having
to
predetermine
a
single
policy
outcome
for
which
WTP
will
be
estimated.
Given
the
potential
biological
uncertainty
involved
in
the
316(
b)
policy
case,
the
ability
to
estimate
values
for
a
wide
range
of
potential
outcomes
is
critical.

The
ability
to
estimate
WTP
for
a
wide
range
of
different
policy
outcomes
is
a
fundamental
property
of
the
choice
experiment
method
(
Bateman
et
al.
2002;
Louviere
et
al.

2000;
Adamowicz
et
al.
1998).
EPA
emphasizes
that
the
survey
version
included
in
this
ICR
is
for
illustration
only;
it
is
but
one
of
what
will
ultimately
be
a
large
number
of
survey
versions
covering
a
wide
range
of
potential
policy
outcomes.
The
experimental
design
(
see
below)
will
allow
for
survey
versions
showing
a
range
of
different
baseline
and
resource
improvement
levels,

where
these
levels
are
chosen
to
(
almost
certainly)
bound
the
"
actual"
levels.
Given
that
there
will
almost
certainly
be
some
biological
uncertainty
regarding
the
specifics
of
the
"
actual"

baselines
and
improvements,
the
resulting
valuation
estimates
will
allow
flexibility
in
estimating
WTP
for
a
wide
range
of
different
circumstances.
Additional
details
on
the
statistical
(
experimental)
design
of
choice
experiments
are
provided
in
later
sections
of
this
ICR.
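The flexibility described above has a simple mechanical basis: with a linear utility specification, WTP for any package of attribute changes is the utility gain divided by the marginal utility of money (the negative of the cost coefficient). A minimal sketch, with purely illustrative coefficient values standing in for the actual estimates:

```python
# Illustrative coefficients only; actual values would come from the
# estimated choice model. The cost coefficient must be negative.
betas = {"fish_saved": 0.020, "pop_change": 0.060, "catch_change": 0.045}
beta_cost = -0.30

def wtp(changes, b=betas, b_cost=beta_cost):
    """Monthly household WTP for a package of attribute changes:
    utility gain divided by the marginal utility of money."""
    gain = sum(b[attr] * delta for attr, delta in changes.items())
    return gain / -b_cost

# e.g., WTP for a plan saving 50% of fish with a 5% population gain:
# wtp({"fish_saved": 50, "pop_change": 5})
```

Because any combination of attribute levels can be priced this way, a single estimated model yields WTP for the full range of potential 316(b) outcomes.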

Comment
on
Survey
Preparation
and
Pretesting
Following
standard
practice
in
the
stated
preference
literature
(
Johnston
et
al.
1995;

Desvousges
and
Smith
1988;
Desvousges
et
al.
1984;
Mitchell
and
Carson
1989),
all
survey
elements
and
methods
have
been
subjected
to
extensive
development
and
pretesting
in
focus
groups
to
ameliorate
the
potential
for
survey
biases
(
cf.
Mitchell
and
Carson
1989),
and
to
ensure
that
respondents
have
a
clear
understanding
of
the
policies
and
goods
under
consideration,
such
that
informed
choices
may
be
made
that
reflect
respondents'
underlying
preferences.
Following
the
guidance
of
Arrow
et
al.
(
1993),
Johnston
et
al.
(
1995),
and
Mitchell
and
Carson
(
1989),

focus
groups
were
used
to
ensure
that
respondents
are
aware
of
their
budget
constraints,
the
scope
of
the
resource
changes
under
consideration,
and
the
availability
of
substitute
environmental
resources.

As
noted
above,
survey
pretests
have
also
included
individual
cognitive
interviews
with
survey
respondents
(
Kaplowitz
et
al.
2004),
and
think­
aloud
or
verbal
protocol
analyses
(
Schkade
and
Payne
1994).
Individuals
in
these
pretests
completed
draft
survey
questionnaires
and
provided
comments
and
feedback
about
the
survey
format
and
content,
their
interpretations
of
the
questions,
and
other
issues
relevant
to
stated
preference
estimation.
Based
on
their
responses,
EPA
made
numerous
improvements
to
the
survey
questionnaire.
Of
particular
emphasis
in
these
survey
pretests
was
testing
for
the
presence
of
potential
biases
including
hypothetical
bias,
strategic
bias,
symbolic
(
warm
glow)
bias,
framing
effects,
embedding
biases,

methodological
misspecification,
and
protest
responses
(
Mitchell
and
Carson
1989).
Focus
groups
and
cognitive
interviews
led
to
numerous
changes
to
ameliorate
and
minimize
these
biases
in
the
final
survey
instrument
and
introductory
materials.
Results
from
focus
groups
and
cognitive
interviews
used
to
test
the
final
survey
version
provided
convincing
evidence
that
the
great
majority
of
respondents
answer
the
stated
preference
survey
in
ways
appropriate
for
stated
preference
WTP
estimation,
and
that
their
responses
do
not
reflect
the
biases
noted
above.

Moreover,
survey
pretests
have
demonstrated
the
ability
of
follow­
up
survey
questions
(
i.
e.,
after
the
primary
choice
questions)
to
clearly
identify
the respondents whose answers to the choice questions display the influence of bias, or who do not understand
the
survey
context
or
information.

The
number
of
focus
groups
used
in
survey
design
(
12)
exceeds
the
number
of
focus
groups
used
in
typical
applications
of
stated
preference
valuation,
and
is
approximately
equal
to
the
number
of
focus
groups
used
by
Johnston
et
al.
(
2002b)
and
Opaluch
et
al.
(
1999)
in
their
development
of
choice
experiment
surveys
addressing
generally
similar
types
of
ecological
resources.
Moreover,
unlike
these
prior
analyses,
EPA
also
incorporated
cognitive
interviews
as
detailed
by
Kaplowitz
et
al.
(
2004).
Given this extensive effort in survey design, applying the most state-of-the-art methods available in the literature, EPA believes that the survey design far exceeds standards typical in the published literature.
The
details
of
focus
groups
to
be
used
in
survey
design
are
discussed
by
EPA
in
a
prior
ICR
(#
2155.01).

Econometric
Specification
Based
on
prior
focus
groups,
expert
review,
and
attributes
of
the
policies
under
consideration,
EPA
anticipates
that
three
attributes
will
be
incorporated
in
the
vector
of
variables
describing
attributes
of the I&E reduction plan
(
vector
Xi),
in
addition
to
the
attribute
characterizing
unavoidable
household
cost
Fi.
These
attributes
will
characterize
the
annual
reduction
in
I&
E
losses
(
x1),
anticipated
long­
term
effects
on
fish
populations
(
x2),
and
anticipated
long­
term
effects
on
recreational
and
commercial
harvest
(
x3).
These
variables
will
allow
respondents'

choices
to
reveal
the
potential
impact
of
both
annual
fish
losses
and
long­
term
population
effects
on
utility.
Based
on
results
of
prior
focus
groups
and
expert
opinion,
these
will
be
presented
as
averages
across
identified
aggregate
species
groups.
The
survey
will
also
allow
for
changes
in
baseline
population
levels,
to
assess
whether
WTP
depends
on
the
"
starting
point"
of
fish
populations.

Although
the
literature
offers
no
firm
guidance
regarding
the
choice
of
specific
functional
forms
for
v(.)
within
choice
experiment
estimation,
in
practice
linear
forms
are
often
used
(
Johnston
et
al.
2003b),
with
some
researchers
applying
more
flexible
(
e.
g.,
quadratic)
forms
(
Cummings
et
al.
1994).
Standard
linear
forms
are
anticipated
as
the
simplest
form
to
be
estimated
by
EPA,
from
which
more
flexible
functional
forms
(
able
to
capture
interactions
among
model
variables)
will
be
derived
and
compared.
Anticipated
extensions
to
the
simple
linear
model
include
more
fully­
flexible
forms
that
allow
for
systematic
variations
in
slope
and
intercept
coefficients
associated
with
demographic
or
other
attributes
of
respondents.
Such
variations
may
be
incorporated
by
appending
the
simple
linear
specification
with
quadratic
interactions
between
variables
in
vector
D
and
the
variables
Xi
and
Fi
(
cf.
Johnston
et
al.
2003b).

One
may
also
incorporate
quadratic
interactions
between
policy
attributes
Xi
and
Fi,
(
cf.

Johnston
et
al.
2002b).
Such
quadratic
extensions
of
the
basic
linear
model
allow
for
additional
flexibility
in
modeling
the
relationship
between
policy
attributes
(
including
cost)
and
utility,
as
suggested
by
Hoehn
(
1991)
and
Cummings
et
al.
(
1994).
EPA
anticipates
estimating
both
simple
linear
specifications,
as
well
as
more
fully­
flexible
quadratic
specifications
following
Hoehn
(
1991)
and
Cummings
et
al.
(
1994),
to
identify
those
models
which
provide
the
most
satisfactory
statistical
fit
to
the
data
and
correspondence
to
theory.
EPA
anticipates
estimating
all
models
within
the
mixed
logit
framework
outlined
above.
Model
fit
will
be
assessed
following
standard
practice
in
the
literature
(
e.
g.,
Greene
2003;
Maddala
1983).
The linear and quadratic functional forms applied here are common practice in the literature and are presented and discussed in many existing sources (e.g., Hoehn 1991; Cummings et al. 1994; Johnston et al. 1999; Johnston et al. 2003b).
For
example,
for
each
choice
occasion,
the
respondent
may
choose
Option
A,
Option
B,

or
Neither,
where
"
neither"
is
characterized
by
0
values
for
all
attributes
(
except
Baseline
population
levels).
Assuming
that
the
model
is
estimated
using
a
standard
approximation
for
the
observable
component
of
utility,
an
econometric
specification
of
the
desired
model
(
within
the
overall
multinomial
logit
model)
might
appear
as:

v(·) = β0 + β1(Loss Reduction) + β2(Population Change) + β3(Catch Change) + β4(Cost) + β5(Loss Reduction)(Baseline) + β6(Population Change)(Baseline) + β7(Catch Change)(Baseline) + β8(Cost)(Baseline) + β9(Fish Saved)(Population Change) + β10(Fish Saved)(Catch Change) + β11(Population Change)(Catch Change)

The first four slope terms (β1 through β4) are main effects; the remaining terms are interactions.
This sample specification (one of many to be estimated by EPA) allows
one
to
estimate
the
relative
"
main
effects"
of
policy
attributes
(
annual
reduction
in
I&
E
losses,
long­
term
effects
on
fish
populations,
and
long­
term
effects
on
recreational
and
commercial
harvest)
on
utility,
as
well
as
interactions
between
these
main
effects.
This
specification
also
allows
EPA
to
assess
the
impact
of
baseline
fish
populations
on
the
marginal
value
of
changes
in
other
model
attributes.
In
sum,
specifications
such
as
this
allow
WTP
to
be
estimated
for
a
wide range
of
potential
policy
outcomes,
and
allow
EPA
to
test
for
a
wide range
of
main
effects
and
interactions
within
the
utility
function
of
respondents.
Such
flexible
utility
specifications
for
stated
preference
estimation
are
recommended
by
numerous
sources
in
the
literature,
including
Johnston
et
al.
(
2002b),
Hoehn
(
1991),
and
Cummings
et
al.

(
1994),
and
follow
standard
practice
in
choice
modeling
outlined
by
Louviere
et
al.
(
2000)
and
others.
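To make the sample specification concrete, it can be written as a small function. The coefficient values below are purely illustrative placeholders (actual values would come from the estimated mixed logit model), and treating Fish Saved and Loss Reduction as a single attribute is an interpretive assumption:

```python
# Illustrative coefficients only; b0..b11 follow the sample specification.
B = {
    "b0": 0.2, "b1": 0.8, "b2": 0.6, "b3": 0.4, "b4": -0.3,
    "b5": -0.2, "b6": -0.1, "b7": -0.1, "b8": 0.05,
    "b9": 0.1, "b10": 0.05, "b11": 0.05,
}

def v(loss, pop, catch, cost, baseline, b=B):
    """Observable utility v(.) for one plan: four main effects, four
    Baseline interactions, and three attribute-attribute interactions.
    `loss` stands in for both Loss Reduction and Fish Saved."""
    return (b["b0"] + b["b1"] * loss + b["b2"] * pop + b["b3"] * catch
            + b["b4"] * cost
            + b["b5"] * loss * baseline + b["b6"] * pop * baseline
            + b["b7"] * catch * baseline + b["b8"] * cost * baseline
            + b["b9"] * loss * pop + b["b10"] * loss * catch
            + b["b11"] * pop * catch)
```

Choice probabilities for Option A, Option B, and Neither would then follow from the conditional logit formula over the three values of v.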

Experimental
Design
Experimental
design
for
the
choice
experiment
surveys
will
follow
established
practices.

Fractional
factorial
design
will
be
used
to
construct
choice
questions
with
an
orthogonal
array
of
attribute
levels,
with
questions
randomly
divided
among
distinct
survey
versions
(
Louviere
et
al.

2000).
Based
on
standard
choice
experiment
experimental
design
procedures
(
Louviere
et
al.

2000),
the
number
of
questions
and
survey
versions
will
be
determined
by,
among
other
factors:

a]
the
number
of
attributes
in
the
final
experimental
design
and
complexity
of
questions,
b]
the
extent
to
which
estimation
of
interactions
and
higher­
level
effects
is
desired,
and
c]
pretests
revealing
the
number
of
choice
experiment
questions
that
respondents
are
willing/
able
to
answer
in
a
single
survey
session,
and
the
number
of
attributes
that
may
be
varied
within
each
question
while
maintaining
respondents'
ability
to
make
appropriate
neoclassical
tradeoffs.

Based
on
models
proposed
above
and
recommendations
in
the
literature,
EPA
anticipates
an
experimental
design
that
allows
for
an
ability
to
estimate
main
effects,
quadratic
effects,
and
two­
way
interactions
between
policy
attributes
(
Louviere
et
al.
2000).
Choice
sets
(
Bennett
and
Blamey
2001),
including
variable
level
selection,
will
be
designed
by
EPA
based
on
the
goal
of
illustrating
realistic
policy
scenarios
that
"
span
the
range
over
which
we
expect
respondents
to
have
preferences,
and/
or
are
practically
achievable"
(
Bateman
et
al.
2002,
p.
259),
following
guidance
in
the
literature.
This
includes
guidance
with
regard
to
the
statistical
implications
of
choice
set
design
(
Hanemann
and
Kanninen
1999)
and
the
role
of
focus
groups
in
developing
appropriate
choice
sets
(
Bennett
and
Blamey
2001).

Based
on
these
guiding
principles,
the
following
experimental
design
framework
is
proposed
by
EPA.
The
experimental
design
will
be
conducted
by
Donald
Anderson,
President
of
StatDesign,
Inc.,
a
statistician
with
significant
experience
in
experimental
designs
for
choice
experiments
(
e.
g.,
Johnston
et
al.
2003b;
Newell
and
Swallow
2002).
The
experimental
design
will
allow
for
both
main
effects
and
selected
interactions
to
be
efficiently
estimated,
based
on
a
choice
experiment
framework.
For
a
more
detailed
discussion
of
the
experimental
design,
refer
to
Attachment
5.

Each
treatment
(
survey
question)
includes
two
choice
Options
(
A
and
B),
characterized
by
four
attributes
that
vary
across
the
two
choice
options
(
Fish
Saved,
Population
Change,
Catch
Change,
and
Household
Cost).
Hence,
there
are
a
total
of
eight
attributes
for
each
treatment.

Based
on
focus
groups
and
pretests,
and
guided
by
realistic
ranges
of
attribute
outcomes,
EPA
allows
for
four
different
potential
levels
for
Fish
Saved,
Population
Change,
and
Catch
Change,

and
allows
for
five
different
levels
of
monthly
Household
Cost.

These
attributes
and
levels
may
be
summarized
as
follows:


- Fish SavedA, Fish SavedB (4 possible levels each: 25, 50, 75, and 95%)
- Population ChangeA, Population ChangeB (4 possible levels each: 0, 5, 8, and 10%)
- Catch ChangeA, Catch ChangeB (4 possible levels each: 0, 2, 5, and 10%)
- CostA, CostB (5 possible levels each: $1, $2, $3, $4, and $6)

The
subscripts
(
A,
B)
denote the attributes in Options
A
and
B,
respectively.
The
available
or
potential
levels
for
each
attribute
are
balanced
across
A
and
B.
In
addition,
there
is
a context variable, denoted Baseline, which indicates the starting point of the fish population.
This
context
attribute
is
the
same
across
Options
A
and
B
for
each
question,
but
may
influence
the
marginal
utility
of
the
other
attributes
through
interactions
in
the
utility
function.


- BaselineAB (4 possible levels: 40, 50, 60, and 70%)

Beyond
the
levels
specified
above,
each
question
will
include
a
"
neither
plan"
option,

characterized
by
zero
values
for
all
attributes
except
for
the
Baseline.

Following
standard
practice,
EPA
constrained
the
design
somewhat
in
response
to
findings
in
focus
groups
and
the
prior
literature.
For
example,
focus
groups
showed
that
respondents
react
negatively
and
often
protest
when
offered
choices
in
which
one
option
dominates
the
other
in
all
attributes.
Given
that
such
choices
provide
negligible
statistical
information
compared
to
choices in which neither option dominates,
they
are
typically
avoided
in
choice
experiment
statistical
designs.
For
example,
Hensher
and
Barnard
(
1990)

recommend
eliminating
profiles
including
dominating
or
dominated
profiles,
because
such
profiles
generally
provide
no
useful
information.
Following
this
guidance,
EPA
constrained
the
design
to
eliminate
such
dominated/dominating
pairs.
EPA
also
constrained
the
design
to
eliminate
the
possibility
of
pairs
in
which,
when
looking
across
two
options,
one
of
the
options
offers
both
a
greater
reduction
in
fish
losses
and
a
smaller
increase
in
the
population.
The
elimination
of
such
nonsensical
(
or
non­
credible)
pairs
is
common
practice,
and
is
done
to
avoid
protest
bids
and
confusion
among
respondents
(
Bateman
et
al.
2002).
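The two constraints above can be sketched as a screening step over candidate option pairs. The attribute levels are those listed earlier in this section; the dominance and credibility rules below are a simplified reading of the text, so this is an illustration only, not the StatDesign procedure, which also imposes orthogonality and balance:

```python
from itertools import product

# Attribute levels from the design: (Fish Saved %, Population Change %,
# Catch Change %, monthly Cost $)
LEVELS = {
    "fish": [25, 50, 75, 95],
    "pop": [0, 5, 8, 10],
    "catch": [0, 2, 5, 10],
    "cost": [1, 2, 3, 4, 6],
}

def dominates(a, b):
    """a is at least as good as b on every attribute (more fish saved,
    population, and catch; lower cost) and differs somewhere."""
    return (a != b and a[0] >= b[0] and a[1] >= b[1]
            and a[2] >= b[2] and a[3] <= b[3])

def non_credible(a, b):
    """One option offers a greater loss reduction but a smaller
    population gain -- judged nonsensical by focus group respondents."""
    return ((a[0] > b[0] and a[1] < b[1])
            or (b[0] > a[0] and b[1] < a[1]))

options = list(product(*LEVELS.values()))
candidate_pairs = [
    (a, b) for a, b in product(options, repeat=2)
    if a != b and not dominates(a, b) and not dominates(b, a)
    and not non_credible(a, b)
]
```

A final design would then select a small, balanced subset of these candidate pairs (here, 64) for assignment to survey versions.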

The
resulting
experimental
design
is
characterized
by
64
unique
A
vs.
B
option
pairs,

where
attribute
levels
for
option
A
and
B
differ
across
each
of
the
pairs.
Each
pair
represents
a
unique
choice
modeling
question, with
a
unique
set
of
attribute
levels
distinguishing
options
A
and
B.
Following
standard
practice
for
internet
surveys,
these
questions
will
be
randomly
assigned
to
survey
respondents,
with
each
respondent
considering
three
questions.

Information
Provision
According
to
Arrow
et
al.
(
1993,
p.
4605),
"[i]f
CV
surveys
are
to
elicit
useful
information
about
willingness
to
pay,
respondents
must
understand
exactly
what
it
is
they
are
being
asked
to
value."
It
is
also
well
known
that
information
provision
can
influence
WTP
estimates
derived
from
stated
preference
survey
instruments
and
that
respondents
must
be
provided
with
sufficient
information
to
make
an
informed
assessment
of
policy
impacts
on
utility
(
e.
g.,
Bergstrom
and
Stoll
1989;
Bergstrom
et
al.
1989;
Hoehn
and
Randall
2002).
As
stated
clearly
by
Bateman
et
al.
(
2002,
p.
122),
"[d]escribing
the
good
and
the
policy
context
of
interest
may
require
a
combination
of
textual
information,
photographs,
drawings,
maps,
charts
and
graphs.
… [V]isual
isual
aids
are
helpful
ways
of
conveying
complex
information …
while
simultaneously
enhancing
respondents'
attention
and
interest."
Given
that
many
respondents
may
not
be
fully
familiar
with
the
details
of
programs
to
reduce
I&
E
losses
and
potential
impacts
on
aquatic
life,
it
is
anticipated
that
the
survey
will
follow
the
approach
of
Opaluch
et
al.
(
1993),

Johnston
et
al.
(
1999),
and
Johnston
et
al.
(
2002a),
each
of
whom
used
short
slide
shows
and/
or
video
presentations
to
provide
information
to
survey
respondents
[
see
also
Horne
et
al.
(
2005),

Ready
et
al.
(
1995),
Powe
and
Bateman
(
2004),
and
Duke
and
Ilvento
(
2004)].
Focus
groups
in
all
cases
demonstrated
that
the
use
of
such
multimedia
methods
was
able
to
substantially
increase
respondents'
comprehension
of
the
goods
and
policies
addressed
by
the
survey
instrument,
and
to
encourage
appropriate
neoclassical
tradeoffs
in
responding
to
choice
experiment
questions
(
e.
g.,
Opaluch
et
al.
1993).
This
finding
is
consistent
with
previous
valuation
studies
such
as
Bateman
et
al.
(
2002,
p.
122),
who
indicate
that
photographs
can
be
the
"
best
way"
of
depicting
certain
types
of
policy
changes,
as
long
as
they
are
"
pre­
tested
in
the
same
way
as
textual
descriptions."

Following
this
guidance
of
Bateman
et
al.
(
2002)
and
prior
examples
of
Opaluch
et
al.

(
1993)
and
Johnston
et
al.
(
2002a),
among
others,
EPA
extensively
pretested
all
photographs
and
graphics
used
in
the
introductory
materials
and
survey
itself,
to
ensure
that
these
graphical
elements
were
not
prejudicial,
and
that
they
did
not
bias
responses.
Photographs
or
graphics
judged
to
be
prejudicial
based
on
focus
groups
and/
or
cognitive
interviews
were
removed
and
replaced.
EPA
acknowledges
that
certain
types
of
graphics
and
photographs
can
be
prejudicial
in
certain
contexts, and
hence
has
pretested
all
graphical
elements
extensively.
However,
EPA
also
emphasizes
that
there
is
no
precedent
or
support
in
the
literature
for
the
total
elimination
of
pictures
in
survey
instruments.
To
the
contrary,
the
literature
explicitly
indicates
that
pictures
and
graphics
may
be
necessary
and
useful
components
of
survey
instruments
in
many
cases
(
Bateman
et
al.
2002).
EPA
highlights
that
numerous
peer­
reviewed
surveys
described
in
the
literature
include
pictures
and
graphics
both
in
survey
instruments
and
in
introductory
materials
such
as
slide
shows.
For
example,
see
Horne
et
al.
(
2005),
Ready
et
al.
(
1995),
Powe
and
Bateman
(
2004)
and
Duke
and
Ilvento
(
2004),
Opaluch
et
al.
(
1993),
Johnston
et
al.
(
1999,

2002a,
2002b),
and
Mazzotta
et
al.
(
2002).
Bateman
et
al.
(
2002)
also
includes
examples
of
various
types
of
survey
materials
including
pictures
and
graphical
elements.
Amelioration
of
Hypothetical
Bias
EPA
considers
the
amelioration
of
hypothetical
bias
to
be
a
paramount
concern
in
survey
design.
However,
the
agency
also
considers, based on prior evidence from the literature,
that
hypothetical
bias
is
not
unavoidable.
For
example,
not
all
research
finds
evidence
of
hypothetical
bias
in
stated
preference
valuation
(
Champ
and
Bishop
2001;
Smith
and
Mansfield
1998;
Vossler
and
Kerkvliet
2003;
Johannesson
1997),
and
some research shows
that
hypothetical
bias
may
be
ameliorated
using
cheap­
talk,
certainty
adjustments,
or
other
mechanisms
(
Champ
et
al.

1997;
Champ
et
al.
2004;
Cummings
and
Taylor
1999;
Loomis
et
al.
1996).

EPA
emphasizes
that
it
has
been
established
that
referendum­
type
SP
choices
are
incentive
compatible
given
that
certain
conditions
are
met,
including
the
condition
that
responses
are
considered
by
respondents
to
be
consequential,
or
potentially
influencing
public
policy
decisions
(
Carson
et
al.
2000).
Hence,
choice­
based
surveys
should
provide
no
incentive
for
non­
truthful
preference
revelation
(
or
hypothetical
bias),
as
long
as
choices
are
considered
consequential.
Induced­
value
laboratory
experiments
verify
this
result,
showing
incentive
compatibility
in
both
hypothetical
and
real
referenda,
and
an
equivalent
ability
to
elicit
valid
demand
information
(
Taylor
et
al.
2001).

Both
the
introductory
materials
and
the
survey
itself
are
explicitly
designed
to
emphasize
the
importance
of
the
budget
constraint
and
program
cost.
For
example,
the
introductory
slide
show
presentation
contains
four
distinct
reminders
concerning
the
importance
of
program
cost
and
the
budget
constraint.
The
survey
itself
includes
an
additional
two
explicit
reminders
of
program
cost
and
the
budget
constraint.

The
survey
has
also
been
explicitly
designed
to
maximize
the
consequentiality
of
choice
experiment
questions,
thereby
maximizing
incentive
compatibility
(
i.
e.,
reducing
strategic
and
hypothetical
biases),
following
clear
guidance
of
Carson
et
al.
(
2000).
Elements
specifically
designed
to
maximize
consequentiality
include
a]
explicit
mention
of
the
agency
involved,
b]

explicit
mention
that
this
survey
is
associated
with
considerations
of
actual
policies
that
are
being
considered
by
US
EPA,
c]
numerous
details
provided
in
the
slide
show
and
survey
concerning
specifics
of
the
proposed
policies,
d]
emphasis
that
some
sort
of
policy
will
be
enacted
by
EPA, and
that
the
type
of
policy
enacted
will
depend
in
part
on
survey
results.
Johnston
and
Joglekar
(
2005)
show
the
capacity
of
such
information
to
eliminate
hypothetical
bias
in
choice-based
stated
preference
WTP
estimation.
Focus
groups
and
cognitive
interviews
provided
clear
evidence
that
respondents
viewed
choices
as
consequential,
that
they
considered
their
budget
constraints
when
responding
to
all
questions,
and
that
they
would
answer
the
same
way
were
similar
questions
to
be
asked
in
a
binding
referendum.
When
asked
if
they
thought
about
the
program
cost
in
the
same
way
as
"
money
coming
out
of
their
pocket,"
the
vast
majority
of
focus
group
and
interview
respondents
indicated
that
they
treated
program
costs
the
same
way
that
they
would
have
if
there
were
actual
money
consequences.
For
example,
respondents
made
statements
such
as
"
If
this
is
just
an
exercise
to
see
how
I
would
react,
I'd
still
answer
the
same
as
if
the
government
really
used
it
to
create
policy,"
and
"
I
would
be
very
certain
that
I
would
answer
in
a
real
referendum
the
way
I
did
it
here."

Given
this
evidence
from
focus
groups
and
cognitive
interviews,
and
clear
indications
of
consequentiality
in
the
survey,
EPA
finds
little
evidence
to
suggest
that
hypothetical
bias
should
be
significant
in
the
proposed
survey
instrument.
Regarding
the
potential
use
of
cheap
talk
mechanisms
or
other
devices
to
further
address
the
potential
for
hypothetical
bias,
the
Agency
emphasizes
that
the
literature
is
mixed
as
to
their
performance.
For
example,
the
seminal
work
by
Cummings
and
Taylor
(
1999)
shows
that
cheap
talk
is
able
to
reduce
hypothetical
biases.

Similar
results
are
shown
by
Aadland
and
Caplan
(
2003).
However,
other
authors
(
e.
g.,

Cummings
et
al.
1995;
List
2001;
Brown
et
al.
2003)
find
that
a
cheap
talk
script
is
only
effective
under
certain
circumstances,
and
for
certain
types
of
respondents.
For
example,
Cummings
et
al.

(
1995)
find
that
a
relatively
short
cheap
talk
script
actually
worsens
hypothetical
bias,
while
a
longer
script
appears
to
ameliorate
bias.
Brown
et
al.
(
2003)
find
cheap
talk
only
effective
at
higher
bid
amounts, a
result
mirrored
by
Murphy
et
al.
(
2004).
Still
other
authors
find
no
effect
of
cheap
talk,
including
Poe
et
al.
(
2002).
Given
the
clearly
mixed
experiences
with
such
mechanisms,
EPA
is
not
convinced
that
cheap
talk
scripts
are
likely
to
provide
a
panacea
for
hypothetical
bias
in
the
present
case, although
they
appear
to
reduce
bias
in
a
limited
set
of
circumstances.
More
importantly
however,
given
the
already
lengthy
slide
show,
the
addition
of
a
cheap
talk
script
would
likely
detract
from
the
respondents'
ability
to
remember
other
critical
information.
Hence,
the
incorporation
of
a
cheap
talk
script
would
involve
significant
tradeoffs,

for
which
the
benefits
are
unclear.

Amelioration
of
Symbolic
Biases
and
Warm­
Glow
Effects
Following
clear
guidance
of
Arrow
et
al.
(
1993)
and
others,
EPA
has
taken
repeated
steps
to
ensure
that
survey
responses
reflect
the
value
of
the
affected
fish
resources
only,
and
do
not
reflect
symbolic
or
warm
glow
concerns
(
Mitchell
and
Carson
1989).
Focus
groups
and
cognitive
interviews
indicated
that
respondents
were
aware
of
the
ecological
role
of
fish,
and
in
many
cases
grounded
their
preferences
in
the
role
that
fish
play
in
larger
aquatic
ecosystems
(
e.
g.,
as
food
for
other
organisms,
etc.).
However,
these
pretests
showed
that
the
overwhelming
majority
of
focus
group
respondents
answered
questions
based
on
the
specific
changes
in
resources
described
in
the
survey,
and
not
based
on
symbolic
concerns.

Following
explicit
guidance
of
the
NOAA
Blue
Ribbon
Panel
on
Contingent
Valuation
(
Arrow
et
al.
1993,
p.
4609),
EPA
has
explicitly
designed
all
elements
of
the
survey
to
"
deflect
the
general
`
warm
glow'
of
giving
or
the
dislike
of
`
big
business'
away
from
the
specific
program
that
is
being
valued."
This
was
done
in
a
variety
of
ways,
based
on
prior
examples
in
the
literature.
For
example,
following
the
general
examples
of
Opaluch
et
al.
(
1993),
the
survey
introductory
materials
clearly
indicate
that
facility
owners
and
investors
would
"
do
their
fair
share"
to
bear
the
costs
of
the
new
technology,
and
that
reductions
in
I&
E
losses
could
not
be
achieved
without
the
imposition
of
costs
on
households.
As
noted
in
the
proposed
slide
show:

Slide
26:
"
While
these
policies
would
reduce
fish
losses,
they
would
also
increase
the
production
costs
of
commercial
facilities.
While
a
significant
proportion
of
these
costs
would
be
absorbed
by
the
facility
owners
and
investors,
it
is
unavoidable
that
some
would
be
passed
on
to
consumers.
This
would
increase
the
cost
of
living
for
all
Northeast
households,
including
yours."

Similar
techniques
are
clearly
illustrated
in
the
survey
of
Mitchell
and
Carson
(
1984),
to
ameliorate
symbolic
concerns.
Focus
groups
clearly
showed
that
this
language
allowed
respondents
to
answer
choice
questions
based
on
the
attributes
provided,
while
at
the
same
time
deflecting
"
the
general
`
warm
glow'
of
giving
or
the
dislike
of
`
big
business'"
discussed
by
Arrow
et
al.
(
1993).
Moreover,
tests
of
survey
versions
that
did
not
include
this
language
revealed
significant
and
obvious
protest
responses
related
to
the
perception
that
"
big
business"

was
not
doing
its fair
share
to
address
the
problem.
This
concern
did
not
arise
in
survey
pretests
once
the
above­
noted
language
was
inserted
into
the
survey.

The
survey
and
introductory
materials
also
include
clear
language
to
instruct
respondents
only
to
consider
the
specific
attributes
in
the
survey,
and
not
to
base
answers
on
broader
environmental
concerns.

Slide
52:
"
When
considering
these
policies,
you
should
only
consider
effects
on
fish
and
the
cost
to
your
household.
This
is
because
scientists
expect
no
other
significant
environmental
or
economic
impacts,
other
than
those
described
in
the
survey…"
Focus
groups
and
cognitive
interviews
clearly
showed
that
the
vast
majority
of
focus
group
survey
responses
did
not
reflect
symbolic
or
warm­
glow
concerns.
This
is
also
consistent
with
the
statement
from
Arrow
et
al.
(
1993)
that
a
referendum­
type
format
may
limit
the
warm­
glow
effect.
For
example,
one
focus
group
participant
stated,
"
You
know,
if
you
were
telling
me
you
were
saving
spiders,
who
cares?
You
know
what
I
mean?
So,
it
is
not
just
to
do
the
right
thing.

I
think
that
fish
are
important."
Another
participant
stated,
"
No,
I
don't
have
any
specific
fish
in
mind.
I
think
every
fish
will
be
equally
important
for
one
thing
or
another,
whether
it's
for
food
or
I
guess
some
fish
are
food
for
other
fish.
Or
even
just
to
enjoy,
you
know.
They
have
a
right
to
live
like
anything
else,
you
know?"

This
evidence
notwithstanding,
EPA
believes
that
it
is
important
to
include
follow­
up
questions
to
ensure
that
responses
do
not
reflect
symbolic
biases.
Questions 9, 11, 12, and 13 in the survey instrument, which address the rationale for choice responses given earlier in the survey,
explicitly
test
for
the
presence
of
symbolic
or
warm­
glow
biases.
Follow­
up
questions
such
as
these
are
common
in
stated
preference
survey
instruments,
to
assess
the
underlying
reasons
for
the
observed
valuation
responses
(
e.
g.,
Mitchell
and
Carson
1984).

Assessing
Scope
Sensitivity
The
NOAA
Blue
Ribbon
Panel
on
Contingent
Valuation
(
Arrow
et
al.
1993,
p.
4605)

states
clearly
that
"
if
CV
surveys
are
to
elicit
useful
information
about
willingness
to
pay,

respondents
must
understand
exactly
what
it
is
they
are
being
asked
to
value
(
or
vote
upon)…"

They
further
indicate
that
surveys
providing
"
sketchy
details"
about
the
results
of
proposed
policies
call
"
into
question
the
estimates
derived
therefrom,"
and
hence
suggest
a
high
degree
of
detail
and
richness
in
the
descriptions
of
scenarios.
Similar
guidance
is
provided
by
other
key
sources
in
the
CVM
literature
(
e.
g.,
Mitchell
and
Carson
1989;
Louviere
et
al.
2000).
Among
the
reasons
for
this
guidance
are
that
such
descriptions
tend
to
encourage
appropriate
framing
and
sensitivity
to
scope.

Following
Arrow
et
al.
(
1993),
Mitchell
and
Carson
(
1989),
and
others,
while
noting
the
clear
limitations
in
scope
tests
discussed
by
Heberlein
et
al.
(
2005),
EPA
believes
that
it
is
important
that
survey
responses
in
this
case
show
sensitivity
to
scope.
This
is
one
of
the
primary
reasons
for
the
use
of
choice
experiment
methodology,
which
is
better
able
to
capture
WTP
differentials
related
to
changes
in
resource
scope
(
Bateman
et
al.
2002).
EPA emphasizes that, unlike open-ended questions, for which scope insensitivity is a primary concern, choice experiments generally have shown much less difficulty with respondents reacting appropriately to the scope and scale of resource changes.
Moreover,
as
clearly
noted
by
Bennett
and
Blamey
(
2001,
p.
231),
"
internal
scope
tests
are
automatically
available
from
the
results
of
a
[
choice
modeling]
exercise."
That
is,
within
choice
experiments,
sensitivity
to
scope
is
indicated
by
the
statistical
significance
and
sign
of
parameter
estimates
associated
with
program
attributes
(
Bennett
and
Blamey
2001).
Internal
scope
sensitivity
will
therefore
be
assessed
through
model
results
for
the
variables
Fish
Saved,
Population
Change,
and
Catch
Change.
Statistical
significance
of
these
variables
 
along
with
a
positive
sign
 
indicates
that
respondents,
on
average,
are
more
likely
to
choose
plans
with
larger
quantities
of
these
variables.
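The internal scope check just described can be sketched in a few lines of code. The sketch below is purely illustrative: it uses synthetic data with assumed preference parameters (not EPA survey data), and a simple binary logit stands in for the conditional logit typically estimated for choice experiments. The check itself is the one named above: fit the model and inspect the sign and Wald z-statistic of each attribute coefficient.

```python
# Illustrative internal scope check on synthetic data (assumed values):
# fit a binary logit by Newton-Raphson and inspect sign and significance
# of the attribute coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
fish_saved = rng.choice([0.0, 1.0, 2.0], size=n)  # hypothetical attribute levels
cost = rng.choice([0.0, 1.0], size=n)

# Assumed true preferences: utility rises with Fish Saved, falls with Cost.
utility = 0.8 * fish_saved - 1.0 * cost
choose = (rng.random(n) < 1.0 / (1.0 + np.exp(-utility))).astype(float)

X = np.column_stack([np.ones(n), fish_saved, cost])
beta = np.zeros(3)
for _ in range(25):                      # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))  # predicted choice probabilities
    W = p * (1.0 - p)
    grad = X.T @ (choose - p)            # score of the log-likelihood
    hess = X.T @ (X * W[:, None])        # observed information matrix
    beta += np.linalg.solve(hess, grad)

se = np.sqrt(np.diag(np.linalg.inv(hess)))  # Wald standard errors
z = beta / se
print(f"Fish Saved: beta={beta[1]:.2f}, z={z[1]:.1f}")  # positive and significant
print(f"Cost:       beta={beta[2]:.2f}, z={z[2]:.1f}")  # negative and significant
```

A positive, statistically significant coefficient on Fish Saved (and analogously on Population Change and Catch Change) is the internal evidence of scope sensitivity described in the text.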

In addition to the weak or internal scope tests implicit in all choice experiment statistical analysis, EPA will also conduct external scope tests (cf. Giraud et al. 1999). The primary difference between internal and external tests is that the former assess sensitivity to scope across the choices of a single respondent, while the latter involve split-sample assessments across different respondents. Within a choice modeling context, external scope tests are generally considered "stronger," although they are also more likely to be confounded by differences in the implied choice frame (Bennett and Blamey 2001). A variety of options for external scope tests exist, depending on the structure of the stated choice questions under consideration.

In the present case, attribute-by-attribute external scope tests will be conducted over a split sub-sample of respondents considering a specific set of choices, with all attributes held constant across the considered choices except the scope of the attribute for which the test is to be conducted. For example, to conduct an external scope test for reductions in annual fish losses, one would consider a set of choices that is identical over two respondent groups, except that one group considers a choice with a greater reduction in fish losses. Assessing the choices over this split sample allows for an external test of scope. To illustrate this test, consider the following stylized choice between Option A and Option B. The generic labels "Level 0", "Level 1", and "Level 2" are used to denote attribute levels, where for all attributes Level 2 > Level 1 > Level 0.
Table B2: Illustration of an External Scope Test

Variable | Option A | Option B
Fish Saved per Year | Sample 1: Fish Saved Level 1; Sample 2: Fish Saved Level 2 | Fish Saved Level 0
Effect on Long-Term Fish Populations | Population Change Level 0 | Population Change Level 0
Effect on Annual Recreational and Commercial Catch | Catch Change Level 0 | Catch Change Level 0
Increase in Cost of Living for Your Household | Cost Level 1 | Cost Level 0
In the above example, only Fish Saved and Cost vary across the choice options. Because both Fish Saved and Cost are higher in Option A than in Option B, neither option is dominant.

In the illustrated split-sample test, respondent sample 1 views the choice with Fish Saved at Level 1, while respondent sample 2 views an otherwise identical choice with Fish Saved at Level 2, where Level 2 > Level 1. If responses are externally sensitive to scope in Fish Saved, this will manifest in a greater proportion of sample 2 respondents choosing Option A than sample 1 respondents. This hypothesis may be easily assessed using a test of equal proportions across the two sub-samples, and provides a simple attribute-by-attribute test of external scope. Analogous tests may be conducted for all attributes within the choice experiment design, using parallel methods.
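The test of equal proportions mentioned above is a standard two-sample z-test on the share of each split sample choosing Option A. The sketch below illustrates the calculation; the respondent counts are hypothetical, chosen only for illustration.

```python
# Two-sample test of equal proportions (pooled), as used for the
# attribute-by-attribute external scope test described in the text.
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2)) # standard error under H0
    z = (p1 - p2) / se
    p_value = 2 * 0.5 * (1 - erf(abs(z) / sqrt(2)))      # 2 * Phi(-|z|), normal CDF via erf
    return z, p_value

# Hypothetical counts: 78 of 125 sample-2 respondents (Fish Saved Level 2)
# chose Option A, versus 52 of 120 sample-1 respondents (Fish Saved Level 1).
z, p = two_proportion_z(78, 125, 52, 120)
print(f"z = {z:.2f}, p = {p:.4f}")  # rejects equal proportions at the 1% level
```

A significantly higher proportion of Option A choices in the sample facing the larger Fish Saved level is the external evidence of scope sensitivity the test is designed to detect.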
EPA emphasizes that the formal applicability of the above-noted scope test is contingent upon the specific choice frame implied by the levels of other attributes in the choice question. This is a characteristic of nearly all external scope tests applied in choice experiment frameworks (Bennett and Blamey 2001).

Split-sample tests such as those proposed above often require the addition of question versions to the experimental design, to accommodate the specific structural needs of the attribute-by-attribute external scope test. Otherwise, confounding effects of other varying attributes can render the results of scope tests ambiguous. In the present case, the proposed tests would require the addition of up to six unique question versions to the experimental design, enabling scope tests for the three non-cost attributes within the 316(b) choice experiment scenarios. If scope tests in additional question frames are desired (e.g., the same scope test illustrated above, but given Level 1 for the population and catch attributes), still more question versions would be added. While small numbers of questions added to the experimental design should have minimal impacts on overall efficiency (e.g., orthogonality of the design), larger numbers may have a more significant impact. Hence, given constraints on the total number of survey respondents, there is a potential empirical tradeoff between the number of external scope tests that may be conducted and the efficiency of the experimental design and statistical analysis.
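One simple way to monitor the orthogonality concern noted above is to recompute pairwise correlations among the design's attribute columns as question versions are added; near-zero off-diagonal correlations indicate the design remains close to orthogonal. The sketch below uses a standard L9 orthogonal array as a stand-in for the actual experimental design (the attribute coding is assumed for illustration only).

```python
# Gauge design orthogonality via the correlation matrix of attribute columns.
import numpy as np

# Hypothetical coded design (standard L9 orthogonal array): rows are choice
# questions; columns are Fish Saved, Population Change, Catch Change, Cost levels.
design = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

corr = np.corrcoef(design, rowvar=False)       # 4x4 attribute correlation matrix
off_diag = corr[~np.eye(4, dtype=bool)]        # off-diagonal entries only
print("max |correlation| between attributes:", np.abs(off_diag).max())
```

Appending extra split-sample question versions to `design` and rerunning the check shows directly how much correlation (and hence efficiency loss) the additions introduce.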

Communicating Uncertainty to Respondents
EPA agrees that the role of risk and uncertainty is an important issue to be addressed in the development of benefits estimates, and points out that the literature provides numerous examples of cases in which appropriate survey design, including focus groups, was used to successfully address such concerns. For example, as stated by Desvousges et al. (1984), "using contingent valuation to estimate the benefits of hazardous waste management regulations requires detailed information on how and the extent to which respondents understand risk (or probability) and how government regulatory actions might change it... Using focus groups helped make this determination..." EPA also emphasizes that all regulatory analyses involve uncertainty of some type (Boardman et al. 2001).

The ecological outcome of I&E reductions is subject to considerable uncertainty. EPA believes that it is important that survey respondents be aware of this uncertainty, and that their responses reflect the knowledge that the resource changes reflected in the survey are scientific estimates. However, EPA is also aware of the clear advice from the choice modeling literature (e.g., Bennett and Blamey 2001; Louviere et al. 2000) to avoid cognitive burden on respondents.

Hence, the proposed survey materials clearly indicate the uncertainty involved with the described resource changes in choice modeling scenarios, yet do so in a way designed to minimize cognitive burden.

For example, the introductory slide show states:

Slide 30: "You will be shown different policy options, with different effects on fish. This is because scientists are still working to determine what the exact effect on fish will be, so it is important to know how you would react to a wide range of possible outcomes. Common sense indicates that preventing the loss of fish eggs and young fish will mean more adult fish in future years, but at this point there is still significant uncertainty regarding the exact size of these future effects."

This passage clearly indicates the uncertainty involved with scientific estimates of the outcomes of I&E regulations. This is followed by further reminders of uncertainty:
Slide 37: "The survey uses a 0 to 100 scale to show expected long-term effects on fish populations. This is the best estimate as to what will happen to average fish stocks, across all species, after 3-5 years of new policies."

Slide 53: "This survey asks you to make important choices, even though some information is not known to scientists. For example, there is no way to know the exact effect of these new policies on the fishing industry. Even so, the government will be making its final decision by July 2006. The results of this study will be used by EPA to assess the value of the proposed policies to the public, and will also be posted on EPA's website. Your responses to this survey will make a difference."

Focus groups and cognitive interviews indicated clearly that respondents understood that the numbers provided in the survey were scientific estimates subject to uncertainty, and answered choice questions accordingly. Respondents, in general, were comfortable making decisions in the presence of this uncertainty. For example, one respondent stated, "...we know that fish populations are declining and so these are all estimates. And I accept them as estimates." Other respondents made statements such as: "If they had stated figures as though these figures are it... that I would question," and "I'll take that the estimate comes with some degree of scientific knowledge behind it. More than throwing a dart, but accept it for what it was."

EPA also tested alternative versions of the survey instrument in which choice experiment attributes were presented as 90% confidence ranges, rather than as point estimates. The suggested question format was pre-tested in cognitive interviews that were conducted with individual respondents on September 2. Respondents were explicitly asked whether the ranges were helpful in understanding the uncertainty of estimates presented in the choice question or whether they were a source of confusion. Seven of the eight respondents interviewed on this occasion indicated that the ranges were more confusing, that the original presentation of resource changes was clearer, and that they clearly understood that the ecological changes described in the survey were uncertain. Furthermore, respondents were comfortable making decisions in the presence of this uncertainty.

5(c) Reporting Results
The results of the survey will be made public as part of the benefits analysis for the 316(b) regulation for Phase III facilities. Provided information will include summary statistics for the survey data, extensive documentation for the statistical analysis, and a detailed description of the final results. The survey data will be released only after they have been thoroughly vetted to ensure that all potentially identifying information has been removed.
References

Adamowicz, W., J. Louviere, and M. Williams. 1994. "Combining Revealed and Stated Preference Methods for Valuing Environmental Amenities." Journal of Environmental Economics and Management 26: 271-292.

Adamowicz, W., P. Boxall, M. Williams, and J. Louviere. 1998. "Stated Preference Approaches for Measuring Passive Use Values: Choice Experiments and Contingent Valuation." American Journal of Agricultural Economics 80(1): 64-75.

Aadland, D., and A. J. Caplan. 2003. "Willingness to Pay for Curbside Recycling with Detection and Mitigation of Hypothetical Bias." American Journal of Agricultural Economics 85(2): 492-502.

Arrow, K., R. Solow, E. Leamer, P. Portney, R. Radner, and H. Schuman. 1993. "Report of the NOAA Panel on Contingent Valuation." Federal Register 58(10): 4602-4614.

Bateman, I. J., R. T. Carson, B. Day, M. Hanemann, N. Hanley, T. Hett, M. Jones-Lee, G. Loomes, S. Mourato, E. Ozdemiroglu, D. W. Pearce, R. Sugden, and J. Swanson. 2002. Economic Valuation with Stated Preference Surveys: A Manual. Northampton, MA: Edward Elgar.

Ben-Akiva, M. and S. R. Lerman. 1985. Discrete Choice Analysis: Theory and Applications to Travel Demand. Cambridge, MA: MIT Press.

Bennett, J. and R. Blamey, eds. 2001. The Choice Modelling Approach to Environmental Valuation. Northampton, MA: Edward Elgar.

Bergstrom, J. C., J. R. Stoll, J. P. Titre, and V. L. Wright. 1990. "Economic Value of Wetlands-based Recreation." Ecological Economics 2: 129-147.

Bergstrom, J. C. and J. R. Stoll. 1989. "Application of Experimental Economics Concepts and Precepts to CVM Field Survey Procedures." Western Journal of Agricultural Economics 14(1): 98-109.

Berrens, R. P., A. K. Bohara, H. C. Jenkins-Smith, C. L. Silva, and D. L. Weimer. 2004. "Information and Effort in Contingent Valuation Surveys: Application to Global Climate Change Using National Internet Samples." Journal of Environmental Economics and Management 47(2): 331-363.
Besedin, E., R. Johnston, M. Ranson, and J. Ahlen, Abt Associates Inc. 2005. "Findings from 2005 Focus Groups Conducted Under EPA ICR #2155.01." Memo to Erik Helm, U.S. EPA/OW, October 18, 2005. See docket for EPA ICR #2155.02.

Blamey, R. K., J. W. Bennett, and M. D. Morrison. 1999. "Yea-saying in Contingent Valuation Surveys." Land Economics 75: 126-141.

Boardman, A. E., D. H. Greenberg, A. R. Vining, and D. L. Weimer. 2001. Cost-Benefit Analysis: Concepts and Practice, 2nd edition. Upper Saddle River, NJ: Prentice Hall.

Boyle, K. J., B. Roach, and D. G. Waddington. 1998. 1996 Net Economic Values for Bass, Trout and Walleye Fishing, Deer, Elk and Moose Hunting, and Wildlife Watching: Addendum to the 1996 National Survey of Fishing, Hunting and Wildlife-Associated Recreation. Report 96-2. U.S. Fish and Wildlife Service, August.

Brown, T. C., I. Ajzen, and D. Hrubes. 2003. "Further Tests of Entreaties to Avoid Hypothetical Bias in Referendum Contingent Valuation." Journal of Environmental Economics and Management 46(2): 353-361.

Cameron, T. A. and D. D. Huppert. 1989. "OLS versus ML Estimation of Non-market Resource Values with Payment Card Interval Data." Journal of Environmental Economics and Management 17: 230-246.

Cameron, T. 1992. "Combining Contingent Valuation and Travel Cost Data for the Valuation of Nonmarket Goods." Land Economics 68(3): 302-317.

Carson, R. T., T. Groves, and M. J. Machina. 2000. "Incentive and Informational Properties of Preference Questions." Working Paper, Department of Economics, University of California, San Diego.

Champ, P. A., and R. C. Bishop. 2001. "Donation Payment Mechanisms and Contingent Valuation: An Empirical Study of Hypothetical Bias." Environmental and Resource Economics 19(4): 383-402.

Champ, P. A., R. C. Bishop, T. C. Brown, and D. W. McCollum. 1997. "Using Donation Mechanisms to Value Non-use Benefits from Public Goods." Journal of Environmental Economics and Management 33(2): 151-162.

Champ, P. A., R. Moore, and R. C. Bishop. 2004. "Hypothetical Bias: The Mitigating Effects of Certainty Questions and Cheap Talk." Selected paper prepared for presentation at the American Agricultural Economics Association Annual Meeting, Denver, Colorado.
Croke, K., R. G. Fabian, and G. Brenniman. 1986. "Estimating the Value of Improved Water Quality in an Urban River System." Journal of Environmental Systems 16(1): 13-24.

Cronin, F. J. 1982. "Valuing Nonmarket Goods Through Contingent Markets." Pacific Northwest Laboratory, PNL 4255, Richland, WA.

Cummings, R. G. and G. W. Harrison. 1995. "The Measurement and Decomposition of Non-use Values: A Critical Review." Environmental and Resource Economics 5: 225-247.

Cummings, R. G., G. W. Harrison, and L. L. Osborne. 1995. "Can the Bias of Contingent Valuation Surveys Be Reduced?" Economics working paper, Division of Research, College of Business Administration, University of South Carolina, Columbia, SC.

Cummings, R. G., P. T. Ganderton, and T. McGuckin. 1994. "Substitution Effects in CVM Values." American Journal of Agricultural Economics 76(2): 205-214.

Cummings, R. G. and L. O. Taylor. 1999. "Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method." American Economic Review 89(3): 649-665.

Dennis, M. 2005. "Knowledge Networks' Plan for Increasing the Response Rate and for Statistical Correction for the Main Study for Estimating Respondents' Willingness to Pay for Reduction in Fish Mortality." Memo prepared June 13, 2005.

Desvousges, W. H., V. K. Smith, D. H. Brown, and D. K. Pate. 1984. "The Role of Focus Groups in Designing a Contingent Valuation Survey to Measure the Benefits of Hazardous Waste Management Regulations." Research Triangle Institute: Research Triangle Park, NC.

Desvousges, W. H. and V. K. Smith. 1988. "Focus Groups and Risk Communication: The Science of Listening to Data." Risk Analysis 8: 479-484.

Dillman, D. A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley and Sons.

Duke, J. M. and T. W. Ilvento. 2004. "A Conjoint Analysis of Public Preferences for Agricultural Land Preservation." Agricultural and Resource Economics Review 33(2): 209-219.

Feather, P. M., D. Hellerstein, and T. Tomasi. 1995. "A Discrete-Count Model of Recreational Demand." Journal of Environmental Economics and Management 29: 214-227.

Freeman, A. M., III. 2003. The Measurement of Environmental and Resource Values: Theory and Methods. Washington, DC: Resources for the Future.
Freeman, A. M., III. 1993. "Non-use Values in Natural Resource Damage Assessment." In Valuing Natural Assets, R. J. Kopp and V. K. Smith (eds.). Washington, DC: Resources for the Future.

Giraud, K. L., J. B. Loomis, and R. L. Johnson. 1999. "Internal and External Scope in Willingness to Pay Estimates for Threatened and Endangered Wildlife." Journal of Environmental Management 56: 221-229.

Greene, W. H. 2002. NLOGIT Version 3.0 Reference Guide. Plainview, NY: Econometric Software, Inc.

Greene, W. H. 2003. Econometric Analysis. 5th ed. Upper Saddle River, NJ: Prentice Hall.

Hanemann, W. M. 1984. "Welfare Evaluations in Contingent Valuation Experiments with Discrete Responses." American Journal of Agricultural Economics 66(3): 332-341.

Hanemann, W. M. and B. Kanninen. 1999. "The Statistical Analysis of Discrete-Response CV Data." In Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries, I. J. Bateman and K. G. Willis (eds.). Oxford, UK: Oxford University Press.

Heberlein, T. A., M. A. Wilson, R. C. Bishop, and N. C. Schaeffer. 2005. "Rethinking the Scope Test as a Criterion in Contingent Valuation." Journal of Environmental Economics and Management 50(1): 1-22.

Heckman, J. J. 1979. "Sample Selection Bias as a Specification Error." Econometrica 47(1): 153-161.

Hoehn, J. P. 1991. "Valuing the Multidimensional Impacts of Environmental Policy: Theory and Methods." American Journal of Agricultural Economics 73(2): 289-299.

Hoehn, J. P., F. Lupi, and M. D. Kaplowitz. 2004. "Internet-Based Stated Choice Experiments in Ecosystem Mitigation: Methods to Control Decision Heuristics and Biases." In Proceedings of Valuation of Ecological Benefits: Improving the Science Behind Policy Decisions, a workshop sponsored by the US EPA National Center for Environmental Economics and the National Center for Environmental Research.

Hoehn, J. P., and A. Randall. 2002. "The Effect of Resource Quality Information on Resource Injury Perceptions and Contingent Values." Resource and Energy Economics 24: 13-31.
Horne, P., P. C. Boxall, and W. L. Adamowicz. 2005. "Multiple-use Management of Forest Recreation Sites: A Spatially Explicit Choice Experiment." Forest Ecology and Management 207(1/2): 189-199.

Huang, J.-C., T. C. Haab, and J. C. Whitehead. 1997. "Willingness to Pay for Quality Improvements: Should Revealed and Stated Preference Data Be Combined?" Journal of Environmental Economics and Management 34(3): 240-255.

Johannesson, M. 1997. "Some Further Experimental Results on Hypothetical Versus Real Willingness to Pay." Applied Economics Letters 4: 535-536.

Johnston, R. J., E. Y. Besedin, and R. F. Wardwell. 2003a. "Modeling Relationships Between Use and Non-use Values for Surface Water Quality: A Meta-Analysis." Water Resources Research 39(12): 1363.

Johnston, R. J. and D. P. Joglekar. 2005. "Validating Hypothetical Surveys Using Binding Public Referenda: Implications for Stated Preference Valuation." American Agricultural Economics Association (AAEA) Annual Meeting, Providence, July 24-27.

Johnston, R. J., G. Magnusson, M. Mazzotta, and J. J. Opaluch. 2002a. "Combining Economic and Ecological Indicators to Prioritize Salt Marsh Restoration Actions." American Journal of Agricultural Economics 84(5): 1362-1370.

Johnston, R. J., J. J. Opaluch, M. J. Mazzotta, and G. Magnusson. 2005. "Who Are Resource Nonusers and What Can They Tell Us About Non-use Values? Decomposing User and Nonuser Willingness to Pay for Coastal Wetland Restoration." Water Resources Research 41(7), doi:10.1029/2004WR003766.

Johnston, R. J., S. K. Swallow, C. W. Allen, and L. A. Smith. 2002b. "Designing Multidimensional Environmental Programs: Assessing Tradeoffs and Substitution in Watershed Management Plans." Water Resources Research 38(7): IV1-13.

Johnston, R. J., S. K. Swallow, and T. F. Weaver. 1999. "Estimating Willingness to Pay and Resource Trade-offs With Different Payment Mechanisms: An Evaluation of a Funding Guarantee for Watershed Management." Journal of Environmental Economics and Management 38(1): 97-120.

Johnston, R. J., S. K. Swallow, T. J. Tyrrell, and D. M. Bauer. 2003b. "Rural Amenity Values and Length of Residency." American Journal of Agricultural Economics 85(4): 1000-1015.
Johnston, R. J., T. F. Weaver, L. A. Smith, and S. K. Swallow. 1995. "Contingent Valuation Focus Groups: Insights From Ethnographic Interview Techniques." Agricultural and Resource Economics Review 24(1): 56-69.

Kaplowitz, M. D., F. Lupi, and J. P. Hoehn. 2004. "Multiple Methods for Developing and Evaluating a Stated-Choice Questionnaire to Value Wetlands." Chapter 24 in Methods for Testing and Evaluating Survey Questionnaires, S. Presser, J. M. Rothgeb, M. P. Couper, J. T. Lessler, E. Martin, J. Martin, and E. Singer (eds.). New York: John Wiley and Sons.

Kling, C. 1997. "The Gains from Combining Travel Cost and Contingent Valuation Data to Value Nonmarket Goods." Land Economics 73(3): 428-439.

Layton, D. F. 2000. "Random Coefficient Models for Stated Preference Surveys." Journal of Environmental Economics and Management 40(1): 21-36.

Li, H., R. P. Berrens, A. K. Bohara, H. C. Jenkins-Smith, C. L. Silva, and D. L. Weimer. 2005. "Exploring the Beta Model Using Proportional Budget Information in a Contingent Valuation Study." Economics Bulletin 17(8): 1-9.

List, J. A. 2001. "Do Explicit Warnings Eliminate the Hypothetical Bias in Elicitation Procedures? Evidence from Field Auctions for Sportscards." American Economic Review 91(5): 1498-1507.

Loomis, J., T. Brown, B. Lucero, and G. Peterson. 1996. "Improving Validity Experiments of Contingent Valuation Methods: Results of Efforts to Reduce the Disparity of Hypothetical and Actual Willingness to Pay." Land Economics 72(4): 450-461.

Louviere, J. J., D. A. Hensher, and J. D. Swait. 2000. Stated Preference Methods: Analysis and Application. Cambridge, UK: Cambridge University Press.

Maddala, G. S. 1983. Limited-Dependent and Qualitative Variables in Econometrics. Econometric Society Monographs No. 3. Cambridge: Cambridge University Press.

Magat, W. A., J. Huber, K. W. Viscusi, and J. Bell. 2000. "An Iterative Choice Approach to Valuing Clean Lakes, Rivers, and Streams." Journal of Risk and Uncertainty 21(1): 7-43.

Mazzotta, M. J., J. J. Opaluch, G. Magnuson, and R. J. Johnston. 2002. "Setting Priorities for Coastal Wetland Restoration: A GIS-Based Tool That Combines Expert Assessments and Public Values." Earth System Monitor 12(3): 1-6.
McConnell, K. E. 1990. "Models for Referendum Data: The Structure of Discrete Choice Models for Contingent Valuation." Journal of Environmental Economics and Management 18(1): 19-34.

McFadden, D. 1981. "Econometric Models of Probabilistic Choice." In Structural Analysis of Discrete Data, C. F. Manski and D. L. McFadden (eds.). Cambridge, MA: MIT Press.

McFadden, D. and K. Train. 2000. "Mixed Multinomial Logit Models for Discrete Responses." Journal of Applied Econometrics 15(5): 447-470.

Mitchell, R. C., and R. T. Carson. 1981. An Experiment in Determining Willingness to Pay for National Water Quality Improvements. Preliminary draft of a report to the U.S. Environmental Protection Agency. Washington, DC: Resources for the Future.

Mitchell, R. C., and R. T. Carson. 1984. A Contingent Valuation Estimate of National Freshwater Benefits: Technical Report to the U.S. Environmental Protection Agency. Washington, DC: Resources for the Future.

Mitchell, R. C. and R. T. Carson. 1989. Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, DC: Resources for the Future.

Murphy, J. J., T. Stevens, and D. Weatherhead. 2004. "Is Cheap Talk Effective at Eliminating Hypothetical Bias in a Provision Point?" Working Paper No. 2003-2. Department of Resource Economics, University of Massachusetts, Amherst.

Newell, L. W. and S. K. Swallow. 2002. "Are Stated Preferences Invariant to the Prospect of Real-Money Choice?" Selected Paper, American Agricultural Economics Association, Long Beach, California, July 28-31.

Olsen, D., J. Richards, and R. D. Scott. 1991. "Existence and Sport Values for Doubling the Size of Columbia River Basin Salmon and Steelhead Runs." Rivers 2(1): 44-56.

Opaluch, J. J., S. K. Swallow, T. Weaver, C. Wessells, and D. Wichelns. 1993. "Evaluating Impacts from Noxious Facilities: Including Public Preferences in Current Siting Mechanisms." Journal of Environmental Economics and Management 24(1): 41-59.

Opaluch, J. J., T. A. Grigalunas, M. Mazzotta, R. J. Johnston, and J. Diamantedes. 1999. Recreational and Resource Economic Values for the Peconic Estuary. Prepared for the Peconic Estuary Program. Peace Dale, RI: Economic Analysis Inc. 124 pp.

Parsons, G. R., P. M. Jakus, and T. Tomasi. 1999. "A Comparison of Welfare Estimates from Four Models for Linking Seasonal Recreational Trips to Multinomial Logit Models of Site Choice." Journal of Environmental Economics and Management 38: 143-157.
Poe, G. L., J. E. Clark, D. Rondeau, and W. D. Schulze. 2002. "Provision Point Mechanisms and Field Validity Tests of Contingent Valuation." Environmental and Resource Economics 23: 105-131.

Poe, G. L., M. P. Welsh, and P. A. Champ. 1997. "Measuring the Difference in Mean Willingness to Pay when Dichotomous Choice Contingent Valuation Responses are not Independent." Land Economics 73(2): 255-267.

Powe, N. A. and I. J. Bateman. 2004. "Investigating Insensitivity to Scope: A Split-Sample Test of Perceived Scheme Realism." Land Economics 80(2): 258-271.

Provencher, B. and R. Bishop. 1997. "An Estimable Dynamic Model of Recreation Behavior with an Application to Great Lakes Angling." Journal of Environmental Economics and Management 33: 107-127.

Ready, R. C., J. C. Whitehead, and G. C. Blomquist. 1995. "Contingent Valuation When Respondents are Ambivalent." Journal of Environmental Economics and Management 29(2): 181-196.

Rosenberger, R. and J. Loomis. 1999. "The Value of Ranch Open Space to Tourists: Combining Observed and Contingent Behavior Data." Growth and Change 30: 366-383.

Rowe, R. D., E. R. Morey, A. D. Ross, and W. D. Shaw. 1985. Valuing Marine Recreational Fishing on the Pacific Coast. Energy and Resource Consultants Inc. Report prepared for the National Marine Fisheries Service, National Oceanic and Atmospheric Administration. Report LJ-85-18C. March.

Smith, V. K., and C. Mansfield. 1998. "Buying Time: Real and Hypothetical Offers." Journal of Environmental Economics and Management 36: 209-224.

Schkade, D. A. and J. W. Payne. 1994. "How People Respond to Contingent Valuation Questions: A Verbal Protocol Analysis of Willingness to Pay for an Environmental Regulation." Journal of Environmental Economics and Management 26: 88-109.

Taylor, L. O., M. McKee, S. K. Laury, and R. G. Cummings. 2001. "Induced-Value Tests of the Referendum Voting Mechanism." Economics Letters 71(1): 61-65.

Train, K. 1998. "Recreation Demand Models with Taste Differences Over People." Land Economics 74(2): 230-239.

U.S. EPA. 2004a. "Phase II - Large Existing Electric Generating Plants Response to Public Comment." http://www.epa.gov/waterscience/316b/commentph2.htm.

U.S. EPA. 2000. Guidelines for Preparing Economic Analyses. (EPA 240-R-00-003). U.S. EPA, Office of the Administrator, Washington, DC, September 2000.

U.S. Department of Labor, Bureau of Labor Statistics. 2004. "Employer Costs for Employee Compensation: March 2004." Press release. June 24, 2004. http://www.bls.gov/news.release/pdf/ecec.pdf.

Viscusi, W. K., J. Huber, and J. Bell. 2004. "The Value of Regional Water Quality Improvements." Harvard Law and Economics Discussion Paper No. 477, June.

Vossler, C. A. and J. Kerkvliet. 2003. "A Criterion Validity Test of the Contingent Valuation Method: Comparing Hypothetical and Actual Voting Behavior for a Public Referendum." Journal of Environmental Economics and Management 45(3): 631-649.

Whitehead, J. C., and P. A. Groothuis. 1992. "Economic Benefits of Improved Water Quality: A Case Study of North Carolina's Tar-Pamlico River." Rivers 3: 170-178.

Whitehead, J. C., G. C. Blomquist, T. J. Hoban, and W. B. Clifford. 1995. "Assessing the Validity and Reliability of Contingent Values: A Comparison of On-Site Users, Off-Site Users, and Non-users." Journal of Environmental Economics and Management 29(2): 238-251.

Winkelmann, R. 2000. Econometric Analysis of Count Data. New York: Springer.
