
                          Supporting Statement Part A
                    for Information Collection Request for
A survey to improve economic analysis of surface water quality changes: Instrument, Pre-test, and Implementation
               OMB Control Number: 2090-NEW, EPA ICR #: 2588.01

                               TABLE OF CONTENTS

List of Attachments
PART A OF THE SUPPORTING STATEMENT
1.	Identification of the Information Collection
1(a)	Title of the Information Collection
1(b)	Short Characterization (Abstract)
2.	Need for and Use of the Collection
2(a)	Need/Authority for the Collection
2(b)	Practical Utility/Users of the Data
3.	Non-duplication, Consultations, and Other Collection Criteria
3(a)	Non-duplication
3(b)	Public Notice Required Prior to ICR Submission to OMB
3(c)	Consultations
3(d)	Effects of Less Frequent Collection
3(e)	General Guidelines
3(f)	Confidentiality
3(g)	Sensitive Questions
4.	The Respondents and the Information Requested
4(a)	Respondents
4(b)	Information Requested
5.	The Information Collected  -  Agency Activities, Collection Methodology, and Information Management
5(a)	Agency Activities
5(b)	Collection Methodology and Information Management
5(c)	Small Entity Flexibility
5(d)	Collection Schedule
6.	Estimating Respondent Burden and Cost of Collection
6(a)	Estimating Respondent Burden
6(b)	Estimating Respondent Costs
6(c)	Estimating Agency Burden and Costs
6(d)	Respondent Universe and Total Burden Costs
6(e)	Bottom Line Burden Hours and Costs
6(f)	Reasons for Change in Burden
6(g)	Burden Statement


	List of Attachments
      Attachment 1  -  Screenshots of draft survey
      Attachment 2  -  Federal Register Notices
      Attachment 3  -  Description of statistical survey design
      Attachment 4  -  Responses to public comments

      PART A OF THE SUPPORTING STATEMENT

1.	Identification of the Information Collection
1(a)	Title of the Information Collection 
A survey to improve economic analysis of surface water quality changes 

1(b)	Short Characterization (Abstract)

Researchers and analysts in EPA's Office of Research and Development (ORD), Office of Water (OW), and National Center for Environmental Economics (NCEE) are collaborating to improve EPA's ability to perform benefit-cost analysis of changes in surface water quality (lakes, rivers, and streams).  We are requesting approval to conduct a survey that will provide data critical to that effort.
Several non-market valuation methods can be used to estimate the economic benefits of improving environmental quality, but they often require more time and resources than are available to federal agency analysts in a regulatory context.  Benefit transfer can provide reasonably accurate estimates of economic benefits under certain conditions with fewer resources and far less time.  Federal agencies often rely on benefit transfer when analyzing the economic impacts of environmental regulation.  In conducting benefit-cost analyses of surface water quality regulations, however, it has become apparent that some important aspects of people's preferences about water quality are highly uncertain, which has made it necessary for analysts to make untested assumptions to fill the data gaps.  This information collection is necessary to provide insight into those relationships and improve EPA's and other federal agencies' ability to perform empirically driven benefit transfer in regulatory analysis. 
Analysts in the Office of Policy, the Office of Water, and the Office of Research and Development are developing an integrated assessment model of water quality and economics designed to be flexible and modular, such that it could eventually be capable of estimating benefits for a wide range of surface water changes.  The data collected with this survey will inform that effort.  Analysts elsewhere in the EPA and other federal agencies, such as USDA, may also be able to use the results of this study to improve benefit transfer in other surface water quality applications.
The survey will be administered electronically to a probability-based panel of respondents via the internet.  All survey modes have limitations and are subject to sampling bias, but an internet-based survey provides advantages in efficiency and accuracy over other collection modes. Probability-based internet panels recruit respondents via address-based sampling from the USPS Delivery Sequence File, which is the most complete sample frame available for the U.S. A detailed discussion of probability-based internet panels and the advantages they provide appears in Part A Section 4(a) of this Supporting Statement.  
The total national burden estimate for all components of the survey is 2,040 hours. The burden estimate is based on 120 completed pretest surveys and 6,000 completed main surveys. Assuming 20 minutes are needed to complete the survey, the total respondent cost comes to $61,200 for the pre-test and main survey combined, using an average wage rate of $30.00 per hour (U.S. Department of Labor, https://www.bls.gov/news.release/empsit.t19.htm).

2.	Need for and Use of the Collection
2(a)	Need/Authority for the Collection
When time and other resources are not sufficient for an original study of non-market benefits, benefit transfer can provide reasonably accurate estimates under certain conditions.  Benefit transfer involves adapting the results of previously conducted studies to approximate the conditions of the current application.  This can be done using just a few studies if there are existing studies that closely match the current application, or it can be done in a meta-analytic framework in which many studies are combined to generate a conditional distribution of estimates.  In either case, transferring benefits from other studies may require analysts to parameterize certain relationships that were not included in the original studies.  EPA has identified several such relationships for which there are little or no data to inform the parameterization, so analysts must make assumptions to complete the benefit transfer. Below are descriptions of how each of the identified relationships is addressed under EPA's most recent water quality benefit estimation approach and how this proposed data collection effort would improve EPA's estimates.
One of the key relationships this information collection will address is that between a household's willingness to pay (WTP) and the distance to the improved resource.  Known in the literature as "distance decay" (when treated as continuous) or "extent of market" (when treated discretely), this relationship is critical in estimating benefits for surface water improvements. As the extent of market for a given improvement increases (decreases), the number of households with positive WTP could increase (decrease) exponentially and may determine whether population centers are included in (excluded from) the aggregation of benefits. Distance decay, holding all else equal, will determine the magnitude of WTP for households near the edge of the extent of market compared to households closer to the improved resource. The most recent EPA analyses of water regulations (U.S. EPA 2015, 2020) have assumed that households are willing to pay for water quality improvements within 100 miles of their home, with no distance decay within that range, but are not willing to pay for improvements outside of that range.  EPA justifies the 100-mile assumption by noting that households are more likely to be familiar with waterbodies and their qualities within that distance (U.S. EPA 2020, Appendix G p.4). This assumption about extent of market can have a substantial impact on total WTP.  Corona et al. (2020) show that increasing the extent of market (with no distance decay) from 62 miles to 100 miles increases total WTP by a factor of four in their case study. It is likely, however, that WTP declines gradually as distance increases because of the increasing availability of substitutes and rising travel costs. It is also possible that WTP is non-zero at greater distances than we have assumed in the past because people may hold existence value for aquatic resources they do not use.  This study will collect data that will allow EPA to improve our understanding of distance decay and extent of market.  
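For illustration, the following sketch (Python; the household counts, distances, baseline WTP, and decay rate are purely hypothetical, not EPA estimates) shows how aggregate WTP responds to the extent-of-market cutoff and to a gradual distance-decay specification:

    import numpy as np

    # Hypothetical households at varying distances (miles) from an improved waterbody.
    rng = np.random.default_rng(0)
    distances = rng.uniform(0, 300, size=100_000)
    BASE_WTP = 50.0  # hypothetical WTP ($/household/year) at distance zero

    def total_wtp(distances, cutoff_miles, decay_rate=0.0):
        """Aggregate WTP given an extent-of-market cutoff and exponential distance decay."""
        in_market = distances <= cutoff_miles
        return float(np.sum(BASE_WTP * np.exp(-decay_rate * distances) * in_market))

    # Flat WTP within 100 miles, zero beyond (the U.S. EPA 2015, 2020 assumption).
    print(total_wtp(distances, cutoff_miles=100))
    # Narrower extent of market, as in the Corona et al. (2020) comparison.
    print(total_wtp(distances, cutoff_miles=62))
    # Gradual decay with a wide extent of market and non-zero values at distance.
    print(total_wtp(distances, cutoff_miles=300, decay_rate=0.01))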
Another important relationship for which EPA analysts lack data is that between WTP and the quantity of water affected by a regulation. EPA regulations that affect water quality usually impact waters across large regions of the U.S., and economic theory is clear that the quantity of waters impacted should influence WTP for a given set of improvements. EPA estimates the marginal willingness to pay (MWTP, the WTP for a one-unit improvement in the WQI) using a meta-analysis of 65 stated preference (SP) studies. None of those studies, however, address water quality changes at a national scale, and most of the studies value improvements within a single state (U.S. EPA 2020). To address this data gap, EPA captures the quantity of water improved through a ratio of the affected reach miles to the total number of reach miles present in the original studies' sampling area (the "sub proportion" variable). To estimate economic benefits from surface water regulations, that proportion is then calculated for each census block group impacted by the regulation, and those values are used for benefit transfer.  Proxying for quantity in this way obscures not only the direct relationship between WTP and the amount of water improved but also the tradeoffs that SP survey respondents make between the magnitude of the quality changes and the amount of water affected. This study will be one of the first to directly model the relationship between willingness to pay for water quality improvements and the quantity of water improved, and the only SP study to do so at a national scale. In each choice situation, this study will give respondents information on both the size of the water quality change and the amount of water affected by the change.  This will allow direct estimation of WTP for large scale improvements and how respondents trade off quality and quantity.  The information collected under this study will allow EPA analysts to better estimate this relationship and improve the accuracy of benefit transfers.  
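A minimal sketch of the current proxy calculation (Python; the reach-mile figures are hypothetical) may clarify how the proportion variable is constructed for each census block group:

    import pandas as pd

    # Hypothetical reach miles for three census block groups affected by a rule.
    blocks = pd.DataFrame({
        "block_group":          ["A", "B", "C"],
        "affected_reach_miles": [12.0, 3.5, 40.0],
        "total_reach_miles":    [150.0, 90.0, 400.0],
    })

    # Ratio of affected reach miles to total reach miles (the "sub proportion"
    # variable), used as the quantity proxy in the benefit transfer.
    blocks["sub_proportion"] = blocks["affected_reach_miles"] / blocks["total_reach_miles"]
    print(blocks)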
The third key relationship this information collection will address is between use values, such as those motivated by recreation, and existence value related to aquatic biological condition.  There are at least two reasons that these two sources of value should be considered separately when estimating benefits of water quality improvements. First, as described above, economic theory implies that use values should decline with distance from a resource, while there is no such implication for existence values.  How existence values change as distance from the resource increases is thus an empirical question that has not been addressed in the literature.  Second, the underlying biophysical drivers of the quality of recreational experiences at a resource and the ecological health of that resource are different. For example, fecal coliform can create human health hazards through water contact recreation but has no impact on aquatic biological condition. So, the two sources of value may not be affected in the same way under a given set of water quality changes. EPA's current approach to estimating water quality benefits relies on a single indicator, the water quality index (WQI, e.g., U.S. EPA 2015, 2020). The meta-analysis on which EPA's benefit function is based uses the WQI as the common metric to synthesize the findings of the underlying studies and as the only quantitative measure of water quality in the WTP function. While this is a necessary assumption given the current body of valuation literature, its implications for total WTP are unknown because no suitable stated preference study has estimated use and existence values separately. EPA economists, ecologists, and environmental assessors have identified an appropriate metric of aquatic ecological condition for stated preference valuation through focus group testing. This data collection effort will use two separate water quality indicators on the survey to capture use and existence values in a separable way.  The data collected with this survey will provide insight into the relationship between use and existence values and the implicit assumption that they can be captured with a single indicator of water quality. 
The project is being undertaken pursuant to section 104 of the Clean Water Act dealing with research. Section 104 authorizes and directs the EPA Administrator to conduct research into subject areas related to water quality, water pollution, and water pollution prevention and abatement. This section also authorizes the EPA Administrator to conduct research into methods of analyzing the costs and benefits of programs carried out under the Clean Water Act. The data collected under this request will help EPA and other practitioners better inform assumptions regarding the above relationships when conducting benefit-cost analyses. 

2(b)	Practical Utility/Users of the Data
 
This research effort could be used to inform future benefit-cost analysis of surface water improvements.  The existing approach, while representing the current state of the science, makes either explicit or implicit assumptions to fill data gaps.  The data collected with this survey and the subsequent analysis will provide empirical insights for those relationships.  The data collected with this effort will also be useful to the academic research community by addressing research questions that have not received enough attention in the published literature. EPA is collaborating with several research teams that received STAR Grants to develop our respective studies in such a way as to permit external validity tests and cross-validation studies. 

3.	Non-duplication, Consultations, and Other Collection Criteria
3(a)	Non-duplication

There are many studies in the environmental economics literature that quantify the benefits or willingness to pay (WTP) associated with various types of surface water quality and aquatic ecosystem changes. Newbold et al. (2018) identified 51 stated preference valuation studies of water quality. The majority (92%) of these studies examined water quality changes at a local or state level.   Transferring values beyond the study areas is only appropriate when the characteristics of the resource, the baseline and post-policy conditions, and the affected populations are sufficiently similar in the policy case (U.S. EPA 2010). Aggregating values from numerous independent studies may overestimate benefits unless one properly accounts for income constraints and the availability of substitutes.  
Although meta-analyses have synthesized the results from the stated preference literature on surface water quality (Van Houtven et al. 2007, Johnston et al. 2017, Newbold et al. 2018) and have been used for benefit transfer in analyses of EPA policies (U.S. EPA 2015, 2020), such approaches require simplifying (and often untested) assumptions. As discussed in Part A, section (2) of this ICR, such assumptions include distance decay and extent of market for household WTP, how quantity affects WTP, and the use of a single metric to capture multiple sources of value from water quality.  The data collected with the current survey and the subsequent analysis will provide empirical evidence pertaining to those relationships.  
There are two existing nationwide stated preference studies of water quality improvements, but features of their study designs limit their usefulness in EPA benefit transfer and meta-analysis.  The first, by Carson and Mitchell (1993), asked respondents to value nationwide improvements in water quality using a stated preference survey.  Water quality was expressed using the "water quality ladder," which communicates water quality in terms of whether the water is considered suitable for boating, fishing, and swimming. They elicited respondents' WTP for three different changes, leading to all lakes and rivers in the U.S. reaching a minimum rating of boatable, fishable, or swimmable.  
Although the water quality (WQ) ladder is salient to respondents and can be easily communicated, there are two key limitations when measuring water quality with this common metric.  The first is that the WQ ladder focuses on human uses by construction, and more specifically water-based recreation. Yet, some aspects of water quality that contribute to an improved recreational experience are not correlated with aspects of water quality that contribute to the health of an aquatic ecosystem.  For example, fecal coliform contamination can make water contact recreation dangerous but has no impact on the aquatic ecosystem.  Eliciting values for improvements expressed solely in terms of the WQ ladder may confound sources of value that the public holds for aquatic ecosystem health. A key feature of the survey described in this ICR is that the collected data will allow EPA to test whether the public holds values for aquatic ecosystem health that are independent of those for recreational uses. 
The second limitation of using the WQ ladder metric is that its discrete nature makes it difficult to estimate any value households may hold for intra-category improvements in water quality.  This is important for regional and national water quality policies that, although broad in spatial scope, may on average yield relatively small incremental improvements in water quality in terms of magnitude.  The data from this information collection will allow EPA to assess what, if any, value the public places on improvements that do not cross a human-use threshold.  
Carson and Mitchell also asked respondents to value improvements that would lead waters to uniformly meet a minimum rating across the country.  They did not specify baseline water quality or improvement levels at a finer spatial scale. This simplification, while making the valuation exercise more manageable for respondents, ignores sub-national variation and prevents inferences about how WTP may decay with distance.  Both are important factors when estimating WTP for national and regional improvements in surface water quality and will be explicitly addressed by the survey instrument described in this ICR, thus better informing assumptions underlying EPA's benefit transfer methodologies. 
More recently, Viscusi et al. (2008) conducted a national stated preference study of lakes and rivers in the U.S. They asked respondents a series of iterative choice questions utilizing a measure of the percent of rivers and lakes within the U.S. that are rated "Good" versus "Not Good."  Although this "percent of waters" measure is continuous with respect to the quantity of waters, it is restricted to the discrete "good" versus "not good" rating describing water quality and is thus subject to similar limitations as studies utilizing the water quality ladder metric: the discrete nature of the metric makes it difficult to estimate household values for intra-category improvements.  Finally, Viscusi et al. asked respondents to value water quality improvements to water bodies within 100 miles of their place of residence, which may capture the preponderance of use values but, like the Carson and Mitchell study, does not allow inferences about how WTP decays with distance.  As described above, the data collected from the survey instrument described in this ICR will provide information to better inform EPA's underlying assumptions regarding distance decay, public values for relatively small (intra-category) water quality improvements, and whether the public values water quality changes independent of recreational uses. 
In summary, although two nationwide stated preference studies have been conducted, and both were novel at the time, the study proposed in this ICR addresses several gaps that will provide a deeper and more comprehensive understanding of public values for improvements in surface water quality at a regional and national level. In fact, Carson and Mitchell (1993) state that "... more research is needed to determine how benefits change with small changes in water quality and, in particular, the spatial location of those changes."  These are precisely the gaps that the study described in this ICR intends to fill.  The proposed study includes continuous measures of water quality, but at the same time delineates thresholds communicating ordinal categories of quality, similar to those presented by Carson and Mitchell (1993), Viscusi et al. (2008), and numerous other stated preference studies of water quality (e.g., Anderson and Edwards, 1986; Desvousges et al., 1987; Johnston et al., 1999). In doing so, the proposed study will be able to estimate any public values for intra-category improvements in water quality. The proposed study design will also elicit respondents' WTP for water quality improvements in different regions, including the region where a respondent lives and regions at various distances from the respondent's residence.  This feature of the study design will allow for explicit modeling of how WTP decays with distance from the waterbodies being improved and allow EPA to determine the extent of market for those improvements.  This will provide a more detailed and accurate examination of public values for water quality improvements, and better inform assumptions underlying EPA's current benefit transfer methodologies. Lastly, the proposed study includes two separate water quality measures, one focusing on human and recreational uses and the other on aquatic biological condition. Accounting for both dimensions of water quality independently allows for a more complete understanding of public values for changes in surface water quality.  

3(b)	Public Notice Required Prior to ICR Submission to OMB

First Round of Public Comment
In accordance with the Paperwork Reduction Act (44 U.S.C. 3501 et seq.), EPA published a notice in the Federal Register on September 9, 2021, announcing EPA's intent to submit this application for a new Information Collection Request (ICR) to the Office of Management and Budget (OMB), and soliciting comments on aspects of the information collection request.  A copy of the Federal Register Notice (86 FR 53960) is attached at the end of this document (See Attachment 2).  One set of comments was submitted by the National Association of Clean Water Agencies (NACWA). The comments submitted by NACWA and EPA's response to those comments are included in Attachment 4.   


3(c)	Consultations

Consultations with Scholars: On November 2 and 3, 2017, EPA's Office of Research and Development hosted a workshop in Narragansett, RI, for STAR grantees conducting stated preference studies of surface water quality benefits.  At this meeting, EPA presented the plans for our study and received feedback from several academic researchers with expertise in this area.  The feedback was predominantly supportive of our research goals, and workshop participants provided many useful suggestions regarding our survey design and estimation approach.

A second meeting with the same STAR grantees was held in Ithaca, NY on April 2 and 3, 2019 to report progress.  EPA received additional feedback on this effort and discussed survey design features that would complement the STAR grantees' projects and provide opportunities for cross-validation of our results.  

Consultations with Respondents: As part of the planning and design process for this collection, EPA conducted a series of ten focus groups and 24 one-on-one cognitive interviews. Five of the focus groups took place in Arlington, VA; one in Alexandria, VA; two in Phoenix, AZ; and one in Chicago, IL.  Focus group locations were chosen to collect a diverse set of perceptions and experiences with freshwater lakes, rivers, and streams. Cognitive interviews were conducted in Alexandria and Arlington, Virginia.  Early focus group sessions were used to explore how respondents think about water quality and quantity, how to communicate measures of those attributes, and to identify a list of widely recognized waterbodies and the main features that make those waters widely recognized.  Later sessions and cognitive interviews were employed to test draft versions of the survey. These consultations with potential respondents were critical in identifying sections of the questionnaire that were unnecessary or lacked clarity, and in producing a survey instrument that would be meaningful and comprehensible to most respondents. The later focus group sessions and the cognitive interviews were also helpful in estimating the amount of time respondents would need to complete the survey instrument. While completion times varied, most participants completed the survey in 20 minutes or less. The focus group sessions and cognitive interviews were conducted under OMB Control # 2090-0028.

Consultations with Experts: The survey instrument benefited from consultation with three leading scholars specializing in stated preference surveys for estimating benefits associated with water quality improvements and environmental quality more broadly: Dr. Catherine Kling, Professor and Faculty Director, Cornell Atkinson Center for Sustainability; Dr. Daniel Phaneuf, Professor, Department of Agricultural and Applied Economics, University of Wisconsin-Madison; and Dr. Robert Johnston, Director, George Perkins Marsh Institute, Professor, Department of Economics, Clark University.

EPA also held a meeting in Corvallis, OR on March 26 and 27, 2019 with limnologists, ecologists, and economists in the Office of Research and Development.  The purpose of the meeting was to refine the metric for aquatic ecological health that would be used on the survey and to discuss data needs for the study design and data analysis phases of the project.

Survey Design Team: Dr. Christopher Moore at the U.S. Environmental Protection Agency serves as the project manager for this study. Dr. Moore is assisted by Dr. Matthew Massey, Dr. David Smith, Dr. Bryan Parthum, and Dr. Wes Austin, all of whom are with the U.S. EPA's National Center for Environmental Economics and have extensive experience in stated preference methods and other non-market valuation approaches.  Also assisting on the project are Dr. Paul Ringold, Dr. Steve Paulsen, Dr. Matthew Heberling, Dr. Nathanial Merrill, and Dr. Steve Newbold.  Drs. Ringold and Paulsen are aquatic ecologists in the Office of Research and Development's Western Ecology Division.  Drs. Heberling and Merrill are economists with extensive non-market valuation experience in the Office of Research and Development's Center for Environmental Measurement and Modeling.  Dr. Newbold is an associate professor of economics in the School of Business at the University of Wyoming with a background in ecology and extensive experience in non-market valuation.
	
3(d)	Effects of Less Frequent Collection

The survey is a one-time activity. Therefore, this section does not apply.

3(e)	General Guidelines

The survey will not violate any of the general guidelines described in 5 CFR 1320.5 or in EPA's ICR Handbook.

3(f)	Confidentiality
All responses to the survey will be kept confidential to the extent provided by law. EPA's detailed survey questionnaire will not ask respondents for personal identifying information, such as names or phone numbers. Instead, each survey response will receive a unique identification number. The contractor will not provide respondents' addresses to EPA. The latitude and longitude of respondents' place of residence will be perturbed using an algorithm that randomly shifts geocoded latitude and longitude to maintain confidentiality. The extent of the perturbation ranges from 100 to 2,000 feet to the north or south, and east or west, depending on the population density in a respondent's Census Block Group. The only geographic identifier then provided to EPA is the Census Block FIPS code corresponding to the perturbed coordinates.  
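The sketch below (Python) illustrates the general form of such a perturbation; the density threshold and the feet-to-degrees conversion are hypothetical stand-ins, not the contractor's actual algorithm:

    import math
    import random

    FEET_PER_DEGREE_LAT = 364_000  # approximate feet per degree of latitude

    def shift_limit_ft(people_per_sq_mile):
        """Hypothetical rule: denser Census Block Groups get smaller maximum shifts."""
        return 100 if people_per_sq_mile > 5_000 else 2_000

    def perturb(lat, lon, max_shift_ft):
        """Randomly shift a point up to max_shift_ft north/south and east/west."""
        dlat = random.uniform(-max_shift_ft, max_shift_ft) / FEET_PER_DEGREE_LAT
        feet_per_degree_lon = FEET_PER_DEGREE_LAT * math.cos(math.radians(lat))
        dlon = random.uniform(-max_shift_ft, max_shift_ft) / feet_per_degree_lon
        return lat + dlat, lon + dlon

    # Example: a respondent in a dense block group gets at most a 100-foot shift.
    print(perturb(38.86, -77.09, shift_limit_ft(8_000)))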
Prior to taking the survey, respondents will be informed that their responses will be kept confidential to the extent provided by law. The name and address of the respondent will not appear in the resulting database, preserving the respondents' identity. The survey data will be made public only after it has been thoroughly vetted to ensure that all other potentially identifying information has been removed.

3(g)	Sensitive Questions

The survey questionnaire will not include any sensitive questions pertaining to private or personal information, such as sexual behavior or religious beliefs.

4.	The Respondents and the Information Requested
4(a)	Respondents 
Eligible respondents for this stated preference survey will be U.S. civilian, non-institutionalized individuals, age 18 years and older, who reside in the 48 contiguous United States and the District of Columbia. Respondents will be selected randomly from an existing probability-based Internet panel. EPA has not chosen a specific panel to administer the survey, but the KnowledgePanel maintained by Ipsos is a likely candidate.  Such panels employ an address-based sampling (ABS) methodology using the Delivery Sequence File of the United States Postal Service  -  a database with full coverage of all delivery points in the United States. ABS is considered a promising alternative to random digit dialing (Dillman et al. 2009) because of the number of cell phone-only households in the United States: as of December 2018, more than half (55.2%) of U.S. households were cell phone-only. The sample will thus represent all households regardless of their phone status.  Probability-based internet panels also include households that did not previously have internet access by providing such households with a web-enabled device (e.g., a tablet) and internet access. Additionally, panel membership tends to correspond well to demographic characteristics (i.e., benchmarks) from the U.S. Census. Probability-based internet panels were chosen as an appropriate sample frame for this data collection because of their inclusion of cell phone-only households, households without prior access to the internet at home, and the overall statistical representativeness of their research panels relative to U.S. Census benchmarks.
Probability-based internet panels offer several other advantages over alternative sample frames:
 Data available for non-response bias analysis: Demographic data for panel members selected for this study, whether or not they choose to participate, will be available to EPA. This information will be used to evaluate the representativeness of the sample responding to the main survey. Each sample that is drawn will be compared with U.S. Census benchmarks such as age, gender, race, ethnicity, educational attainment, employment status, and household income. In addition to demographic data, Internet panels are typically able to provide data that may be important in assessing the representativeness of our sample across non-demographic dimensions, including variables describing internet access, smartphone and computer use, and employment. More importantly, EPA will likely have access to variables that may be correlated with preferences towards environmental improvements, and thus willingness to pay, including information on environmental organization membership, interests in outdoor sports (e.g., fishing), and political affiliation. This will allow EPA to evaluate non-response bias in terms of possible systematic differences between panel members who are selected for this study and participate and those who are selected but do not participate (see Part B Section 2).
 Less likely to encounter self-selection bias: Self-selection bias is a concern when potential respondents agree to complete a survey because they have a particular interest in the subject matter.  If this tendency is pervasive in the sample, the result could be a non-representative sample producing a biased characterization of our variables of interest.  Compared to mail-push-to-web and opt-in samples, members of internet panels take surveys on a wide variety of subjects on a regular basis and are less likely to self-select into the sample for strategic reasons (i.e., to influence the outcome of the survey).  Further, Dillman (2014) finds that households tend to be skeptical of unsolicited mailings inviting potential respondents to an Internet survey. This leads to low participation rates in mail-push-to-web surveys compared to established Internet panels.
 Lower consumption of fossil fuels and paper: A probability-based internet panel survey will not consume additional paper or printing resources or burn additional fossil fuels delivering recruiting materials, whereas a mail-push-to-web approach would.  The energy consumed to complete the electronic survey is the same under either approach. 

4(b)	Information Requested
(i)	Data items, including recordkeeping requirements
EPA developed the survey based on findings from a series of ten focus groups and 24 cognitive interviews conducted as a part of the survey instrument development process (OMB Control Number: 2090-0028). Focus groups provided valuable feedback that allowed EPA to iteratively edit and refine the questionnaire and to eliminate or improve imprecise, confusing, or unnecessary questions. In addition, later focus groups and cognitive interviews provided useful information on the approximate amount of time needed to complete the survey instrument. This information informed our burden estimates. Cognitive interviews were also used to assess and improve the appearance of the survey on computers, tablets, and smart phones.  Focus groups and cognitive interviews were conducted following standard approaches in the literature, as outlined by Desvousges et al. (1984), Desvousges and Smith (1988), Opaluch et al. (1993), Schkade and Payne (1994), and Johnston et al. (1995). 
EPA has determined that all questions in the survey are necessary to achieve the goals of this information collection (see Part A, section 2 for the list of objectives).  The current draft of the survey is included in Attachment 1 and is described in more detail in Part B of this ICR. EPA will conduct extensive nonresponse analysis, including benchmarking, which uses demographic information to compare the respondents and non-respondents to the general U.S. population and evaluate potential non-response bias. Other data on respondents and nonrespondents available from the sample frame and believed to be correlated with our key variables of interest will be used to estimate response propensity, which will be compared among subgroups. 
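As an illustration of the planned propensity analysis, the sketch below (Python, using simulated frame data; the variable names are hypothetical examples of frame variables) fits a response-propensity model of the kind described:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated sample frame: one row per selected panel member, with an
    # indicator for whether the member completed the survey.
    rng = np.random.default_rng(1)
    n = 5_000
    frame = pd.DataFrame({
        "responded":      rng.integers(0, 2, n),
        "age":            rng.integers(18, 90, n),
        "college":        rng.integers(0, 2, n),
        "env_org_member": rng.integers(0, 2, n),  # frame variable plausibly tied to WTP
    })

    # Logistic response-propensity model on frame variables.
    X = sm.add_constant(frame[["age", "college", "env_org_member"]])
    propensity = sm.Logit(frame["responded"], X).fit(disp=0)
    print(propensity.params)  # large coefficients would signal differential nonresponse
    frame["p_respond"] = propensity.predict(X)  # inverse propensities can serve as weights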
The following is an outline of the major sections of the survey.  Screen numbers refer to the numbered screenshots in Attachment 1.
Survey purpose and description. (Screen 1) This screen describes the purpose of the survey and what the respondent can expect as they complete it.  This screen also informs the respondent that the EPA is conducting the survey to collect data that may be used to inform future policy decisions affecting water quality and household expenses.  Knowledge that the data collected could be used to influence an agency's actions is required to satisfy the conditions of consequentiality and incentive compatibility (Carson and Groves 2007).  When met, these conditions imply that respondents answer stated preference questions truthfully.
Describing outdoor water quality. (Screen 2) The next screen introduces the respondent to two different categories of water quality  -  water recreation and aquatic biodiversity.  Text is also provided to distinguish between surface water quality and drinking water quality, the latter of which is not addressed in this survey. 
Questions about visits to waterbodies. (Screens 3-4) Next, a series of questions regarding visits to and recreational use of waterbodies is presented. These questions are meant to prime respondents to think about surface water quality and provide data that can be compared to other national surveys, which serve as a benchmark to assess the representativeness of our sample.  Question 1 asks if the respondent has taken a trip to a lake, river, or stream in the last 12 months and will be used to identify users of lakes, rivers, and streams in the data analysis.  Question 2 asks if they have gone fishing in freshwater. These questions were borrowed directly from the National Survey of Fishing, Hunting, and Wildlife-Associated Recreation, which is a national survey conducted by the U.S. Fish and Wildlife Service and the U.S. Census Bureau.  Question 3 asks how many single-day trips to a river, lake, or stream the respondent has taken in the past 12 months.  Question 4 asks for the main purpose of the last trip taken.  Question 5 asks how many miles the respondent traveled for the last single-day trip they took to a lake, river, or stream.  Questions 3, 4, and 5 also appear on the National Survey on Recreation and the Environment, which is a national telephone survey.  Including these questions on the survey described in this ICR will provide a comparison to other national surveys administered via different modes and using different sampling strategies, helping to assess the representativeness of our sample regarding characteristics that are key to our main study objectives.  
Features describing hypothetical policy options.  (Screens 5-17) The next screen introduces the four features or attributes describing the hypothetical policy options presented to respondents. This is followed by a series of screens describing each feature in more detail. The four features are: 
    How much water would be affected? The following screen explains that surface area, measured in square miles, is the metric that the survey will use to convey the quantity of water affected by the policy.  Respondents are shown two illustrative figures to explain how surface area of lakes, rivers, and streams is calculated.  
    What improvements in water quality could you expect? The next five screens describe the two water quality metrics. The first three screens focus on the Recreation Score, which is intended to capture human use aspects of value. The following two screens describe the Aquatic Biodiversity Score, which is intended to capture the value a respondent may hold that is independent of their use of waterbodies (i.e., existence value).  For both water quality scores, the survey first presents some basic background information, followed by additional details about the various factors that affect each metric.  Then text is provided that describes how to interpret the scores, supplemented with a graphic showing the scale and linking it to a graphical interpretation. These visual aids help reinforce interpretation of the scores and allow respondents to more easily make the cognitive link to specific water quality attributes. 
    Where would policies be implemented? The Policy Regions screen shows a map of the major watersheds of the contiguous U.S. and explains that the policy or policies could be implemented in one or more of these regions.  The following two screens use a map of the major watersheds to show the water surface area, current Recreation Score, and current Aquatic Biodiversity Score in each watershed.  
    How much would each policy cost your household? The last policy attribute presented to respondents is the cost to their household if the policy were put in place.  On the first of two screens, respondents are given several examples of how pollution would be reduced.  The following screen describes the payment vehicle as an increase in annual federal, state, and local taxes to fund those policies.  Focus group testing and comments from experts led to the conclusion that framing the payment vehicle as a somewhat abstract bundle of taxes minimizes respondent concerns regarding the feasibility of the payment vehicle and the effectiveness of government spending (i.e., whether the tax revenue would really be spent to improve water quality as described). The increase in annual taxes is stated to be in effect for five years.  Five years was chosen because a longer time period was judged by focus group participants to be too constraining to future generations and fraught with uncertainties regarding the future political climate and government priorities. At the same time, a one-time payment was judged to be not enough to sustain the specified environmental improvements over time.  Based on focus group testing and cognitive interviews, a five-year tax increase seemed to be a broadly acceptable middle ground between these two conflicting concerns.  The final screen in this section shows a simple line graph to help respondents visualize how improvements would occur gradually over time and eventually level off to sustainable, permanent levels.  
      
Range of improvements and costs. (Screen 18) This screen presents a table showing the full range of attribute improvements and costs a respondent will see in the subsequent choice questions.  Recent literature suggests that presenting respondents with the full range of attribute changes, referred to as a visible choice set, helps provide a better frame of reference and can reduce the anchoring and ordering effects often found in stated preference studies (Bateman et al., 2004; Johnston et al., 2017). The environmental attribute levels chosen in the experimental design for this study are meant to ensure coverage of the range that one may reasonably expect from actual regional or national water quality policies. 
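For concreteness, the sketch below (Python) enumerates candidate policy profiles over a set of attribute levels and draws a block of six scenarios for one survey version. The levels shown are hypothetical placeholders; the actual experimental design is documented in Attachment 3 and Part B of this ICR:

    import itertools
    import random

    # Hypothetical attribute levels for the choice experiment.
    regions           = ["own region", "adjacent region", "distant region"]
    surface_area_sqmi = [500, 2_000, 8_000]
    recreation_gain   = [0, 5, 10]          # change in Recreation Score
    biodiversity_gain = [0, 5, 10]          # change in Aquatic Biodiversity Score
    annual_cost       = [25, 75, 150, 300]  # $/household/year for five years

    profiles = list(itertools.product(
        regions, surface_area_sqmi, recreation_gain, biodiversity_gain, annual_cost))

    # Drop dominated profiles (positive cost with no improvement), then draw a
    # block of six choice scenarios for one survey version.
    candidates = [p for p in profiles if p[2] + p[3] > 0]
    random.seed(42)
    for scenario in random.sample(candidates, 6):
        print(scenario)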
Choice questions. (Screens 19-28) Directly before the choice questions is a screen that reinforces consequentiality and reminds respondents to consider their budget constraint when answering the questions.  Emphasizing these points has been shown to effectively reduce hypothetical bias in stated preference surveys (Carson and Groves 2007).  Respondents are then presented with six choice question scenarios.  Each choice scenario uses a map to indicate the policy region where the changes would take place.  Below the map, respondents are shown a comparison of the status quo option, in which the Recreation and Aquatic Biodiversity Scores remain unchanged and no additional costs are incurred by the household, and a "policy" option, in which one or both Scores improve and some positive cost is incurred by the household. Consistent with conventional economic theory, respondents are expected to choose the option they prefer. The status quo option is always available, which is necessary for appropriate welfare estimation (Adamowicz et al. 1998). Following standard approaches (Opaluch et al. 1993, 1999; Johnston et al. 2002a, 2002b, 2003), each question is separated by a reminder to consider each choice scenario independently, disregard previous questions, and not add up costs or benefits across scenarios.  This reminder is included to avoid biases associated with sequence aggregation effects (Mitchell and Carson 1989). 
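Because each question pits the status quo against a single policy option, responses can be analyzed with a two-alternative conditional logit, which is equivalent to a binary logit on attribute differences. The sketch below (Python, on simulated responses; the preference parameters are hypothetical and used only to generate data) shows how marginal WTP would be recovered as the negative ratio of an attribute coefficient to the cost coefficient:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated choice data: each row is one question; covariates are policy-
    # minus-status-quo attribute differences (status quo differences are zero).
    rng = np.random.default_rng(2)
    n = 10_000
    d = pd.DataFrame({
        "d_recreation":   rng.choice([0, 5, 10], n),
        "d_biodiversity": rng.choice([0, 5, 10], n),
        "d_cost":         rng.choice([25, 75, 150, 300], n),
    })
    # Hypothetical true preference parameters, used only to simulate choices.
    v = 0.08 * d["d_recreation"] + 0.05 * d["d_biodiversity"] - 0.01 * d["d_cost"]
    d["chose_policy"] = (rng.gumbel(size=n) - rng.gumbel(size=n) < v).astype(int)

    fit = sm.Logit(d["chose_policy"], d[["d_recreation", "d_biodiversity", "d_cost"]]).fit(disp=0)
    b = fit.params
    print("MWTP per Recreation Score point:   $", round(-b["d_recreation"] / b["d_cost"], 2))
    print("MWTP per Biodiversity Score point: $", round(-b["d_biodiversity"] / b["d_cost"], 2))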
Debriefing questions.  (Screens 29-35) The last section of the survey presents a series of debriefing questions. The first set of questions entails Likert scale responses (1 through 5) indicating how much a respondent agrees or disagrees with a statement, or to what degree various factors affected their vote in the choice scenarios.  The responses to these questions will be used to econometrically assess various potential biases and to test alternative econometric models.  More specifically, responses to these questions will be used to assess hypothetical bias, consequentiality, warm-glow, protest responses, existence and bequest motivations for non-use values, and motivations for use values (e.g., option value). This is followed by a series of questions regarding employment, housing tenure, and language spoken.  Responses to this final set of questions are directly comparable to questions in the U.S. Census Bureau's American Community Survey, and thus provide additional variables for assessing the representativeness of the sample. 
	

(ii)	Respondent activities
EPA expects individuals to engage in the following activities during their participation in the survey: 
 Go online to answer a web survey
 Review the brief background information provided in the beginning of the survey document 
 Complete the survey questionnaire.
A typical respondent is expected to take 20 minutes to complete the survey. This estimate is derived from focus groups and cognitive interviews in which participants completed the current draft of the survey.
5.	The Information Collected  -  Agency Activities, Collection Methodology, and Information Management
5(a)	Agency Activities

The draft survey was developed by EPA's National Center for Environmental Economics with contractor support provided by Abt Associates Inc. (55 Wheeler Street, Cambridge, MA 02138) under EPA contracts EP-C-13-039 and EP-W-17-009. EPA will require further contractor support to administer the survey and perform quality control on the data. 
Agency activities associated with the survey consist of the following: 
 Developing the survey questionnaire and sampling design
 Programming the web survey
 Pretesting the on-line survey instrument
 Emailing or mailing pre-notification and survey invitation 
 Sending a reminder email or postcard 
 Placing a reminder phone call or mailing a final reminder letter 
 Data cleaning
 Analyzing survey results 
 Conducting the non-response bias analysis to test and (if needed) correct for such bias 
EPA will primarily use the survey results to improve the Agency's existing benefit transfer methodologies by better informing assumptions regarding distance decay of household WTP, the marginal rate of substitution between water quality and quantity, and whether households value water quality improvements independently of improvements in designated recreational activities. (See Part A, sections (2) and (3) for details). EPA will also examine how responses change as the respondents advance through the valuation questions and how certain design features can reduce associated biases (framing and ordering effects).  

5(b)	Collection Methodology and Information Management
EPA plans to implement the proposed collection as a voluntary, web-based survey. A web-based electronic survey reduces burden because respondents only see questions relevant to them, based on their responses to prior questions (i.e., skip patterns). An internet survey will also use checks and prompts to minimize missing or incorrectly entered information. Most importantly, an electronic survey allows for the complex experimental design that is needed to inform assumptions on distance decay of WTP and the marginal rate of substitution between water quality improvements and the quantity of waters impacted. More specifically, an electronic survey allows EPA to automate the various survey versions respondents receive, thus facilitating a design that entails sufficient variation in the location of a policy region, the quantity of waters impacted, baseline water quality, and the stated improvement in quality. Additionally, minimizing framing and ordering effects requires that respondents be prevented from changing their answers to previous policy questions. This cannot be done with a paper survey but can easily be implemented in an electronic survey.  

5(c)	Small Entity Flexibility
This survey will be administered to individuals, not businesses. Thus, no small entities will be affected by this information collection.

5(d)	Collection Schedule
A typical collection schedule for an Internet panel survey is shown in Table A1.

                 Table A1: Schedule for Survey Implementation

Pretest Activities                                            Timing
Random sample drawn from panel participants                   Weeks 1 to 3
Advance email sent to notify potential respondents            Week 4
Email notification sent when the survey becomes available     Week 5
Email reminder for those who have not completed the survey    Week 7
Phone reminder                                                3 days after email reminder
Cleaning of data file                                         Week 13
Delivery of data                                              Week 14

Full Survey Implementation
Random sample drawn from panel participants                   Weeks 15 to 17
Advance email sent to notify potential respondents            Week 18
Email notification sent when the survey becomes available     Week 19
Email reminder for those who have not completed the survey    Week 21
Phone reminder                                                3 days after email reminder
Cleaning of data file                                         Week 26
Delivery of data                                              Week 27



6.	Estimating Respondent Burden and Cost of Collection
6(a)	Estimating Respondent Burden

Subjects who participate in the survey during the pre-test and main surveys will expend time on several activities. EPA will use similar materials in both the pre-test and main stages of the survey.  It is reasonable to assume the average burden per respondent activity will therefore be the same for subjects participating during either pre-test or main survey stages. 
Based on focus groups and cognitive interviews, EPA estimates that on average each respondent taking the survey will spend 20 minutes (0.33 hours) reviewing the introductory materials and completing the survey questionnaire. Assuming that 120 respondents complete the survey, the national burden estimate for respondents to the pre-test survey is 40 hours. During the main survey stage, the national burden estimate for these survey respondents is 2,000 hours assuming that 6,000 respondents complete the survey. 
These burden estimates reflect a one-time expenditure in a single year.
Table A2: Respondent Time Burden

              Time per Survey   Number of Respondents   Total Time Burden
Pretest       20 minutes        120                     40 hours
Main Survey   20 minutes        6,000                   2,000 hours
Total         --                6,120                   2,040 hours

6(b)	Estimating Respondent Costs
(i)	Estimating Labor Costs
According to the Bureau of Labor Statistics, the average hourly wage for private sector workers in the United States in February 2021 was $30.00 (U.S. Department of Labor, https://www.bls.gov/news.release/empsit.t19.htm). Assuming an average per-respondent burden of 0.33 hours (20 minutes) for individuals completing the survey and an average hourly wage of $30.00, the average cost per respondent is $10.00. The total time cost for the 6,120 individuals expected to complete the survey in both the pre-test and main implementation would be $61,200.  
EPA does not anticipate any capital or operation and maintenance costs for respondents.
Table A3: Respondent Labor Costs

              Respondent Time Burden   Labor Cost ($30.00 per hour)
Pretest       40 hours                 $1,200
Main Survey   2,000 hours              $60,000
Total         2,040 hours              $61,200
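The arithmetic behind Tables A2 and A3 can be reproduced directly; a minimal check in Python:

    MINUTES_PER_SURVEY = 20
    WAGE = 30.00  # average hourly wage, February 2021 (BLS)

    total_hours = 0.0
    for stage, n in [("Pretest", 120), ("Main survey", 6_000)]:
        hours = n * MINUTES_PER_SURVEY / 60
        total_hours += hours
        print(f"{stage}: {hours:,.0f} hours, ${hours * WAGE:,.0f}")
    print(f"Total: {total_hours:,.0f} hours, ${total_hours * WAGE:,.0f}")
    # Pretest: 40 hours, $1,200
    # Main survey: 2,000 hours, $60,000
    # Total: 2,040 hours, $61,200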

(ii)  Estimating Capital and Operations and Maintenance Costs
No capital or operations and maintenance costs will be incurred. 

(iii)  Capital/Start-up vs. Operating and Maintenance (O&M) Costs
No capital or operations and maintenance costs will be incurred. 

(iv)  Annualizing Capital Costs
No capital or operations and maintenance costs will be incurred. 

6(c)	Estimating Agency Burden and Costs

Agency costs arise from staff costs and contractor costs. Based on the GS pay schedule, EPA estimates an average hourly cost of $42.02 for GS12, $49.96 for GS13, $59.04 for GS14, and $69.06 for GS15. EPA then multiplied hourly rates by the standard government benefits multiplication factor of 1.6. EPA staff have expended 1,200 hours developing and testing the survey instrument to date and are expected to spend an additional 2,000 hours finalizing the survey instrument, analyzing data, writing reports, reviewing intermediate products, and managing the project more generally. EPA staff costs are shown in Table A4 below and total $295,794. 

Table A4: Agency Burden Hours and Costs

GS Level   Hours   Hourly Rate   Hourly Rate with Benefits   Total
12         400     $42.02        $67.23                      $26,892.00
13         800     $49.96        $79.94                      $63,952.00
14         1,000   $59.04        $94.46                      $94,460.00
15         1,000   $69.06        $110.49                     $110,490.00
Total      3,200   --            --                          $295,794.00
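The staff cost total in Table A4 can be reproduced from the loaded hourly rates (Python; the loaded rates are the base GS rates multiplied by the 1.6 benefits factor, rounded as shown in the table):

    # Hours and loaded hourly rates (base GS rate x 1.6 benefits factor, rounded
    # as shown in Table A4), keyed by GS level.
    staff = {12: (400, 67.23), 13: (800, 79.94), 14: (1_000, 94.46), 15: (1_000, 110.49)}

    total = 0.0
    for gs, (hours, loaded_rate) in sorted(staff.items()):
        cost = hours * loaded_rate
        total += cost
        print(f"GS-{gs}: {hours} h x ${loaded_rate:.2f} = ${cost:,.2f}")
    print(f"Total EPA staff cost: ${total:,.2f}")  # $295,794.00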

 Abt Associates Inc. (55 Wheeler Street, Cambridge, MA 02138) provided support in developing the draft survey under EPA contract EP-C-13-039 with funding of $85,328 for 234 hours. EPA will require further contractor support to administer the electronic survey and perform quality control on the data collected.  EPA anticipates the additional contract support to require funding of $256,000 for 394 hours. Total contractor support is 628 hours, with a total cost of $341,328. 

Contractor Task                 Hour Estimate   Cost Estimate
Project Management              160             $27,393
Focus Groups and Interviews     172             $75,735
Survey Programming              59              $18,500
Coding Plan Development         40              $4,300
Survey Pretest                  91              $18,400
Main Survey                     106             $197,000
Total                           628             $341,328

Contractor cost estimates include non-labor costs, such as focus group facility rental and purchase of the sampling frame.

6(d)	Respondent Universe and Total Burden Costs
See 6(a) and 6(b). 

6(e)	Bottom Line Burden Hours and Costs

The following tables present EPA's estimates of the total burden and costs of this information collection for the respondents and for the Agency. The bottom-line cost for the two together is $698,322.


Table A5: Total Estimated Bottom-Line Burden and Cost Summary for Respondents

Affected Individuals               Burden (hours)   Cost
Pre-test Survey Respondents        40               $1,200
Main Survey Respondents            2,000            $60,000
Total for All Survey Respondents   2,040            $61,200




Table A6: Total Estimated Burden and Cost Summary for Agency

Affected Individuals                          Burden (hours)    Cost (2012$)
EPA Staff                                     3,200             $295,794
Contractor Support for Survey Development     234               $85,328
Expected Additional Contractor Support        394               $256,000
Total Agency Burden and Cost                  3,828             $637,122
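
The bottom-line figure in section 6(e) follows directly from Tables A5 and A6; an illustrative check:

    # Bottom line (illustrative): respondent cost (Table A5) plus the three
    # agency cost components (Table A6).
    respondent_cost = 61_200
    agency_cost = 295_794 + 85_328 + 256_000  # staff + prior + expected contractor
    print(f"${respondent_cost + agency_cost:,}")  # $698,322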


6(f)	Reasons for Change in Burden

This is a new collection. The survey is a one-time data collection activity.

6(g)	Burden Statement

EPA estimates that the public reporting and recordkeeping burden associated with the survey will average 0.33 hours per respondent (i.e., a total of 2,040 burden hours divided among 120 pre-test respondents and 6,000 main survey respondents). Burden means the total time, effort, or financial resources expended by persons to generate, maintain, retain, or disclose or provide information to or for a Federal agency. This includes the time needed to review instructions; develop, acquire, install, and utilize technology and systems for the purposes of collecting, validating, and verifying information, processing and maintaining information, and disclosing and providing information; adjust the existing ways to comply with any previously applicable instructions and requirements; train personnel to be able to respond to a collection of information; search data sources; complete and review the collection of information; and transmit or otherwise disclose the information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB control numbers for EPA's regulations are listed in 40 CFR part 9 and 48 CFR chapter 15.
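
The 0.33-hour average is simply total burden hours divided by the total number of respondents; an illustrative one-line check:

    # Average burden per respondent (illustrative):
    # total burden hours / total number of respondents.
    print(2_040 / (120 + 6_000))  # 0.3333..., reported as 0.33 hours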
To comment on the Agency's need for this information, the accuracy of the provided burden estimates, or suggested methods for minimizing respondent burden, including the use of automated collection techniques, consult the public docket EPA has established for this ICR under Docket ID No. EPA-HQ-OA-2019-0292, which is available for online viewing at www.regulations.gov or for in-person viewing at the Office of Water Docket in the EPA Docket Center (EPA/DC), EPA West, Room 3334, 1301 Constitution Ave., NW, Washington, DC. The EPA Docket Center Public Reading Room is open from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number for the Reading Room is (202) 566-1744, and the telephone number for the Office of the Administrator Docket is (202) 566-1752.

An electronic version of the public docket is available at www.regulations.gov. This site can be used to submit or view public comments, access the index of the docket's contents, and view those documents in the public docket that are available electronically. Once in the system, select "search," then key in the Docket ID Number identified above. Comments may also be sent to the Office of Information and Regulatory Affairs, Office of Management and Budget, 725 17th Street, NW, Washington, D.C. 20503, Attention: Desk Officer for EPA. Please include the Docket ID No. EPA-HQ-OA-2019-0292 and OMB Control Number 2090-NEW (previously marked as 2080-NEW in the 60-day public notice) in any correspondence.


