U.S. ENVIRONMENTAL PROTECTION AGENCY  

+ + + + +

HUMAN STUDIES REVIEW BOARD (HSRB)

+ + + + +

PUBLIC MEETING

+ + + + +

WEDNESDAY,

OCTOBER 24, 2007

+ + + + +

	The meeting convened at 8:30 a.m. in the Conference Center at One
Potomac Yard, 2777 South Crystal Drive, Arlington, Virginia, Celia B.
Fisher, PhD, Chair, presiding.

HSRB MEMBERS PRESENT:

CELIA B. FISHER, PhD, Chair

STEPHEN BRIMIJOIN, PhD, Vice Chair

GARY L. CHADWICK, PharmD, MPH, CIP, Member

JANICE CHAMBERS, PhD, DABT, Member

SUSAN S. FISH, PharmD, MPH, Member

SUZANNE C. FITZPATRICK, PhD, DABT, Member

DALLAS E. JOHNSON, PhD, Member

KYUNGMANN KIM, PhD, CCRP, Member

KANNAN KRISHNAN, PhD, Member

MICHAEL D. LEBOWITZ, PhD, Member

LOIS D. LEHMAN-MCKEEMAN, PhD, Member

JERRY A. MENIKOFF, MD, Member

REBECCA PARKIN, PhD, MPH, Member

SEAN M. PHILPOTT, PhD, Member

RICHARD B. SHARP, PhD, Member

HSRB STAFF:

PAUL I. LEWIS, PhD, Designated Federal Officer

HSRB CONSULTANTS:

GERMAINE BUCK LOUIS, PhD, Division of

	Epidemiology, Statistics & Prevention

	Research, National Institute of Child

	Health & Human Development

P. BARRY RYAN, PhD, Department of

	Environmental and Occupational Health,

	Rollins School of Public Health, Emory

	University

EPA STAFF:

JOHN CARLEY, Office of Pesticide Programs

LARRY CUPITT, PhD, Office of Research and

	Development

DEBBIE EDWARDS, PhD, Director, Office of

	Pesticide Programs

ROY FORTMANN, PhD, Office of Research

	and Development

GEORGE GRAY, PhD, EPA Science Advisor

WILLIAM JORDAN, Office of Pesticide

	Programs

NANCY McCARROLL, Office of Pesticide

	Programs

WARREN LUX, MD, Human Subjects Research

	Review Official, Office of the Science

	Advisor

PUBLIC COMMENT:

JUDITH HAUSWIRTH, PhD, Keller & Heckman, LLP

DOUGLAS T. RICHARDS, American Pacific

	Corporation

	TABLE OF CONTENTS

Convene Meeting and Identification of

	Board Members, Celia Fisher, Ph.D.

	(HSRB Chair)

Welcome, George Gray, Ph.D. (EPA Science

	Advisor)

Opening Remarks, Debbie Edwards, Ph.D.

	(Director, Office of Pesticide

	Programs [OPP], EPA)

Meeting Administrative Procedures, Paul

	Lewis, Ph.D. (Designated Federal

	Officer [DFO], HSRB, OSA, EPA)

EPA Follow-up on HSRB Recommendations, Mr.

	William Jordan (OPP, EPA)

Scientific and Ethical Approaches for

	Observational Exposure Studies

	EPA Draft Document Scientific and

	Ethical Approaches for Observational

	Exposure Studies,

	Larry Cupitt, PhD,

	Office of Research and Development

Public Comments

Board Discussion

Science and Ethics of Sodium Azide Study

	Nancy McCarroll (OPP, EPA)

	John Carley (OPP, EPA)

Public Comments

Board Discussion

Adjournment

	P R O C E E D I N G S

	8:39 A.M.

		DR. FISHER:  Okay, so let's get started with our meeting today.  A
couple of our members are in transit, so I'm sure they'll be here any
minute.  

		So why don't we go around.  I'm Dr. Celia Fisher, chair of the HSRB
and I think we'll go round, Steve, do you want to introduce yourself
first? 

		DR. BRIMIJOIN:  I'm actually delighted to see that, finally, after 15
years of volunteer service for the US EPA, I am known by my correct
name.  Today, after 15 years -- and I thank you, Paul, and whoever
else was involved in this -- I'm Stephen Brimijoin.  I'm at the Mayo
Clinic in Pharmacology and I'm vice chair of this Board.

		DR. FISHER:  And I wanted to thank Steve for doing an excellent job,
from what I read in the transcripts, as well as from what everybody else
said in terms of leading the meeting last time.

		DR. PARKIN:  I'm Rebecca Parkin.  I'm a Professor of Environmental and
Occupational Health at the G.W. University School of Public Health and
Health Services.

		DR. FISH:  I'm Sue Fish from Boston University, Schools of Public
Health and Medicine.

		DR. CHADWICK:  Gary Chadwick from the University of Rochester.

		DR. BUCK LOUIS:  I'm Germaine Buck Louis.  I'm a reproductive and
perinatal epidemiologist and Branch Chief of the Epidemiology Branch at
NICHD and currently Acting Director of the Division of Epidemiology,
Statistics and Prevention Research.

		DR. RYAN:  I'm Barry Ryan.  I'm Professor of Exposure Assessment and
Environmental Chemistry at the Rollins School of Public Health at Emory
University.

		DR. KRISHNAN:  Kannan Krishnan, toxicologist, Professor at University
of Montreal.

		DR. JOHNSON:  My name is Dallas Johnson.  I'm Professor Emeritus,
Kansas State University.  I'm a statistician.  I'm a Visiting Professor
at Texas A&M University this fall semester.

		DR. KIM:  My name is KyungMann Kim.  I hope it doesn't take 15 years
for people to pick up my first name.  I'm a statistician from the
University of Wisconsin at Madison.

		DR. LEBOWITZ:  I'm Michael Lebowitz, a retired Professor of Medicine,
Epidemiology, Bio Stat. and Environmental Health Sciences in Tucson,
Arizona.

		DR. CHAMBERS:  I'm Jan Chambers.  I'm Professor in the College of
Veterinary Medicine at Mississippi State University and I'm a pesticide
toxicologist.

		DR. SHARP:  I'm Richard Sharp.  I'm Director of Bioethics Research at
Cleveland Clinic.

		DR. PHILPOTT:  I'm Sean Philpott.  I'm Policy and Ethics Director for
an NGO here in Washington called PATH and in deference to poor Steve
there, what most people don't even realize is I actually do have two
names because my entire family calls me Jean-Michel.

		DR. FISHER:  And young man, I will stop calling you Kim, I promise.

		(Laughter.)

		I wanted to especially welcome Dallas and Rebecca, who are joining us.
We're looking forward to your contributions.

		Dr. Gray, would you like to make some remarks?

		DR. GRAY:  Well, sure.  I would like to extend my good morning and
welcome to everyone here.  We really do appreciate your being here
together with Debbie Edwards who is the Director of the Office of
Pesticide Programs.

		We, as we say every time, and I think Rebecca and Dallas, you'll find
out, we appreciate your hard work.  This is a very hard working board
and we appreciate the time and effort that you put into this, the
requirement of your thought and your effort and you all have day jobs
and other things to do and we appreciate very much your service to the
Agency.

		I also want to thank everyone in EPA who worked hard to get ready for
this meeting.  I think we'll find it to be both an interesting and I
hope a very productive meeting.  So I want to thank them for all of
their hard work.

		To go back, I do want to welcome our three new board members, only two
of whom are with us today.  You've already met Rebecca
Parkin and Dallas Johnson.  Our other new member is Dr. Ernest Prentice.
 Dr. Prentice isn't able to join us this week, but he will be 

-- he's looking forward to being part of this Board and I think you'll
find him to be a terrific contributor and Rebecca and Dallas, we look
forward to the benefit of your expertise.  So thank you very much for
being willing to serve.

		I do also want to thank Dave Bellinger.  You know, he's had to resign
from the Board for a variety of just conflicting professional
commitments and just a lot of those calls on his time, but I want to
thank him for all of the effort that he put in.  He was an important
contributor and brought a very useful perspective to the Board.

		Now today does mark kind of a new milestone for the Board.  In the
roughly year and a half that the Board has been working, almost all
of our effort has been on the review of these third-party intentional
dosing exposures of non-pregnant, non-nursing people, and these are
studies for submission to the Agency for use primarily in the pesticide
program.  Now today, we're going to be asking you to bring your
expertise and your knowledge and your perspective to a new topic, a
product prepared by the Agency's Office of Research and Development. 
It's of particular interest to me.  

		I think most of you know that in addition to being the Agency's
Science Advisor, I wear another hat in that I am the Assistant
Administrator for the Office of Research and Development.  So this has
been prepared in that group.  So this is our effort to develop a
framework, to develop procedures, to help with our investigators
thinking about, just thinking about ethical and scientific issues around
human observational studies.

		Unfortunately, I'm going to be in and out a little bit today, but this
is something that has the attention at the highest levels of ORD. 
Sitting immediately behind me is Kevin Teichman.  He's our Acting Deputy
Assistant Administrator for Science.  He has been very involved in
watching this project, understanding this project and is taking a close
personal interest in it.  

		So I am and Kevin is and in fact, all of the Office of Research and
Development are interested in your view of this topic and the thoughtful
advice that I know that you'll provide.  So I think it will be an
interesting new challenge for you and I really do appreciate your
willingness to take that on.

		This is a big thing for the Agency, to ensure that our observational
human exposure research continues to be done with both the most
up-to-date scientific and ethical standards.  And our National Exposure
Research Laboratory and in EPA speak, of course, we have an acronym and
we make it something you can say, so NERL is our group that both does a
lot of this work and is a leader for the -- really for the exposure
assessment community in these observational sorts of studies.  But what
we want to do is make sure that we have a document that looks at the
particular issues that EPA researchers need to consider as they plan and
as they implement these observational studies.  It's not meant to be an
official agency guidance document, but the idea is that it's a
reference, it's a resource, it's something that people, that
investigators and scientists can look at to help understand the way in
which they can approach these observational studies which are an
important part of our exposure assessment portfolio.

		Of course, there will also be some of that regular business where
you'll be looking at some already proposed and some completed human
studies, and again, there we look forward to your thoughtful review. 
This is an important part of helping to move things through to stay with
the particular framework that we've set for ourselves, for looking at
and using these sorts of studies and we appreciate your thought there.

		So Dr. Fisher, thank you for the opportunity to say a few words. 
Again, my thanks to the Board, not only for your on-going service, but I
am especially interested in this topic that you'll be taking on today. 
I look forward to your remarks and I want you to know that they'll be
considered very seriously by the Agency.  So thank you very much.

		DR. FISHER:  Thank you very much, Dr. Gray.

		Dr. Edwards, would you like to say something?

		DR. EDWARDS:   Just a few words, thank you, Dr. Fisher.

		I'd like to join Dr. Gray in welcoming the Board to EPA for this, what
is the eighth meeting of the Human Studies Review Board.  I was actually
very surprised to see that when I realized it was the eighth meeting. 
It seems like just yesterday the Board formed and began its work, but a
lot of good work has been done over those past seven meetings which we
appreciate very much.

		For most of you, we welcome you back to this conference facility and
this Human Studies Review Board meeting.  It's our pleasure to host this
meeting here in our very own building.  Please let us know -- you see a
lot of EPA people around here, please let us know if there's anything we
can do at all to support you during the meeting.

		I especially want to welcome the new members also.  Dr. Prentice
couldn't make it today.  We're sorry about that, but I hope to be able
to meet him at the next meeting.  

		We're looking forward to working with Dr. Parkin and Dr. Johnson. 
Welcome, again, to the Board.  I think you'll enjoy it.  I hope you will
anyway, very interesting issues.

		Once again, we'll be presenting a variety of topics on which we're
seeking advice and Dr. Gray has just explained the research to do with
observational exposure on which we're seeking advice.  That will be very
important as well to the pesticide program, so we'll be looking forward
to that advice from you.

		We'll also be presenting, from the pesticide program, new protocols and
completed studies.  You'll be looking at diverse types of human research
from field studies on insect repellents to a clinical study of a drug
from which we're deriving information on the potential toxicity of a
pesticide active ingredient.

		And finally, we're going to provide you with a progress report on our
work with the Exposure Task Forces.  

		I want to thank all of you for the effort you give to helping us
address these scientific and ethical issues around the conduct and
evaluation of human research.  We appreciate the time you devote to this
important work.  It is very important to us.  It is very important to
the public at large, not only here in the meetings where you
participate, but also in all the preparation that you do and in the
drafting of the final reports, the outcomes of the meetings.

		Finally, I also want to thank Dr. Paul Lewis.  Paul, there he is, and
Crystal Rogers Jenkins.  They've done a huge amount of work in handling
all the administrative details of the meeting and also in helping my
staff prepare for our part of the meeting.  

		So there is a lot of work ahead and I don't want to delay you any
further from getting into the agenda, but just to reiterate my thanks
for your participation today.  Thank you.

		DR. FISHER:  Thank you, Dr. Edwards, and I did want to mention that
this time around we've really, based on the recommendations of the
Board, established a work group that works with us for agendas.  And
Bill Jordan, as always, has been especially helpful and responsive in
terms of those work groups and I think that it's been very helpful in
terms of being able to identify consultants, identify how we would best
address the kind of issues that Dr. Gray has been talking about as well
as the issues that you have.  So I think it's working very well and I
did want to thank Mr. Jordan for his great work.  

		Dr. Lewis?

		DR. LEWIS:  Thank you, Dr. Fisher, and I want to thank you first for
serving as chair for this week's meeting and also members of the Board
for attending.  Also, I would like to echo what we've heard from Dr.
Gray and Dr. Edwards welcoming our new members to this Advisory
Committee, Dr. Parkin and Dr. Johnson.  

		The Agency appreciates the time and diligent work of Board Members in
preparing for this meeting, taking into account your busy schedules and
other professional obligations.  And I also want to thank the public for
being here today observing this meeting, and my EPA colleagues for
all their hard work preparing for the challenging topics that we have
ahead.

		As the Designated Federal Officer for this meeting, I serve as a
liaison between the Board and the Agency and I am also responsible for
ensuring that provisions of the Federal Advisory Committee Act are met.  And
HSRB meetings are subject to all Federal Advisory Committee Act
requirements.  As a Designated Federal Officer for the Board, a critical
responsibility is to work with appropriate agency officials to ensure
that all necessary ethics regulations are satisfied. 

		In that capacity, Board Members and consultants, and I'll touch upon
that in a moment, are briefed on the provisions of the federal conflict of
interest laws.  In addition, each participant has filed a Standard
Government Financial Disclosure Report and I, along with our Deputy
Ethics Officer for the EPA Office of the Science Advisor, and in consultation
with the Office of General Counsel, have reviewed these reports to
ensure all ethics requirements are met.  

		The Board will be reviewing challenging issues over the next several
days and we have a full agenda from today until Friday and meeting times
are approximate.  Thus, we may not keep to exact times as noted due to
Board discussions and public comments.  We strive to ensure adequate
time for Agency presentations, public comments presented, and for Board
deliberations. 

		For Board Members, presenters, consultants, and public commenters, we
would appreciate you speaking into the microphone since this meeting is
being recorded.  In addition, copies of meeting materials and public
comments will be available on regulations.gov, the Agency website for
all materials associated with this meeting.

		For members of the public requesting time to make a comment, I must
remind you that your remarks are limited to five minutes.  And for those
that have not pre-registered, have not contacted me previously, please
notify me during a break or during lunch if you would like to make a
public comment in the course of the discussion. 

		As I mentioned, there is a public docket for this meeting and all
background materials, questions posed to the Board, and all reference
materials are available in the ORD docket via regulations.gov, and the
agenda outside this meeting lists the contact information for Board
documents.

		For members of the press who have questions about this week's meeting,
please contact me during the break or during lunch and I will direct you
to the appropriate person to address any questions that you may have.  

		I want to spend a moment talking about consultants for today's and
tomorrow's meeting.  We have several consultants joining the Board.  As
an example, for the observational discussion this morning and afternoon, I
would like to welcome Dr. Ryan and Dr. Buck Louis, who will be serving as
consultants in the associated discussions for our meeting.  

		As consultants to the Board, they are being asked to provide
specialized information or assistance to the Board concerning a specific
topic or topics for review.  Consultants are not considered members of
the Board, and do not participate in the Board's decision making
process, but I want to again welcome them and I'm looking forward to the
help they can provide the Board in terms of the advice that we are
seeking for today.

		As per the Federal Advisory Committee Act, minutes of this meeting are
prepared.  The minutes will be a description of matters discussed and
the conclusions reached by the Board.  As the Designated Federal
Officer, I will prepare such minutes, and they will be certified by the
chair, Dr. Fisher, within 90 days of our meeting.  They'll be available
on both the Human Studies Review Board website and on regulations.gov.

		The Board will also prepare a report as a response to the charge
questions that are listed on the agenda over the course of the three-day
meeting.  The Agency anticipates announcing Board review and subsequent
approval of this report via Federal Register notice.  We anticipate that
happening some time later this year; when the draft report and the final
report are made available, they will be posted on both the regulations.gov
website and the HSRB website.

		I'd like to close by thanking the Board for its participation and,
again, my colleagues for helping us prepare for the discussion.
I'm looking forward to a very interesting and enlightening discussion
over the next several days.

		Dr. Fisher?

		DR. FISHER:  Thank you, Dr. Lewis.

		Mr. Jordan, would you like to give us a summary?

		MR. JORDAN:  Certainly, thank you, Dr. Fisher, and I also want to
welcome everybody to the Human Studies Review Board.  It's become a
tradition and I think a very good one for someone from EPA to provide a
report back on what EPA has done with regard to the topics that were
considered at the previous meeting.  In that way, the Board Members know
how EPA responds to the advice that we have gotten and what actions we
are taking as a consequence.

		At the June meeting, there were five broad topics:  two insect
repellent protocols, one from the Carroll-Loye Biological Research
facility and one from ICR in Baltimore; several completed studies,
namely a completed toxicity study on acrolein and several clinical
studies on 4-aminopyridine; and finally, a range of science and ethics
issues, on which we spent a lot of time, related to handler exposure
research protocols that were proposed by the task forces.  I'll be
talking a little bit about each of those very briefly.

		For the Carroll-Loye protocol LNX-001, the Board concurred with EPA's
conclusion that the protocol was scientifically sound and could be used
to assess the repellent efficacy of the tested products, and concurred
with EPA's conclusion that the studies would meet applicable standards
of the human studies regulation.

		We have heard from Dr. Carroll that he has not initiated this
research.  We expect he will conduct the research and submit a report to
us and based on that we would proceed to review the study report.  

		The second protocol that the Board looked at was from Insect Control
Research.  The Board recommended several changes to the protocol for
both scientific and ethical reasons.  One of them was to use the first
confirmed landing with an intent to bite as evidence of failure of
efficacy.  The Board recommended performing serology testing on
aspirated mosquitoes, providing an explanation for the statistical
analysis plan, and adding a dosimetry study.  The Board's conclusion was
that if appropriately revised, the protocol should produce
scientifically reliable data.  The Board also concurred with EPA's
assessment that the submitted protocol would not meet applicable
requirements and recommended changes that would make it compliant with
our human studies regulations.

		Following the Board's review, ICR undertook to revise the protocol and
submitted it to EPA.  We looked at that and determined that further
revisions were necessary to address the recommendations that the Board
had made and our own assessment as well.  And after reviewing our
comments, ICR informed us that the sponsor decided not to pursue the
research.  So we do not anticipate that the proposed research will be
conducted and, as far as we're concerned, that sort of represents the end
of that one.

		We presented to the Board a completed inhalation toxicity study on
acrolein, a study that we found in the public literature.  And the Board
reviewed it with us and concluded that the study was sufficiently sound
from a scientific perspective to be used to estimate a safe level of
acute inhalation exposure to acrolein.

		There were a number of limitations on the study in terms of how much
we understood about it.  It was a very old study and fortunately Dr.
Lebowitz was familiar with the facility and was able to provide some
additional background information, but there was still quite a gap in
the information about the study.  There were some aspects of the
reported research that were troubling to people.

		The majority of the Board, however, concluded that there was not clear
and convincing evidence that this study was fundamentally unethical or
significantly deficient relative to the ethical standards prevailing
when the study was conducted.  

		So based on that advice and consistent with our own view of the same
nature, we have relied on the results of the Weber-Tschopp inhalation
toxicity study as the basis for estimating a safe level of acute
inhalation exposure to acrolein and use that as a point of departure for
our regulatory decision making.

		Then we presented a series of several clinical studies on
4-aminopyridine, and these studies were conducted to evaluate the
effectiveness of the compound as a pharmaceutical.  The Board concluded
that the clinical studies taken together were sufficiently sound from a
scientific perspective to be used to derive a point of departure for
estimating risk to humans.

		The Board also concurred with EPA's conclusion that there was not
clear and convincing evidence that the studies were fundamentally
unethical or significantly deficient with respect to the ethical
standards prevailing when the studies were conducted.  And as you might
expect, EPA has relied on the results of these three studies to evaluate
the potential human risk of 4-aminopyridine. 

		As you remember, we spent a lot of time in the last meeting on a
series of scientific and ethical issues related to the proposed handler
exposure research.  The Board's report, draft report, goes into
considerable detail about those issues.  At the risk of not doing
justice to the depth and content of the report, I'm going to attempt to
summarize the highlights of the Board's discussion, recognizing that it
is still in draft, and to tell you a little bit about where I think we are
headed with regard to those particular issues.

		In general, the Board concluded that the governing documents provided
by the task forces adequately addressed the potential risks and benefits
of the proposed research and agreed with EPA that as the particular
protocols were developed for specific scenarios, there would be a need
to particularize those discussions, those general discussions of risk
and benefits to make them appropriate for the specific exposure
scenarios.  And the Board, I'm sorry, the task forces understand that
responsibility and when they proceed with protocols, I'm confident that
they will do that.

		The second area that the Board discussed was the need for data on the
efficiency of residue removal procedures.  As you may remember, the
studies are conducted with passive dosimetry and in order to collect
information on the residues on the parts of the body that are not
covered by the passive dosimeters, the investigators proposed to do hand
washes and to wipe the subject's face and neck and measure the residues
in the wipes and washes.

		Not all of the residues are successfully removed through those
procedures and to the extent that that is true, there is potential for
underestimating potential exposure.  What EPA will do and we've
communicated this to the task forces is that we will review the results
from these procedures on a study-by-study basis to determine the need
for data on the efficiency of the residue removal procedures and the
need for possible correction of exposure estimates.

		If the hand washes and face and neck wipes are a relatively small
component or contributor of the overall exposure, we think that the need
for corrections and testing the efficiency would be much smaller and so
that's the reason that we will wait until we see the results of the
particular studies.

		There was also a discussion by the Board of whether to include
concurrent biomonitoring to complement the information gained from the
passive dosimetry measurements and the Board agreed with the EPA's
conclusions that routinely including concurrent biomonitoring was not
necessary.  Of course, on a case-by-case basis we will always have the
option to look at that and recommend that and so in the context of
particular protocols, it may be possible to revisit this issue but we
think in general that it is going to be a very unusual circumstance.

		The task forces provided their standard operating procedures, which
would be relatively consistent across all of the different scenarios;
these standard operating procedures are intended to ensure data quality.
The Board's review of those SOPs found them to be reasonably complete.
There were recommendations to add a couple of additional SOPs to address
data quality, sample integrity, and compliance with the standard operating
procedures.  And it's our understanding that the task forces expect to add
such SOPs.

		Okay, the next topic is one relating to both statistics and the kinds
of data that would be collected, specifically, whether or not the task
force research should collect data from repeated measurements on
different days of the same subject in order to be able to characterize
within worker variability.  The purpose, as indicated in the Board's
recommendation that EPA collect such data, is to allow the Agency to be
able to estimate what was called a usual exposure at the high end of the
distribution of individual exposures.

		And by usual exposure, we understood that the HSRB means the
distribution of the means of workers' multi-day exposures across the
handler population.  Some workers may be characteristically much more
careful in the way that they perform a particular handler activity. 
Other workers may be somewhat sloppier and so if you looked at those
workers day after day after day, you would find that they have, when
they're doing the same activity, they'd have differences in their
average over several days.

		We agree with that understanding and agree with the Board's conclusion
that repeated measures would be necessary in order to
characterize within-worker variability and therefore would be necessary
to be able to characterize the distribution of usual exposures.  EPA,
however, has decided that we don't need such information to assess the
risks associated with multi-day exposures.  We will use data collected
on multiple different individuals to characterize the
distribution across individuals and use values from that distribution in
our estimates of multi-day exposures.  And so we're not going to ask
the task forces to collect repeated measurements.

		The next topic was actually a very broad one involving the task
forces' proposed approaches to recruiting and enrolling subjects.  We
had recommended some improvements in the way that that would be done, but
basically thought the approach sounded as though it was compliant with
the investigators' responsibilities under the Human Studies regulations.
 The Board generally agreed, made a few other recommendations for
improvements and we will work with the task forces to address those
recommendations and we're optimistic that they will be addressed to the
satisfaction of both EPA and the Board.

		And then the last topic that I want to mention, but not discuss in any
great detail is the task forces' proposals to use a purposive diversity
sampling strategy.  The Board reviewed that and recommended that
sampling strategies that included more elements of randomization be
considered.  That is a specific agenda item that appears later in the
meeting and so I'll go into that in more depth when it appears on the
agenda.

		So thanks.

		DR. FISHER:  Thank you very much, Bill.  I think we all appreciate how
helpful it is to get this kind of summary to inform us as to how
well we're doing in terms of advising you and covering those issues.

		Are there any questions or comments from the Board?

		DR. JOHNSON:  I have a question.

		DR. FISHER:  Sure.

		DR. JOHNSON:  I've come across this term several times before in
reading some of the materials and I don't quite understand what it
means.  So if you'd explain what we mean by a point of departure?  Thank
you.

		MR. JORDAN:  Sure.  To answer your question, I'll need to describe
fairly briefly the approach that EPA uses to assess the risk of
pesticides.  

		The way that the Agency does that is to identify the types of hazards,
the types of adverse effects that a pesticide may have.  For the vast
majority of pesticides, most of the data
come from animal toxicity studies.  And we identify, out of all of the
different types of toxicity studies, the first effect that would occur
as a consequence of exposure.  And so by determining that effect and the
dose level at which it occurs, we are then confident that if that effect
never occurs, none of the other adverse effects observed in many of the
toxicity studies would occur either.

		That dose level is referred to as a point of departure.  And the way
that we endeavor to make certain that nobody ever experiences that
effect is that we apply uncertainty factors to that point of departure. 
Usually, when we're dealing with animal toxicity data, we apply an
uncertainty factor of 10x to take into account the possibility that
humans may be more sensitive to the adverse effect than
animals are.  And then we apply an additional uncertainty factor of 10x
to ensure that we're protecting the most sensitive subpopulation within
the broader human population and then we apply an additional 10x
uncertainty factor to account for the possibility that we may have
uncertainties with regard to the sensitivity of children or our
understanding of the exposure profile.

		And so in some cases we are therefore wanting to make sure that no
human -- that human beings are not getting exposed to a level greater
than one thousandth of the toxicity value associated with the first
toxic endpoint.  And we refer to that toxic endpoint as the point of
departure.

		When we're dealing with human data, it operates essentially the same
way, but we typically don't use the uncertainty factor for the
extrapolation from animal toxicity to human toxicity.
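
The uncertainty-factor arithmetic Mr. Jordan describes can be sketched in a few lines of code.  This is a minimal illustration only; the point-of-departure value and the factor list are hypothetical assumptions, not figures from any actual EPA assessment.

```python
# Minimal sketch of the point-of-departure arithmetic described above.
# All numbers are illustrative assumptions, not real assessment values.

def reference_dose(point_of_departure, uncertainty_factors):
    """Divide the point of departure (the dose at which the first
    adverse effect is observed) by the product of the uncertainty
    factors applied to it."""
    total = 1
    for factor in uncertainty_factors:
        total *= factor
    return point_of_departure / total

# Hypothetical point of departure from an animal study: 10 mg/kg/day.
pod = 10.0

# 10x animal-to-human extrapolation, 10x sensitive subpopulations,
# 10x children/exposure uncertainty, as described in the discussion.
rfd = reference_dose(pod, [10, 10, 10])

print(rfd)  # 0.01 -- one thousandth of the point of departure
```

With all three 10x factors applied, acceptable exposure is one thousandth of the dose at the first toxic endpoint; with human data, the animal-to-human factor would typically be dropped, leaving a divisor of 100.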

		DR. FISHER:  Thank you very much, Bill.  As usual, your explanations
are always phrased in a way that everybody can understand and I'm sure
the rest of the Board is happy to be reminded of what these mean.

		I think now we're going to turn to Dr. Cupitt and Dr. Fortmann.  

		Dr. Gray, once again, I think the Board is very appreciative of this
opportunity not only to comment, but to learn.  I think that one of the
reasons the Board was so eager to look at this document was that it is
related to the kinds of decisions that we have to make in terms of
recommendations and we have learned a lot just from reading the
document.

		And I also know that Dr. Cupitt and Dr. Fortmann worked really hard to
get the document in shape for this meeting and not only is it so well
written, but it's beautiful.  So I want to thank you, again, for that
and we're looking forward to your comments.

		Dr. Gray?

		DR. GRAY:  I just want to say that I really do think that this is a
great opportunity.  Part of what made us work very hard and have this
push was that we thought this Board was the perfect group to bring this
document to because of the mix of expertise and the ability to look at
both the scientific and ethical approaches here.

		So it did put us under a bit of a time squeeze and it was an awful lot
of work that was done by Larry and Roy and other folks at NERL and we
appreciate that.  And we also appreciate your willingness to take on
something a little new, a little different and we're really looking
forward to your advice.

		DR. FISHER:  Thank you.  So I don't know who's speaking first.  Okay,
Dr. Cupitt.

		DR. CUPITT:  Hi.  I'm Larry Cupitt.  It's pronounced a lot like the
little boy with the bow and arrow.  We just changed the spelling on the
end.  And if you want to call me Stupid Cupitt, it's okay, I'll answer.

		(Laughter.)

		I'm here with Roy Fortmann, and we're from the National Exposure
Research Laboratory and ORD, as Dr. Gray had explained.  And again, I'd
like to express my thanks to the Board for taking the time to look at
this document and to advise us because we take seriously both the
scientific and ethical standards that we need to live up to as we do
this and we think that the work that we do is critically important to
the Agency, as it proceeds.

		And part of that is defined here where we've got this picture and
identification of what the Agency's mission is, to protect human health
and the environment.  And the Agency in many cases adopts a risk
assessment/risk management paradigm for addressing and understanding the
challenges that may provide risks to human health and the environment. 
And exposure is critically important to both the risk assessment
paradigm and the risk management paradigm, because understanding the
distribution of the chemicals throughout the environment and
understanding how people come into contact with those is critically
important for the Agency in doing its job to protect human health and
the environment.  So we think that our role is really important.

		The document itself, we put it together to serve as a resource
principally for our own researchers and for ourselves in the National
Exposure Research Laboratory.  We wanted to look at what the standards
were today in terms of the scientific and ethical issues that we had to
consider in designing and implementing observational studies.  And we
wanted to provide both a resource for people to read and references so
that our scientists could go get more information as they tried to put
together a study design and a human subjects protocol so that we could
ensure that not only was the science of the highest quality, but that we
understood what the ethical standards were that we were dealing with,
because we go into people's private homes and those kinds of things and
we have real ethical issues that we have to understand and adapt to and
deal with and we wanted to make sure that those ethical standards were
upheld at the highest possible level.  So that's the purpose for this
document.

		Why did we want to do it?  Well, frankly, we take this seriously. 
I've been involved personally with human subjects research now since
basically the rules first came out and we take seriously the issue of
making sure that we safeguard the privacy, the integrity, that we do
honor to those people who join us in this research, because they're
really volunteers to participate in this research with us and we really
want to make sure that we respect them in a way that is appropriate.

		But the other reason why we are motivated to do this, I've already
sort of talked about and that is exposure science can make a difference.
We have listed here, and you saw in the document, some cases where
understanding people's exposure has led to agency actions, or actions in
other agencies, that really can make a difference in
protecting human health and the environment.

		I wanted to talk a little bit about some of the fundamental concepts. 
We define exposure as the contact of a chemical or a pollutant or some
kind of thing, but I'm going to use the word chemical here just as a
symbolic term for any kind of pollutant that we might be dealing with. 
The contact of an individual with that chemical through the air that we
breathe or the food or the things that we rub up against or the water
that we drink, and understanding and characterizing that exposure means
that we have to do two things.  We have to understand and characterize
the distribution of that chemical throughout the environment, especially
where people are likely to come into contact with it.

		And then we have to understand the activities that people undertake
that bring them into contact with those chemicals in the environment, so
that putting that together we can understand what that exposure is.  And
understanding this is both critical for assessing the risk.  Dr. Jordan
just talked about how there was a factor of ten that was added for some
uncertainty associated with exposure. 

		But it is also critically important to the Agency as they undertake
the risk management aspects of their role, because it is oftentimes
through changing the source of that chemical, that pollutant, or
changing the way that people come into contact with it that the Agency
actually reduces exposure and thereby reduces risk. 

		In the National Exposure Research Laboratory, we collect data through
exposure studies that are observational.  So what are observational
human exposure studies?  Studies in which we observe and measure
people's contact with chemicals, the chemicals that are already present
in their environment, and the study is undertaken under real world
conditions.  We go into people's homes and their offices.  We follow
them in their cars or the school bus as the child goes to school and
outdoors as they walk to school or do things and play in their yards as
they undertake their normal day to day activities.

		So I've already  mentioned that we characterize both the distribution
of the chemicals in the environment and the people's activities.  The
kinds of samples that we collect are shown here on this slide.  Now we
won't collect every one of these samples in every study necessarily.  It
sort of depends on what the objective of the study is.  But, you know,
it is not unusual, for example, with air samples to collect -- am I
doing it the wrong way?  That way?  There it is.

		To collect outdoor samples, either at a central site or even in a
person's backyard, to collect samples indoors with things or to put
samplers on people so that they can actually then go about their daily
activities and we get some idea of what they encountered as they went
through their daily activities.  We collect samples of food, water, and
beverages.  We'll do hand wipes and surface residue transfer studies. 
We'll collect dust, soil, biological or urine samples so that we can
understand both the distribution of the chemicals throughout the
environment, the chemicals in contact with the individuals, and we also
collect data on what the people are doing --  no, you went too far,
okay, thank you -- including time activity information, personal
activities, what kind of products they use, what kind of foods they eat,
and what things are present in their house.

		For example, do they open their windows a lot so there is a lot of air
exchange if we're interested in air pollution impacts on individuals. 
These describe what we do for characterizing the environment or in
people's homes and offices, etcetera, and what they do.  But one other
thing: we put together a video for internal use on what we do.  It's
basically a visual tour of one of our
divisions and I have spliced parts of that together to show you, to give
you a little more explanation of what we do in an observational exposure
study.  

		And there are two scientists here --

		(Video presentation begins.)

		DR. TULLVE:  "The role of our research program is to understand what
chemicals people come into contact with, at what levels, what the
sources of those chemicals are, and where and when, how often, and why
they come into contact with those chemicals in the every day
environment."

		DR. WILLIAMS:  "One of my jobs is to collect information that helps
the US EPA collect the necessary data to protect Americans.  This can be
very difficult work.  It requires a multitude of talents.  We use teams
of talented exposure scientists, physical scientists,
statisticians, modelers, and data specialists.  Our instrumental
monitoring capabilities enable us to detect and analyze environmental
pollutants at very low concentrations.

How do we accomplish our mission?  First, we identify human
populations for original or refined exposure assessments for airborne
pollutants.  We go to where these people live and investigate how they
live their lives.  The second involves developing the data collection
tools needed to better define human exposures.  This may require the
development of novel, low-burden, passive monitors, survey instruments
that collect time and activity information, or sophisticated source
apportionment and statistical analysis procedures.

Lastly, we perform field studies by developing the necessary human
subjects protocols, the information collection requests, and
peer-reviewed study designs to ensure the end products will be useful to
the Agency and to the scientific community at large for monitoring
personal exposures and residential indoor, residential outdoor, and
community-based measures.  This represents a unique situation in which
NERL can develop the tools and approaches to help communities and
regions better understand the usefulness of using data from central
community monitors to assess human health risks: by being able to
collect data where individuals live, understand where they go and what
they do, and determine the impact of their exposure sources on how
people live their lives.  We have a unique ability to put together the
puzzle."

		DR. TULLVE:  "To do this, we study the locations where people spend
significant amounts of time and for children, this includes their homes,
daycare centers or schools.  We collect the dust sample using a wet
wipe.  We collect it from different locations on the floor, on
surfaces, and toys, and any other location that the child may come in
contact with.  Or we would use the HDS3 to collect dust samples or in
some instances we would actually collect the dust sample by asking the
care giver to provide us with the vacuum cleaner bag.  In addition to
the environmental samples that we collect while in the home, we also
collect personal samples.  

		The personal samples include a cotton garment, so that we can estimate
the amount of chemical and dust that ends up on the child's skin during
normal play behavior.  We also collect a duplicate diet plate.  This
allows us to investigate the amount of chemical that a child may ingest
from what the child eats during the day.

		The last type of sample we collect is from the child himself.  We
collect a urine sample and of course, we need to know what the child has
been doing during the time that we're collecting these samples and we
collect this information in a time activity diary.  We use all of this
information, provide data to support our clients, to fill existing data
gaps for exposure and risk assessments for children, identify new and
emerging chemicals of interest, refine our methods, provide inputs to
the exposure and dose models, and develop future research directions for
new and emerging exposure issues."

		(End of video presentation.)

		DR. CUPITT:  I just wanted to mention that you were going to see two
principal investigators from the NERL laboratory, Dr. Nicole Tullve was
the female, and she just last week was co-chair of the International
Society for Exposure Analysis that was held in RTP in Durham.  And Mr.
Ron Williams was the second PI.  He's been with us for quite a while and
they're very much involved in observational exposure studies.

		Now having talked a little bit about what we in the NERL actually do
in observational exposure studies, I wanted to turn my attention to the
document itself that we're asking for your comments on and your review.

		We began drafting this in 2005 even, I think maybe, but certainly in
2006 with a series of stakeholder conversations, talking to people in
EPA program offices, talking to people in the regions and a variety of
other interested parties about what might go into such a document and
how we might use it as the Exposure Research Laboratory.

		In November, late November of last year, we convened an expert panel
to have a workshop to explore the issues that needed to be in such a
document and that expert panel advised us in terms of what the content
should include and address, those kinds of issues, and a little bit
about the structure and how we ought to go about devising this document.

		Based on that, we first prepared a draft of the document in six, seven
chapters.  It depends on whether you count the introduction or not, that
kind of stuff.  And it underwent internal review within EPA.  We sent it
out to people in the regions.  We sent it to program office people and
to scientists in ORD.

Based on their comments, that internal document was revised into this
external review draft and we are now at the next two bullets, here
asking for your review and your input and comment, but the document has
also been put out for public comment.  There's a docket and it will be
open until November 19, 2007 so we're also seeking public comment on
this document.

		Our hope is to take your comments and advice and the input from the
public and to revise this document to publish it as a final document for
our PIs to use so that we can move forward looking at conducting
observational human exposure studies in the future.

		So the charge, you got three charge questions and let me first say the
first question talks in specifics about ethical considerations.  We were
not excluding science in doing that.  We adhere to the concept that bad
science is unethical.  And so this first question, does each section
identify the major areas and issues where ethical considerations need to
be addressed, obviously overlaps with the scientific considerations too.

		We also asked the second charge question, are there additional
sources, have we missed anything?  Are there additional sources of
information that should be considered for inclusion in the sections?

		And then finally, can we improve how it's presented and do it clearly
so that our scientists can benefit from this.  So these were what we
were specifically trying to understand, basically.  Did we get it all in
there?  Did we get the right references?  And did we do it sufficiently
well that it will be useful?

		The next slide goes through briefly the organization of the document. 
The first section is the introduction, purpose and scope.  It talks a
little bit about what observational human exposure studies are and talks
about the ethical issues underlying observational human exposure
studies.  It sets out the purpose and what we're going to do for the
rest of the document.

		It's in section two where we begin to get into the meat of this and
here we talk about elements to be considered in study conceptualization
and planning, how we put this thing together, how we decide whether or
not it warrants moving forward with an observational human exposure
study.  What we have to do in terms of that problem conceptualization,
how we go through the planning to put together both a scientific study
design which also then is a major component in the IRB review or the
human subjects protocol, how we put these things together jointly so
that they can be reviewed both for scientific quality and for the
ethical standards which in this case is through an IRB.

I wanted to note also that, at the end, all of this is still subject to
the Agency's human subjects research review official:  Dr. Warren Lux
has the last word in terms of making sure that all of this is
appropriate before we can ever move forward.  And then we talk about
how we need to put together an organization, a procedure for
implementing and monitoring the study as it proceeds and how we move
forward from there.

		So that's what's in Chapter 2.

		Section three talks about ensuring protection of vulnerable groups and
who is a vulnerable group, basically looking at a number of the human
subjects rules and other considerations, justifying -- how you might
justify when you want to involve or exclude vulnerable groups; the
issues in the human subjects rules of minimal risk and vulnerability;
and then finally specifically talking about children, women, and other
potential vulnerable groups in this section.

		Section four deals with privacy and confidentiality in particular. 
There are real privacy issues.  We invade people's privacy in essence. 
We go into their homes and as a result, this is really critically
important to us in conducting these studies, in making sure that we --
they have volunteered to let us go into their homes.  So it's not an
unwarranted or unacceptable invasion of privacy, but it is one
nonetheless.  It's their exercising their will to allow us into that
private space, but we have to take that seriously.  And the issues of
confidentiality that derive from that, how we honor their privacy by
keeping things confidential in terms of information about them or even
that they're participating whenever that's possible is really a concern
to us.

		But then there are also these observations, things that will come
along where we may be obligated by law to report certain things that we
would observe and we have to be very clear in terms of dealing with
these individuals about making explicit those things that we can keep
confidential and those things that we cannot, and that gets
discussed in here.

		And of course, there are third party issues.  I mean, if we're going
into people's apartments and it's the landlord who is spraying with
pesticides or doing something else and we are observing that, our
observations may have an impact on that third party and we have to
understand those too.  So section four in this document goes into all of
those kinds of issues.

		Section five talks about basically creating an appropriate
relationship between the participant and the researcher and how to
actually respect them appropriately.  Here we've included discussions on
informed consent and the three pillars of informed consent: 
information, comprehension, and voluntary participation.  And there's a
long discussion in section four about all of this and about how you do
it through two-way communication.  It's
both giving out information but it's also listening so that you can
understand what their concerns are and that kind of stuff.  So informed
consent is not a document.  It is a process.  We need to get beyond just
looking are the pages correct, but are we doing this in a way that
respects our participants appropriately.

		We get into the issue of payments.  Now payments clearly in the
literature have been very controversial.  We try to quote and show both
sides in the document and to discuss this in terms of where the latest
thinking is and where the advice might be.  And then, of course, we talk
about research rights, grievance, supportive environment, recruitment
strategies, those kinds of things.

		But it is all really fundamentally built on this issue of
communications and doing the communications well.  In section six, we
then extend it from the participant to the community and to the
stakeholders.  There's a good bit of discussion here on community, who
is the community.  How do you get somebody who is actually a
representative of the community as a whole to participate with you to
help you understand the community concerns.  Building relationships and
trust, the use of community advisory boards, etcetera. 

		I'll be blunt.  We talk a lot about community.  We don't talk a whole
lot about stakeholders and frankly, some advice and input on
stakeholders here would be very useful to us in this section.  But
again, it's built on this issue of communication.  

		And finally, chapter seven, section seven, rather, talks about
communication.  But it talks about it mostly from the researcher
perspective and we recognize this.  We've already talked about building
these relationships with this bi-directional communication.

Here we're talking about, as a researcher, what kind of approaches do
you need to take, what do you need to think through, because it is
actually intended here for our researchers to use.  What do you have to
work through in developing a communications strategy and an
implementation plan?  What kinds of educational materials are really
effective for use in such a study?  And so that's where the
document ends.  It ends with section seven.  

		In summary, we think observational human exposure studies are
important because they collect real world information that determines
what chemicals people are coming into contact with, what the
concentration of those chemicals is, what the most important sources,
pathways, routes of exposure might be, and when, where, how often, and
why people come into contact with those chemicals.

		We think that's important for the agency in doing its job.  We think
that, we take seriously as managers and as scientists the protection of
our research participants.  Without them, we couldn't do our job and we
want to respect them appropriately.  So we want to live up not only to
the regulatory requirements, but also the spirit of the ethical
standards that motivate those ethical requirements.

		And I look forward to your comments and questions and observations and
your input to help us make that document one that really will be
worthwhile for our scientists as they try to plan and move forward on
conducting and implementing, designing and implementing observational
human exposure studies.  Thank you.

		DR. FISHER:  Thank you very much, Dr. Cupitt.

		Were you going to say anything?  Did you have a presentation, Dr.
Fortmann?

		DR. FORTMANN:  No, Larry has done an excellent job on presenting.

		DR. FISHER:  Okay, very good.  Then I think we'll go to Board
discussion and we'll do so within -- we've divided up the different
sections with lead discussants and secondary, so I think that's the way
to proceed.

		I'm sorry, are there any clarifying questions?  And then I'm going to
ask for public comment, sorry.

		DR. LEBOWITZ:  Mike Lebowitz here.  In the over 40 years I've been
working first with NAPCA and then EPA and seeing how this information is
used within the Agency and the different centers within ORD and also in
the different offices as I worked with different AAs and so forth, I am
constantly struck by the -- as it were the overlap of interest and the
importance of the use of this information in setting policy and setting
rules, regs, guidelines, using the information even to establish
international guidelines when representing the Agency internationally,
and so forth, in developing some of the same principles elsewhere.

		The thing that strikes me is that the -- that I'd like to have
clarified is in fact, how we clearly separate observation from other
activities in which we may engage.  Now it's very easy for me to do so
when I did animal tox.  It was obviously intentional when I did
controlled human exposure studies, inside and outside the Agency.  It
was very clear.  

		What we observe outside though and how we observe it and what the
participants go through in the process has always been a
serious issue from the scientific standpoint as well as the ethical
standpoint.  And I think that -- I mean I'd like some clarification, if
I might on that, because I view most of what is done within the Agency
prior to risk assessment and risk management suggestions as
observational in that it's only when we put people or animals in
chambers, as it were, that it's intentional.  And I wondered if you
could spend a little bit more time, Larry, in clarifying that from an
Agency standpoint?

		DR. CUPITT:  Well, I would first begin by quoting Bill Jordan who took
the lead in writing the human subjects regulations, the new ones that
were added recently for the Agency where the definitions involving
observational studies show up.  And that is there's a definition
basically in the law, 26406, I think, that talks about what intentional
exposure is.  And then it says everything else is observational.  And
so if a study doesn't fall under that, it is observational, by EPA and
its regulations.

Now that decision, frankly, is not mine.  It's not the researcher's.
It is indeed that of Dr. Warren Lux.  And so I would have to defer to
him in regard to that, but that's all I would really say at this point. 
Maybe ask Bill Jordan and Dr. Lux to comment.

		DR. LUX:  Sure, I'd like to say a couple of things.  First of all, I
think it's important that the Board distinguishes between the regulatory
definition of observational exposure studies and your intuition on what
it means to observe somebody in the natural environment.  And your
intuition about that is closer to what NERL is trying to do here.  And
all of those studies also meet the regulatory definition of
observational studies.

		But the regulatory definition was designed and here I will defer to
Bill, but my understanding is that the regulatory definition was
designed specifically to identify what it meant to be an intentional
exposure study so that other things then would fall under this broader
rubric of observational, including all of those things that NERL does
that are intuitively observational, plus other kinds of studies in which
there are interactions and interventions with individuals but those
interactions and interventions, although they might go beyond what's
intuitively observational, are not intentional exposure.

		Intentional exposure in the regulations is the introduction of a
substance, any substance, into the research by the researcher, and it
involves the study of that substance.  So that all kinds of studies in
which we are studying, in which the Agency is studying, ambient
exposures, exposures that would normally occur, but they are controlling
that exposure and introducing it at that mean ambient level, that's an
intentional exposure study.

		If the Agency does not do that, or if the researcher does not do that,
does not introduce the substance, does not control the introduction of
the substance into the research, then it meets the
regulatory definition of observational.

		Bill, would you like to expand a bit?

		MR. JORDAN:  I'll try.  I'll certainly talk a little bit.  The
definition that appears in a couple of locations in the regulations
says research involving intentional exposure of a
human subject means a study of a substance which Warren focused on in
which the exposure to the substance experienced by the subject
participating in the study would not have occurred but for the subject's
participation in the study.

So if you do observational research and wipe people's hands with an
alcohol-water mixture, they're being exposed to an alcohol-water
mixture, that's intentional exposure to the alcohol-water mixture, but
that's not the substance being studied.  So that fact doesn't -- that
kind of exposure does not make an observational study an intentional
dosing study.  I think that's the point that Warren was focusing on. 
It's the substance being studied that's where the focus ought to be.

		The second piece of that definition that I would like to talk a moment
about is the last part, the test that says the exposure would not have
occurred but for the subject's participation in the study.  So that's
phrasing that's intended to be fairly expansive in our thinking at EPA.

		The idea here is that if the researcher determines that the subject is
going to be exposed in Dr. Lebowitz' example, clearly, if the
researcher/investigator says to the subject you go sit in that air
chamber and we'll pump some particulate matter into the chamber and
you'll breathe it, that's exposure.  Or the insect repellant studies
where we have -- researchers proposing to apply a test material to the
subject's skin.  That's also a study that's an intentional exposure.  So
too skin absorption, metabolism studies where the researcher applies a
known amount of material to the subject's skin.  What gets trickier are
the kinds of studies that we're hearing about in the handler exposure
area.  And what EPA will do when we look at a protocol is ask ourselves
has the subject made all of the decisions about exposure, all of the
decisions that would influence or determine what exposure he or she
would get.  If the answer to that question is effectively yes, and if
those decisions are not influenced or directed by the researcher, then
it's an observational study.

		But when we're seeing, as we do in the handler exposure studies for
very good scientific reasons, that the researchers are scripting the
behavior, saying you will apply this amount of pesticide or you'll mop
the floor with this particular chemical, then it is an intentional
dosing study.

		DR. FISHER:  Can I just ask a question about that?  

		I think the blurry area which is, of course, the most interesting, I
thought it was really very informative to say that the subject makes
decisions about the exposure, where in the ag. handler I can see that in
some sense it's not occurring.  They're employees.  They're not making
that decision.

		How have you interpreted, if you have, these kinds of home
observations where a researcher may give a particular product or
formulation that's approved by EPA to a family who uses that type of
product, but then the family is told use it as you would use it.  So
that's kind of like on the border of your -- and I'm just wondering if
that's come up, if that's something that needs greater discussion at
some point.  This may not be the point, but I think that's that blurry
area that people are wondering about.

		MR. JORDAN:  I think that we have not done that.  Let me confirm that.

		DR. FORTMANN:  Right, point of clarification.  In the observational
studies that we do that EPA does, we do not ask anyone to use any
particular product.  We definitely do not provide any type of products
or make any suggestions of products that people would use in these
studies.  And that's why when we were preparing the presentation, we
wanted to make very clear what our observational studies are and that is
as people go about their normal activities in their normal environments,
we do not make any attempt to change the participants' behavior or the
participants' environment in the studies that we do.

		And this particular document is limited to the observational studies
that NERL does.  At ISEA last week I think some of the people in this
room were at the session that we had on ethical issues, and we made very
clear that observational -- this document does not address, for example,
intervention studies.  It strictly addresses observational studies.  So
no, we do not.

		DR. FISHER:  That's very helpful.  Thank you.

		Warren, did you want to say something?

		DR. LUX:  Sure.  Were I to see a study in which we gave a family a
substance to use, I would regard that on the basis of Bill's prior
discussion as an intentional exposure study, not an observational study,
because the researcher would be introducing into the research a
substance that otherwise would not have been there.

		With regard -- just one other slightly, somewhat related observation
for you, there are intervention studies where we alter exposures without
introducing substances.  For example, the placement of certain types of
filters in rooms to reduce exposure to certain kinds of air pollutants. 
Those sorts of studies I also would regard as observational studies. 
They do not entail the introduction of a substance into the research,
although the substance under study is reduced by the intervention.

		DR. FISHER:  Thank you.  These are very helpful.

		Dr. Fish, and then Dr. Krishnan.

		DR. FISH:  After this very elaborate discussion, I think my question
is going to be a five word answer or so.  

		In my many years of teaching research design courses, I've never heard
the term "scoping", in your planning and scoping.  Dr. Cupitt, could you
explain what scoping is, different from planning?  Differentiate those
two?

		DR. CUPITT:  I'm going to defer to Roy.  Those were his words?

		DR. FORTMANN:  And those are words that are commonly used in our
planning process.  The scoping involves the size of the study, the --
it's very integrated into the planning process.

		DR. FISH:  So is it differentiated in -- how is scoping different from
planning, I guess -- or are they redundant terms?

		DR. FORTMANN:  Scoping would be -- to look at -- they're complementary
terms, I guess.

		DR. FISHER:  Let me just try -- I mean my understanding of scoping is
that you have a research plan, but you have to scope out what the
practical environment -- what's the practical issues within the
environment that you need to study?  What is the house like?  What can
be done?

		So you might have a generalized plan, but until you go in and actually
know what's going on, you can't really finish your design.  Is that --

		DR. FORTMANN:  It really is conceptualization of a study that's needed
based on the research questions.  Scoping is the next step in terms of
addressing how you might address those research questions and the
planning would be the more detailed design of the study.  And Mike may
be able to help me out here.

		DR. LEBOWITZ:  Yes, I would add to that, the conceptualization is
extremely important.  Conceptualization of specific environments,
environmental concerns, even community concerns.  Prior to focusing down
on the goals, objectives of the research, and then that may lead to a
planning situation and then in which scoping becomes a practical aspect
of the planning.  But then there's also the scoping of what you do with
the data or the results once collected in terms of communication, in
terms of use by other members of the Agency or other groups in the
Agency or the -- or the scientific community in general.

		Sometimes that kind of scoping about other intra-agency interests may
also influence what is scoped out before the goals and objectives are
set out because there are overlapping interests of the different centers
and also in the different offices, so the Office of Water might say we
have a specific interest and so we understand if you're going to do
these kinds of studies looking at water, this might help our mission as
well within the office.  I've seen that both prior to, a priori and a
posteriori, where we helped scope out for some of the offices where they
might use information collected by NERL.

		DR. FISHER:  Thank you, Mike.

		Dr. Krishnan?

		DR. KRISHNAN:  My question relates to the intentional versus
observational studies.  The discussion has been useful, but I have a
clarifying question in that regard.

		The focus has been very much on the exposure, when you define or when
you look at the observational versus intentional studies.  How about the
subject manipulation in those studies?  For example, when we think of
biomonitoring studies?  I mean particularly blood sampling, those are
done for good reasons, but the blood sampling, for example, when it's
done -- well, it's an invasive procedure essentially.  To me,
non-invasive measurements in people as they go about their normal
activities is observational, but if it's invasive sampling, do you
consider that observational or intentional?  To me, it's not just the
exposure that defines, but also any manipulation of the subject.

		If the blood is obtained from a blood bank as they do a normal
activity of giving blood and then if you look at the concentration of
chemicals, that to me is observational, but if a sampling is actually
done in people using invasive procedure, that shouldn't be a part of the
observation.  That's what I'm thinking.  I want to get some
clarification from both of you.

		DR. LUX:  This is why I suggested that you try to separate the idea of
what is intuitively observational from the idea of what's observational
under the regulatory definition, because what you cite with invasive
procedures which is not intuitively observational, nonetheless will meet
the regulatory definition of observational, so long as you're not
creating an exposure to a substance that would not otherwise have
occurred and in which that substance is under study.

		So that the simple incorporation of invasive procedures does not
necessarily in and of itself get it out of the observational camp.

		DR. FISHER:  I wanted to say that my understanding of this has evolved
and I really think that Dr. Lux has put it in a way that I think that
we've been struggling with, that it may be that observational is not the
same as naturalistic in the sense that you do absolutely nothing.  All
you do is observe.  And I think what Dr. Lux is trying to say and what
I'm beginning to understand is that we have to avoid conflating
experimental control procedures or data collection procedures from
whether or not a particular chemical has been introduced into the
environment.

		So the regulations speak to whether or not that chemical has been
introduced by the researcher.  The regulations do not speak to whether
or not other experimental measures like urine collection or blood
collections are or are not observational.  That's part of the research
methodology, but it doesn't have to do with the introduction of the
chemical.

		And I think, you know, I think we've been maybe everybody has been
somewhat confused, but I think little by little it's coming together as
where those distinctions lie, and what becomes so important is what is
or is not permitted to be done under regulations if, in fact, it's an
intentional exposure and we have to be very careful that we are clear
that you can do certain things to measure exposure and also I think we
have to become clear that an observational study as I think this
document is pointing out, is not waiving ethical issues.  You still have
to make sure that the risks involved in terms of data collection, urine,
blood use, putting on these outfits that may have heat exposure, but
which doesn't increase your exposure to the chemical itself, you still
need to evaluate the risks and benefits of that, but it doesn't come
under the definition of intentional.

		DR. LUX:  I think Dr. Fisher is precisely correct about that and
that's the way we would look at it.  I would add with regard -- you
asked about manipulating the chemical versus manipulating the behavior
of the person and you can manipulate the behavior of the person in such
a way as to produce an intentional exposure as well, so we would take
that into account.  As Bill Jordan pointed out, the definition was
broadly set, so that if you manipulate the behavior of an individual
such that they would be exposed to a pesticide that they wouldn't
otherwise have been exposed to, even though you didn't introduce the
pesticide yourself, that would still meet the definition of
intentional exposure research under the regulations.

		DR. FISHER:  And I have one more question.  Because you gave an
interesting example, Warren, with respect to let's say you put in
various types of filters in various homes.  So basically what you're
doing is you may be testing it, but you may be reducing exposure.

Right?

		DR. LUX:  Correct.

		DR. FISHER:  And so I think what you were saying is that a
reduction of exposure would not fall under intentional exposure, if in
fact, the chemical itself wasn't introduced, but the exposure may be
reduced by some kind of experimental manipulation, that also would
not be considered intentional exposure?

		DR. LUX:  Right, that's the way I would interpret it.  That one
requires some interpretation, but I think if you're mitigating an
exposure, that one would keep that in the category of observational. 

		As Bill is pointing out, mitigating an exposure that would otherwise
have occurred.  We're not introducing the exposure that we're mitigating
and I think that indeed is a very important point.

		Most of those studies though actually that I've seen in my year here
that have come across my desk have not been human studies anyway because
the data are being acquired about the air and there are no data being
acquired from the humans anyway.  But in any event, were one to acquire
some collateral data from the humans, I think it is likely that such a
study would be regarded as observational.

		DR. FISHER:  Two things.  One I wanted to make clear that some of our
interest is also in terms of how we will be evaluating some third party
research that comes.  So some of our questions are not addressed to the
kinds of research that EPA may not be doing.

		My final question has to do with clinical trials.  And if, in fact, a
substance is introduced for a clinical trial purpose, but also has
information that could be used as we've seen in some of the studies that
we've looked at, is that intentional exposure or because the purpose was
not to measure the compound from an EPA perspective, it is or is not or
has that not been decided yet?  Because I saw in the minutes from the
meeting, Warren, that you mentioned something about how it was not.  But
I didn't know if that was circumscribed to a particular type of
intervention study.

		DR. LUX:  The regulation as I read it, does not distinguish among
different types of substances.  It does not distinguish among toxic
substances in different substances or therapeutic substances.  So if
we're introducing any substance then it is an intentional exposure
study, as I read the regulation.

		That can lead to what I think may have been some unintended
consequences with regard to what is and isn't permitted and that what is
going to be done about that going forward is under study.

		DR. FISHER:  So for example, if there's been a clinical trial with
children for a particular substance that was used for medical purpose at
this point it's unclear whether EPA could ever use that data, is that
the case?

		DR. LUX:  Correct.

		DR. FISHER:  Are there other questions? 

		Yes, Suzanne.

		DR. FITZPATRICK:  Yes, I just have a simple question.  I was wondering
how many of these studies you run every year and how you set your
priorities?  And on top of that, if you -- I noticed you could do a
CRADA or something with an outside stakeholder, how do you make sure
that they don't prejudice the results or you know, if they have a vested
interest in the outcome of the results to make sure that's, at least it
doesn't have the appearance of a conflict?

		DR. CUPITT:  The number of studies we conduct is actually very small. 
Oftentimes, we'll conduct a number of very small studies looking at a
methodology to see if a methodology would work.  If we then translate
that into a very large study that happens very rarely because it takes,
for example, in the study that's going right now which is in Detroit,
looking at air exposures of adults, we've had three years of sampling. 
And after you do three years of sampling, you need the time to make sure
you've got all the data analysis done.  So it happens very rarely. 
Unless we're doing small-scale studies to look at a sampling protocol,
one way to figure out how that -- how either the chemical is distributed
throughout the environment or how people's activities bring them into
contact with that.  Those we would -- we haven't done any children's
studies now for the last few years, but we might do one a year or maybe
one every two years, something like that.  The number of studies is
actually very small, simply because of the logistics of getting into it
and defining it.

		We tried to discuss in Section 2 how we set those priorities.  And
historically, one of the things that we did was we brought together
outside panels of peer reviewers to talk, for example, about if we
looked at children's exposure what are the various pathways and what do
we already know -- we went to people within the Agency and experts
outside, what -- where do we think that the weaknesses are, what are the
weaknesses, and how would we go about looking at each of those kinds of
components?  And based on that, based on the availability of the funding
and on the needs of the program offices within EPA, we set some
priorities in terms of what we would do and we actually set about doing
a series of studies to address those weaknesses.  That was going to be
culminated then in a larger study involving putting all of those
protocols and what we learned together to try to test it then on a
larger sample size.

		DR. FITZPATRICK:  Do you have -- does EPA have the ability to require
sponsors to do these kinds of studies if you're concerned about a
compound?

		DR. CUPITT:  ORD generally doesn't get into that, but Bill can speak
about other parts of the Agency which do have regulatory authority
associated with that.

		MR. JORDAN:  EPA's pesticide office is unusual in that we do have the
authority to require pesticide companies to generate data necessary to
support the continued registration of a pesticide.  When it comes to
other regulatory programs in the EPA, the authority is not as strong. 
Air pollutants, water pollutants do not have the authority that we have
for pesticides.  For industrial chemicals, there is such authority, but
it can only be invoked through the use of a fairly cumbersome process
and is fairly infrequently used.

		DR. CUPITT:  And ORD has no regulatory authority.

		DR. FISHER:  Michael?

		DR. LEBOWITZ:  Yes, I would like to expand that though in terms of the
Agency as a whole.  I mean within ORD the centers work together so we're
sponsored by and serve actually, in fact, as is often utilized and
there's some interaction with NERL, with HERL, etcetera, so that there
are other observational studies going on, funded in a different
mechanism of a second party.  This document doesn't speak to second
party or a third party research, but at the same time that information
collected through ORD is used by the offices for regulatory purposes and
in fact, Clean Air Science Advisory Committee is part of ORD, but in
fact, is recommending through both Office of Air and most importantly
through the ORD AA to the Administrator of EPA in terms of the Clean Air
Act as an example and similar things happen with water, etcetera.  

		So the Agency -- there's actually a lot more interaction within the
Agency in which NERL participates and its research is used as well by
the other offices within EPA to help it in its regulatory policies.

		Is that a fair statement, Larry?

		DR. CUPITT:  We certainly do collaborate with other academics and
people who have received grants.  We have done that historically.

		Mike, I did want to make one minor correction.  We do indeed intend to
use this document for second party research that we sponsor as NERL
because our PIs will use this as an internal resource, whether they are
conducting the work themselves or whether they are overseeing work or
collaborating with work that we in NERL have indeed sponsored.  So it is
for the NERL scientists and PIs to use to make sure that the work that
we might sponsor through a contractor.  For example, if we need to go
and do a lot of field sampling in some specific location, we may not
have the travel funds or the ability to get people out there to do the
field sampling.  We have to hire a contractor to do that.  That work,
under our guidance would still meet all of the requirements that we have
to meet and this would still be a useful tool for those PIs who are
involved in that research.

		DR. FISHER:  Thank you.  Are there any other Board comments?  

		Are there any public comments?  Excuse me?

		Okay, we're going to -- the Board comments, if it's on the particular
sections, we'll do after the break.  But if it's about this general
discussion, then we're having additional Board comments now, but we are
going to have a long time after.

		Steve, did you want to comment or you can wait?  Okay.

		Anybody else?  I am assuming there are no public comments.  And please
remember that if there are public comments at other points, to come and
tell Paul about that so that we can put on the docket.

		Okay, so we're going to take a break.  We're a little early which is
great, so let's come back at 10:30 and that will give us 15 minutes. 
Okay?  Thank you.

		(Off the record.)

		DR. FISHER:  Larry I think had a couple of slides.  He was trying to
save us some time, but I think they would be helpful to us.  And so he
is going to present them.

		DR. CUPITT:  Okay.  Could you go to slide 20 in my presentation,
please?  I wanted to take an attempt at talking about the difference
between scoping and planning ever so briefly.  Scoping is basically
looking at what the big objectives are.

		For example, this shows our modeling schematic for how we might choose
which pathway would be important for exposure.  We might want to as
scoping look at all of the pathways and determine which one do we have
the least information about or the greatest uncertainty, which one could
have the greatest impact.

		These were hidden slides.  They are not in your copies.  I'm sorry. 
Basically the scoping would be deciding which of these things we would
want to choose to do.  And the planning would be deciding how we would
want to go about doing it.

		Now, if you could go to the next slide, 21, this actually is so that
you can read the definition that Bill Jordan read from earlier in terms
of what the regulations actually say involving intentional exposure and
basically that these sections say that everything else is observational.

		And Bill had made the point about the various components of that.  I
just thought for those people who understand things visually you might
want to be able to read it yourselves.

		DR. FISHER:  Thank you very much.

		I think a question that came up, I know Michael asked this question. 
There were two sentences I think that we used.  One is up there, which
is the introduction of a substance that would not have occurred.  And
the other is introduction of a substance into the research by the
researcher.

		And so I want to clarify if, in fact, it's a product that might have
been used or formulation that might have been used anyway by the
subject, but it is provided by the researcher.

		Is that intentional if, in fact, the subject has the right to use it
in any way they -- you know, is asked to use it in any way they would
use it normally?

		DR. LUX:  If we have asked them to use it as a condition of the
research, then I would regard it as meeting this definition of
intentional exposure.

		DR. FISHER:  Okay.  And I have one other question that some of the
Board members have been asking.  If a third party conducts an
observational study to which -- and I think as we have been talking
about and, as the document that Larry and Roy have prepared beautifully
points out, you still have to evaluate the ethics and the risks and the
benefits of observational study, according to the regulations, the third
party does not have to submit that a priori, right, in terms of its
plan, as it does for intentional exposure?  It has to submit it, and
then the Board makes some recommendations?

		However, is there an ethical review when third party observational
study is submitted or in the future if it's planned to EPA to which it
may or may not accept the data based upon failure of ethical procedures?

		DR. LUX:  Bill can answer how OPP reviews it in terms of EPA's
internal review, but all such research is also required to undergo
external IRB review as well.

		Bill?

		MR. JORDAN:  Yes.  EPA's Pesticide Office will review any results of
observational research conducted by a third party that is submitting
under the pesticide laws for an ethics --

		DR. FISHER:  For ethics.  And so even if the data is good, might the
data not be used if, in fact, the research was not conducted in
conformity, even if it was IRB-approved but not conducted in conformity
with ethical standards and let's say 45 CFR 46, which are related, like
under A?

		MR. JORDAN:  It's a possibility.  We haven't had that situation arise
yet.

		DR. FISHER:  Okay.  So now we're going to begin with the Board
discussion of the document itself.  And I'll start with Steve, who is
the lead discussant.  We are going to start with section 1.  So, Steve?

		VICE CHAIR BRIMIJOIN:  Thanks, Celia.

		So I think this part, at least my part of this discussion, will be
brief so we can get to the meatier sections of the document.

		But I'll just start out by saying that this is possibly the single
best written, most gracefully and clearly written federal document I
have encountered.  That doesn't mean it's perfect.

		(Laughter.)

		VICE CHAIR BRIMIJOIN:  But I really can see that a great deal of
effort must have gone into the writing and the editing of this document.

		The introduction of this document is organized in a straightforward
fashion.  And I think it does what an introduction is supposed to do. 
It clearly points the way toward the specific sections that are going to
follow and gives an outline of the approach that will be taken in each
section.

		It begins itself with scoping.  And, actually, I think we have
probably heard enough about this, but scoping I think is best regarded
as a jargon term.  And I think one possible tweaking for the document
would be to spend a sentence defining that a little bit more clearly for
especially those investigators outside of NERL who may be a little
confused.

		It turns to the issue of what -- well, it defines the scope of this
document as pertaining specifically to observational exposure studies,
not intentional dosing.  And it makes a start at defining observational
exposure.

		I think the sense maybe all of us have from the discussion that
preceded the break is that it is still not a matter of crystal clarity
in the average person's mind, even the average Board member, or hadn't
been up to this point precisely what is meant by this.

		And I think a good start has been made here.  I think it would improve
matters if this went a little bit further.  It has become clear.  I
think Dr. Lux has done really an excellent job of separating these
concepts of sort of ordinary language usage of observational and
regulatory language usage.  And it has become clear that observational
does not necessarily mean benign.  That is, of course, why there are
ethical issues involved in observational studies.  And an observational
study might be carried out by someone who was doing what they always do
with things they always do.  And at the end of the study, they had
volunteered to give up a kidney.  Obviously there would be intense
ethical issues associated and can be.

		So I suggest two things.  One is just a little bit more, maybe a
paragraph, following up on Dr. Lux's analysis to define the issue.  And
then it might be a text box with some examples because it does come down
to examples.

		For example, we know that with the ag handlers task force, that that
is going to be treated as an intentional exposure, even though "these
guys are doing their normal jobs," but it is intentional because we are
scripting their activities.  We are asking them to do things that they
may do sometimes but not specifically today.  And they may wind up using
similar but not the same chemicals that they would have otherwise used.

		So I think it would be helpful again, especially as this document is
made available to third party PIs, who are looking for some background
to have that kind of explanation and to give them guidance.

		So I understand this document makes very clear that it is not a
guidance document.  A guidance document is a coercive document.  This is
the document that actually will offer guidance.

		And I am not suggesting that it should be stated in those terms, but
that is the way I understand it.  It is to be helpful, much as I heard
yesterday coming down to Metro, instead of telling people, "Stand to the
right in the escalators," I heard a voice coming over the PA saying, "If
you're new to the Metro, you'll notice that most people will stand to
the right on the escalator."  That's the kind of guidance we need.

		So those are really the essence.  That's all I have to say about the
introduction.

		DR. FISHER:  Thank you, Steve.

		I wanted to point out in New York, the instructions are
"Jurah-jurah-jurah" on the Metro.

		(Laughter.)

		DR. FISHER:  Gary?

		MEMBER CHADWICK:  I agree that this is one of the best written
government documents I have seen in a long time, although that is,
admittedly, a low bar.

		(Laughter.)

		MEMBER CHADWICK:  I do think this is good.  This is really good.  I
was very pleased in reading through this.

		I guess just a couple of general comments that I would make, sort of
in line with what Steve was saying.  I think it does provide a solid and
succinct description of the content and also some background information
for the document.

		I think, as I was reading through the document, not so much in this
section but throughout the document, references are made to
publications.  And it basically says, "Go take a look."  And I think
there could be some more attempt at summarizing what the key points are
and why people should go take a look in this section at table 1-4 that
actually does that pretty well for this particular section.  But I found
that lacking in other sections, that sort of approach.

		So I would commend you for use in this particular section and suggest
that it really would strengthen the document, particularly as you have
said that you wanted it to not only be a stand-alone document but even
the sections itself to be stand-alone to take that opportunity to really
make it so that once I have read a section, I am pretty well-grounded in
what at least I should know, not necessarily -- and I agree with the
approach taken that you are not answering the questions, you're not
telling exactly what needs to be done.  But at least you're saying,
"Think about these things.  These are the things you need to think
about."  So I would make that comment.

		I do think that the document itself addresses the major areas and
concerns and so forth.  I think there was a comment made earlier about
-- actually, I think it was your lead-off comment about the fact that
you're not ignoring the science in this document.  But I think to some
extent you are.

		And it might be useful at least to make some acknowledgement of how
science and ethics fit in, again not necessarily to talk about all of
the different study designs and scientific principles and that sort of
thing, but I think there is an assumption that particularly those around
this table and perhaps the writers as well have made about how science
and ethics interplay that I think is not necessarily clear, particularly
to new researchers in a field, how it works and how good science and
good ethics feed each other.

		One of the things that I noticed throughout this document is there are
references -- and I'll probably make a comment in another section later
-- there are references that are more I think appropriate to a
journal-style document, rather than this non-guidance guidance document
-- I'm not sure how I would characterize this -- that basically
summarizes what was found, as opposed to taking ownership of the
decisions.

		An example that I found in this particular section was on page 10,
lines 1 through 3, talks about how the document is limited to
observational studies, but it basically says the panel recommended that
it be, rather than saying based on a panel recommendation, we have
limited, so just a little difference in flavor.

		Again, I think this is an extremely well-written document.  I think
this has the opportunity or the potential to be one of the most
important documents since the IRB guide book that came out over 15 years
ago.  So I am really, really excited about this.

		So I'm done.

		DR. FISHER:  Any other Board comments about section 1?  Rebecca?

		MEMBER PARKIN:  One thing I noticed as early as this section that
comes up throughout the document, I'm not quite sure where we bring up
those things that are over-arching, but this is one that struck me here
in this section -- and it also struck me in the slide presentation --
was that the abstract talks about, for example, that you're going to be
addressing chemicals and other stressors, but many times the document
flips back and only focuses on chemicals.  In fact, I didn't even find
any example other than a chemical example in the entire document.  So it
leaves the reader with the impression that you don't really mean it
except for chemicals.

		I did notice that on your slide, you did add radon as an example.  So
I am sure you have them, but a reader comes away with a sense of, well,
they really only care about chemicals.  And I think it is important that
the language and the examples reflect that throughout the entire report.

		DR. FISHER:  Thank you.

		MEMBER KRISHNAN:  Celia?

		DR. FISHER:  Yes, Kannan?

		MEMBER KRISHNAN:  I just want to reemphasize the importance of
expanding on the observational studies themselves.  The definition right
now, they just got both of them.

		The chapter starts out with the common or lay term definition of, you
know, the measures in people as they do their daily activities.  And
then the regulation-related definition is subsequent.

		At the bottom here is a note that I think would be better if you
could really take up into the main text, capturing some of the
discussions we have had.  I think that would be very useful,
particularly because on the first page, around line 34-35, it says that
observational studies do not involve any manipulation of a person's
behavior.  Based on what we heard, some procedures -- like the scripted
exposure, invasive sampling, and things like that -- depending on how
broadly we interpret them, would in part be considered an intentional
study and do not go along with the description here.

		I think we just have to be bold and say some of those things.  And it
makes a lot of sense, essentially, to capture several of our own
comments as well as Dr. Lux's.

		DR. FISHER:  Thank you.

		Okay.  So we will go to section 2.  And, Susan, you will begin that?

		MEMBER FISH:  Thanks, Celia.

		I would like to also reiterate the comments that have been made by
Gary and Steve that this is quite a remarkable document.  And since I
teach many classes, courses, on designing research in humans, I looked
at this section and thought, "Oh, this will be one that my students will
have to read."  I mean, I think it is just a remarkable document.

		So to begin section 2, which is really the scoping and planning or
conceptualizing and planning of the study -- I think "conceptualizing"
might be a better term than "scoping" because I do realize that it is
jargon.  But if this is meant to be an internal document and that is a
term that is used here, then I don't feel strongly about that.

		My first concern had to do with what is in figure 2-1, which is
separating the study design document from the human subjects protocol
document.

		Now, in the text, you explained that these are integrated or
overlapping documents.  My concern with having two documents is that
this is a setup for inconsistencies.  And it seems that throughout the
rest of the document, you say, "Science and ethics are interrelated." 
But to have two separate documents is of great concern to me.

		It might be that the study design document is part of the human
subjects protocol, but I caution you about having two different
descriptions because it is a setup for inconsistencies, if that makes
sense.  And that may be what you were intending along the way.

		The next item is in text box 2-1, the elements to be considered in
justifying a study.  I would like to suggest that the third bullet down,
a discussion of why human participants are required in the study, should
include a discussion of alternative designs or models that have been
considered, to help justify the use of humans in the particular way that
is being proposed.

		Section 2.2.1, which is entitled "Innovative Study Designs," I think
this section either needs to be retitled or the content needs to be
redesigned because I think the section to me doesn't speak to innovative
study designs but, rather, speaks to adding direct benefit for
participants into a research protocol, where the direct benefits are
being added just for the benefit, not affecting the research protocol;
for example, providing educational material to the participants.

		I think that is a noble cause, and I think that is a good step, but I
am not sure that I would entitle that "Innovative Study Designs."  I was
thinking more of Bayesian designs or along that line.  So it either
needs to be retitled or rewritten.

		Section 2.2.4 talks about conflict of interest, a subject near and
dear to my heart.  There is a statement that says there are many sources
of conflicts of interest, but those related to project funding -- that's
on lines 7 and 8, I believe -- are most likely to occur.

		Now, I'm not familiar necessarily with the kinds of research and
funding that this document is intended to apply to, but in my experience
in an academic setting, it's not that conflicts of interest related to
project funding are the most common to occur but that they are the
easiest to identify.  The conflicts of interest that are non-financial
-- those that have to do with personal promotion and winning the Nobel
Prize and those kinds of things -- are actually probably more common but
much more difficult to identify and to manage, reduce, or eliminate.

		Text box 2.3, "Elements to be Included in a Study Design."  I would
just like to expand on a few of these.  One, about halfway down that
bulleted list, one of the items is "Environmental conditions, factors,
or endpoints to be measured, including sampling and analysis
approaches."  I wanted to just suggest that you include something about
precision and accuracy in that description.

		And in the next bullet down, the "Survey Design and Questionnaires and
Other Survey Instruments," I think I would suggest you include whether
or not those instruments are validated instruments or new instruments
that need to be validated.

		In addition, regarding items in this text box -- I'm sorry, I didn't
do this homework because I read the article on the plane on the way down
yesterday -- but in last week's Public Library of Science Medicine,
October 16th -- I think that was last week -- there are two articles
relating to strengthening the reporting of observational studies in
epidemiology, listing items that should be included in a report of
epidemiologic studies, meaning observational studies.

		I think some of the tables in these two articles -- and I can give you
the references -- may be helpful to you and certainly should be
referenced.  I understand it's last week's article.  And you may not
have read it yet.  I read it yesterday.  So I will give that to you.

		Section 2.3.1.1.  Standing in for Alicia on the point of sample size
determination, I applaud you tremendously for having a section on sample
size determination.

		And I think it's wonderful.  I just take exception to one of the
sentences.  And that is that the author that you cite, Dr. Lenth, notes
that there is a surprisingly small amount of literature on sample size
determinations.

		And I defer to my statistician colleagues, but I don't find that to be
true.  I think there's a tremendous amount of literature on sample size
determinations.  So that's probably a small point, but I felt like I
needed to say that just for Alicia, who is not here.

		Text box 2.5, which is the potential topics for the human subjects
research protocol.  And I think you did a great job describing that each
IRB has its own form and its own requirements and all of that.  And I
think that what you have listed here is really along the lines of best
practices, well beyond what any single IRB might require.  So I think
it's really quite wonderful.  There are just a few small points on this.

		In number 22, you discuss adverse events.  And in the absence of Dr.
Prentice, whom some of you don't know but some of us do know, if he were
here, he would say that the terms should be changed to "unanticipated
problems affecting subjects and others" because, in fact, that is the
language in the regulations.  And it is broader than adverse events.

		So if, for example, a laptop were to be stolen and it contained
subject identifiers, that might be an unanticipated problem affecting
subjects or others.

		Number 34 on this list is that the human subjects research protocol
should include all foreseen uses of personal data or biological
materials.  And I just would urge you to consider whether future
unforeseen uses might be included -- a discussion of potential uses that
are not planned at this time.  You know what I mean.

		And then number 42 on that list is procedures for dealing with
falsification of data.  I think one of the other pieces of that concept
is procedures for preventing the falsification of data, which would be
as important as dealing with it once it has occurred.

		And I believe that's everything.  Those are all the small points that
I had.  Again, it's a fabulous document.  And I found a few typos.  And
I will give those to you just because they pop out at me.  It's a
character flaw I have.  But thank you for drafting this document.

		DR. FISHER:  Thank you, Sue.

		Mike?

		MEMBER LEBOWITZ:  I'm going to take a parallel but somewhat different
tack on this.  Now, I will say that this section is very strong,
especially in its consideration of the overall conceptualization of
study planning, and especially the ethical component, which is often
insufficiently conceptualized by the scientists in the initial approach.
 And the planning and scoping include both, as does the review process. 
And that strong emphasis is beneficial.

		The major area of deficiency, given the purpose of the document, is
the paucity of information, materials, and references regarding the
purpose, design, and conduct of exposure studies.

		Even the initial paragraphs place more emphasis on the ethical issues,
which are covered extensively elsewhere, than on the scientific ones. 
And I think there should be more of a balance.

		There is an interesting aspect of review as well, where the ethics
reviewers have the assignment also to be reviewers of the science
component and vice versa, and that this may pose specific problems of
expertise.

		There is little explanation or coverage of exposure study designs and
methods, as I mentioned.  And, specifically, there are different
attributes.  Sufficient examples are not provided, and references to
such discussions, as well as to such studies, are inadequate.

		And there are excellent sources of materials to guide researchers,
starting with EPA documents; for example, actual examples and references
to TEAM, PTEAM, ENHEXAS, the pesticide studies, et cetera.

		There are also excellent NRC, NAS, and WHO documents that could be
utilized.  There are many lessons learned from such that could be culled
to provide research guidance.  I use the term "guidance" here realizing
it's not a guidance document, but it is still appropriate.

		Some of the sources include the NAS 1991 report on human exposure
assessment for airborne pollutants, the WHO EHC series, including number
27 on guidelines on studies in environmental epidemiology, and the WHO,
Euro, and EC/EU documents on exposure assessment from the ECEHes and the
EU labs.  Detailed comments:  there are still contradictions, actually
starting in section 1.1, as to whether epi studies are included or not. 
That is easy to clarify.

		Section 2.1.1 doesn't define the types of study problems and questions
scientifically, and doesn't provide the specific EPA or other references
where such can be found.  Likewise, 2.1.2 doesn't provide any basis or
criteria for justification of the science component -- or, again,
specific references -- and only provides the ethical component.

		2.2 doesn't outline the steps in planning the study scientifically or
where that information can be found.  2.2.1 has only one sentence about
the scientific aspects and, again, doesn't really discuss innovative
scientific aspects.

		Text box 2.3 is the closest this section comes to delineating the
components of the study that are relevant also to its planning.  Section
2.3 doesn't provide some of the more important specifics of the scope
and technical approaches or again provide the specific EPA or other
references where such can be found.

		2.3.1 doesn't include all the important and appropriate questions
regarding scientific feasibility.  It is one bullet that doesn't even
include questions about the feasibility of measurement methods.  2.3.1.1
is an insufficient discussion of sample size, as mentioned.

		And I must say, given all the textbooks on the subject, it should
include some specific survey statistical textbooks -- there are plenty
of such references.  Also, the issue of design factors, which can be
found elsewhere, isn't mentioned here.  One can see Clickner's book or
other books on the topic.

		Then the discussion of expected refusals and losses that need to be
taken into account isn't mentioned.  Issues of intra- and
inter-participant and observer variability aren't discussed.

		The subsections should also include monitoring of observer errors and
biases, participant reporting biases and reliability, inappropriate as
well as inadequate selection criteria, et cetera -- or references to
where that can be found.

		In response to the charges, to me this section appears to have
identified the major areas and issues where ethical considerations
should be addressed in the study conceptualization.

		There are many additional sources of information, as I mentioned, that
should be considered for inclusion.  And that was part of our charge, to
mention that.  And I've given some examples.  In addition, the current
section 2 is insufficient from a scientific standpoint as a resource of
information that researchers should have and address in designing and
implementing observational exposure studies, even though the information
otherwise appears adequate.

		I would like to emphasize, too, that there are, in fact, specific
examples that can be included -- not just references to them but
specific examples of NERL studies, like the pesticide exposure studies
-- that I think would be very useful to researchers, even within NERL,
since not all of them have the historical knowledge or perspective to
appreciate how NERL scientists and consultants have addressed those
problems previously.

		And there are also methods of measurement that are appropriate to
discuss in that regard.  I understand from one of the EPA NCEA people,
Mike Dellarco, that there are even new patch methods that have looked at
passive dosimetry for dermal exposure as an example, which is very
important, I think, in general discussion, showing that, in fact,
exposure assessment continues to be expanded and developed.

		So none of these comments are critical so much of what is put in the
document as of what has been left out, from my perspective.  But, then,
as you know, I have a very specific scientific perspective on the
issues.  And so I think you will take it with that framework in mind and
understand that I am not trying to be too critical.

		As I say, I think the charges are answered by saying you have done a
great job.  It just means that you can have more to benefit your own
researchers and second party researchers.

		Thanks.

		DR. FISHER:  Thank you, Michael.

		Dallas?

		MEMBER JOHNSON:  Thank you.

		My comments will probably fall a little bit along the lines of Mike's:
 not that anything is wrong with what is there, but what still seems to
be missing from my point of view.

		And, as a statistician, worrying about getting samples that represent
the population that you are trying to make an inference to, there's
nothing in this section that I expected to see in terms of talking about
selection of participants and/or selection of sampling units that you're
going to measure.

		And so I thought, well, maybe I just missed it.  The nice thing about
having a .pdf document is that you can do a search for words.  And so I
searched for the word "random" to start with and found that the word
occurred twice:  once on page 73 in section 6.2, where the word was
"randomized"; and then the word "random" also occurred in section 7.3. 
So then I looked for "sample."  And "sample" actually occurred in the
same two locations.  So I'm assuming that it was used with the word
"random."

		I think that the document needs to say something about what are the
ethical ways in which you select participants, given that we stated that
bad science is unethical.  And I think in science, one of the main
criteria is to make sure that the participants you are observing are
representative of the population that you want to make an inference to.

		So that's my main point, I guess, and I hope that you can address it
as you do a revision of what is there or maybe you don't intend it to be
there.  But it just seems like it's missing, that it should be there.

		DR. FISHER:  Thank you.

		Suzanne?

		MEMBER FITZPATRICK:  Addressing the science, I'm assuming that NERL
has other guidance on how to set up these kinds of studies, like the
how-tos and stuff.

		So it might be that's why it wasn't included in this, you know, type
of guidance.  I don't know.

		DR. CUPITT:  We have a great deal of expertise in terms of doing that.

		MEMBER FITZPATRICK:  Yes.

		DR. CUPITT:  And clearly our focus was on those areas that, frankly,
we have not had so much expertise on.  I appreciate the comments.

		MEMBER FITZPATRICK:  Yes.

		DR. CUPITT:  And you are quite right.  Those are important things that
are missing.  And we will have to address them in the revision.  So
thank you.

		MEMBER FITZPATRICK:  I mean, I thought more that you could just
reference the other documents rather than adding it to this one because
then it's going to be, like, twice as big.

		So if you already have guidance, you probably have SOPs on how you set
all of these up; just reference them, rather than putting them in.  At
least in my opinion, this might help, because if this is twice as big,
you run the risk of no one reading the whole thing.

		So the other thing I had was just a quick note.  Susan had talked
about putting in that there might be some unforeseen uses of data that
people might consent to.  Our lawyers have told us that under HIPAA, you
can't do that.  I'm just wondering what people think about it.

		We're not allowed to do that anymore.  We can't put that in that your
data might be used for some unforeseen uses.  That might be just our
lawyer's interpretation or something.

		DR. FISHER:  But that's only for -- it has to be identified as PHI.

		MEMBER FITZPATRICK:  Right.

		MEMBER FISH:  And it's also a HIPAA-covered entity.  And I suspect
these researchers are not HIPAA-covered entities.

		MEMBER FITZPATRICK:  Well, we're not either.  So I'm just saying that
we're not a covered entity either, but that's just something I'm
throwing out.

		The only other thing I am concerned with, Warren, is your role as the
final arbiter of it.  I am wondering if they bring you in at the concept
stage, too, because otherwise they have gone through all of this work
and then you'd say no.  Then everyone is going to be mad at you.

		You know, so I'm wondering if they do some kind of concept review with
you to decide whether you are going to agree to the concept of what they
are doing.

		DR. LUX:  There's not a formal mechanism that's been set up for that
yet.  My office as a full-time office is very new to the agency.  And
these are the kinds of things that are under evolution.

		I have an open phone line.  And so they do talk to me frequently.

		MEMBER FITZPATRICK:  We do that at FDA.  If someone is going out, we
have a concept review beforehand to make sure that we agree with it
ethically.  And then it comes back for a more formal review.

		DR. FISHER:  Thank you, Suzanne.

		Are there any other comments about section 2?  Yes?  Okay.  Sure.  Dr.
Kim and then Steve.

		MEMBER KIM:  Thank you.

		I just want to follow up on earlier comments made by Dr. Johnson and
Dr. Lebowitz.  I did the same kind of exercise that Dr. Johnson
mentioned in terms of looking at the .pdf file search capabilities.

		I mean, I consider these observational studies essentially survey
sampling.  And I looked for a reference to survey sampling.  I couldn't
find a single one.  So I suspect that there wasn't really a connection
in terms of the methodological issues.

		And also the reference to sample size calculation is not the type of
sample size calculation that you would like to do for a survey type of
study.  And I think the closest thing that I can think of is the type of
studies that the National Center for Health Statistics does all the time.

		And I found a reference to the NHANES survey; that is the only
reference to survey sampling mentioned in this entire document.  So I
think there is something missing here that needs to be included to make
this a really useful document.

		DR. FISHER:  Thank you.  Steve?

		VICE CHAIR BRIMIJOIN:  I just wanted to go back to Sue Fish's one
small comment about replacing adverse events with unanticipated
problems.

		What would you think, Sue, about "adverse events or other
unanticipated problems"?  "Adverse events" stands out; it leaps to mind
what you're talking about.  "Other unanticipated problems" on its own is
a very nebulous category.

		MEMBER FISH:  Steve, I think that's a wonderful suggestion.  The
research ethics regulatory world has moved away from adverse events and
more toward unanticipated problems, which confuses researchers.  And so
I think your suggestion, in fact, is a fabulous one.

		MEMBER CHADWICK:  However, on that point, you wouldn't really expect a
lot of adverse events in this type of research.  And, again, this
document is discussing observational research, not clinical trials. 
And, therefore, I think "unanticipated problems" is actually more
descriptive of what we are asking about here than "adverse events."

		DR. FISHER:  Well, let me ask you a question.  Maybe this is covered
later on in monitoring and communication, but in observational studies
-- and I'm not sure this would come under either adverse or
unanticipated -- what if you find in a child's urine sample an
unanticipated high concentration of a toxic substance, one that may not
be in all the subjects?  What would that be?  Okay.  A health alert. 
Right.

		In other words, those are things that may happen.  And whether that is
adverse or unanticipated, it's probably a good example to put in.  And
it's probably tied to the communication issues with participants, as it
is when you look at a DSMB.

		Is that going to be discussed later on, in another section?

		DR. CUPITT:  It is somewhat discussed later on, but it probably needs
to be discussed again in section 2.7, "Establishing Criteria and
Standards for Monitoring Scientific and Ethical Issues During a Study."

		What you are talking about is something that perhaps we didn't
reasonably anticipate, but there needs to be a process in place so that,
when it shows up, we have steps to go through to establish what we will
do.

		DR. FISHER:  Yes, Rich?

		MEMBER SHARP:  So on that point, I would be disinclined to regard that
as an adverse event, first of all, because I think it is predictable and
so forth -- the sort of thing for which we just need to have a plan in
place should it come up.  But that is not to deny that there wouldn't be
other types of adverse events that really could come into play in purely
observational studies.

		So, for example, if you go into the home and start asking personal
questions, and somebody in that home finds that very objectionable and
attacks a study coordinator or something like that -- clearly that's the
sort of thing that would be considered an adverse event, a reportable
event, and so forth.

		And so, even though there may not be any intervention, the mere
presence in the home could be associated with some very unusual adverse
events.

		DR. FORTMANN:  Let me just add that section 7.8 is on reporting
unanticipated results or observations.  And that specifically addresses,
if you identify an elevated concentration, procedures for determining
what would be action levels or triggers for reporting back to
participants about unanticipated results.

		In other parts of the document, we talk about how to address
collateral observations and reporting of hazards in the home that aren't
part of the study.

		It is in different parts of the document, in different sections.  And
one of the difficulties, obviously, with the document is that there are
a lot of issues that cut across the different sections.  And, to the
extent possible, we have tried to link those sections, but we may not
have accomplished that in all cases.

		DR. FISHER:  I just want clarity.  It says here, aren't there adverse
events and unanticipated adverse events in the regulations?  So there
could be an anticipated adverse event.  I just wanted to clarify that
there are really three categories:  anticipated adverse events,
unanticipated adverse events, and then those unanticipated problems --
just to clarify.

		What I thought we might do right now is go on to ensuring protection
of vulnerable groups.  But usually I summarize what we have been talking
about and then just get discussions.

		So I thought stopping here for at least what I have heard from
sections 1 and 2 would probably be helpful so we don't have to go back
and remember what we are talking about.

		So everybody said this is a fantastic document.  And I think the real
contribution, as you may see, is that a lot of what Board members are
responding to is not simply how it is going to be used internally but
the educative value it will have externally.  So we thank you for that. 
And also please understand that some of our comments are really directed
at that.

		So, as we have been talking about, something that really defines what
scoping is would be helpful.  And obviously you have a slide.  So you
can certainly do that.

		I think some of the other suggestions had to do with increasing the
clarity of what observational studies are under the regulations.  That
was really, I think, important for us -- and maybe for other people --
in terms of distinguishing between the regulatory language of
"observational" versus what Warren called the intuitive notion, and the
distinction that invasive measurement procedures are still observational
studies under the regulations because they do not include any
intentional exposure to the substance itself.  And it may need to be
highlighted that observational studies do not mean no risk and no
ethical evaluation.  That's obviously the whole point of the document,
but it's probably important to underscore, especially for those reading
it who are not in EPA.  And people had asked for examples.

		I agree that bulleted summaries or something like that for each
chapter -- some of what you have done in other chapters -- just to say,
"This is what the chapter said, and these are the highlights," would be
very, very useful.

		Then there were clarifying statements regarding scientific validity
and ethics.  And I think it was pointed out, in section 2, box 2-1, that
you might reflect on whether having two different boxes in some sense
undermines your own argument, or whether there is a way to link those
boxes.  There were also some minor language issues -- for instance,
saying not just that the panel recommended something but that, based on
the panel recommendations, you have included some things.  Then there is
the issue of providing examples of stressors that are not just isolated
to chemical components, and also maybe a statement or a limited
discussion on whether epidemiologic studies are included in this type of
research.  I assume they're not if they're survey research, but I have
no idea.

		Yes?

		DR. CUPITT:  Did you want me to respond?

		DR. FISHER:  Sure.

		DR. CUPITT:  No, we're not planning to include epidemiological
research in terms of what we are discussing.  But we do learn from the
ethical issues that arise in epidemiological studies.

		DR. FISHER:  So maybe just a point of clarification there.

		DR. CUPITT:  And we learn from the science that arises in
epidemiological studies.

		DR. FISHER:  With respect to section 2, I guess the broad opinion that
is being expressed by the Board is that ethics is not tangential to
science and science is not tangential to ethics.

		And, Larry, although in your presentation you made that especially
clear, I think the Board is resonating to the fact that, for people
outside of EPA or maybe even some of your own investigators, it may not
be as clear.

		And so, to in some sense articulate that and what the criteria for the
science are -- you know, I'm sure we all agree with Suzanne that you
don't want to double the document and have a methodological design
document, but I think the Board is expressing that more is needed,
especially in the areas where we have seen weaknesses in terms of
validity.

		As you said, there is no ethical value if there is not scientific
validity.  And sample size has been an issue, as have sample selection
and population representativeness.

		These are issues that we have seen quite frequently.  And so this is
why we're presenting them.  And if that is a window as to where some
investigators are confused, then, you know, maybe it's helpful for you
to put that kind of information in.

		Also, people were talking about perhaps underscoring the purpose and
value and scope of observational studies.  These are all things I'm sure
you take for granted.  And so it's just that the outside reviewer is
saying, "Gee, I'd like to learn more about that."

		And as for examples of studies that have been conducted, I think
people are saying that examples are always helpful.  Then also, in box
2-1, there could be a little more articulation about justifying the use
of human participants -- the notion that it should be articulated in
terms of why alternative populations or models do not provide the kind
of scientific information that is necessary -- some kind of comment
about alternatives.

		There was also a comment about maybe changing the terminology from
"innovative design" for those areas where you're putting in benefits to
subjects.

		And I had a question for the ethicists.  When you put in at the end of
a study, for example, something that is educative or you do it because
you want to benefit the participant, is that considered part of the
risk-benefit analysis?

		MEMBER CHADWICK:  Yes.

		DR. FISHER:  Okay.  Just wanted to clarify that.  Okay.

		MEMBER FISH:  It wouldn't be happening if it weren't for the study. 
So I think it is a benefit of participating in the study.

		DR. FISHER:  So when you are weighing the risks, you can weigh it
against that educative value?

		MEMBER FISH:  I think so.

		MEMBER PHILPOTT:  Well, to give you an example from the research that
I frequently examine and oversee in HIV prevention:  specific
interventions and knowledge about HIV modes of transmission are
considered an educative benefit to the study participants.  And what we
found is that their rates of infection, even in control groups, are much
lower than in the community at large.

		DR. FISHER:  Yes, Michael?

		MEMBER LEBOWITZ:  The other benefit might be highlighted in the
communication section but referenced here.  And that is the benefit of
providing the individual and their community knowledge about their
exposures and the risks, whether associated with them or not, as well
as, where appropriate, means of risk management or mitigation of
exposures to minimize risk.  And I have always thought of that as a
benefit to the participant.

		DR. FISHER:  Okay.  The other points that were made were in terms of
maybe providing some of the more non-obvious examples of conflicts of
interest.

		And then I think we discussed the adverse events.  And obviously, as
you were saying, it is in section 7.8.  And I'm just wondering if
chapter 2 is the place to relate it to the science, because one of the
issues really is whether you are going to need to identify a particular
toxicity level as an adverse event.  Well, the science has to in some
sense inform that.  So I think this is where you could reference 7.8 but
also link it to the value of the science.

		And then there is the issue of the unforeseen uses of biological
material.  Now, that maybe goes more in the informed consent kind of
chapter.  I'm not sure.  But it was certainly brought up here.  And
there is that tricky issue with respect to whether, at any point, any of
this data becomes PHI, so that it might come under HIPAA.

		And it may never.  Or if it's included in some -- see, as soon as you find the toxicity level and it's reported to a hospital, does it then become PHI?  You may want to address that, or even have a paragraph that says it is still yet to be determined.  You know, I think that Suzanne raised an interesting but perplexing point.

		And the final point had to do with maybe adding in the section on
procedures for falsification, more about preventing falsification.

		Yes?  Okay.  So is there anything I missed from these two?  Yes, Jan?

		MEMBER CHAMBERS:  A couple of us over here are wondering what PHI is.

		DR. FISHER:  Protected health information, which is protected under HIPAA.  And what it is, it is any kind of health information that is going to be used for your diagnosis, your treatment, or for insurance, third party insurance.

		MEMBER CHAMBERS:  Only if you are a covered entity.

		DR. FISHER:  Right, only if you are a covered entity, right, or
affiliated with a covered entity.

		MEMBER CHAMBERS:  And covered entities are those who bill
electronically for Medicare, specifically Medicare?  So it's Medicare
providers?

		DR. FISHER:  Hospitals, practitioners, all those people.

		So are there any other comments on these two sections?  Yes, Dr. Kim?

		MEMBER KIM:  As I mentioned earlier, I think, I mean, this exercise is
all survey sampling.  I mean, what you are trying to get at is the
exposure assessment.  And I think the fact that the whole document is
silent on that very aspect seems to indicate that there is a lack of
recognition because, as I mentioned, the sample size determination
method is completely irrelevant to the types of studies that we are
talking about.

		And there is a wealth of literature and published books on how to do
the survey sampling right from a sort of a statistical sampling
perspective.  None is mentioned, even referenced in this document.  And
I'm wondering about that.

		DR. CUPITT:  Well, we would certainly welcome your input in regard to what you think we should address in this section.

		Roy wants to speak.

		DR. FORTMANN:  Another point of clarification is that these studies
are not all surveyed type studies.  They're not all large random
population-based studies.  Some of these studies are very small. 
They're methods evaluation-type studies.

		For example, we do studies to evaluate methods for collecting time activity data, where we compare written forms, electronic means, video, and so on.  And so for those types of studies, you know, those are purposeful sampling.  And it's a very small number of people.

		I agree we will expand that section and make it clear that, you know,
this document applies to all different size studies and studies that
have different purposes and that require different sampling designs and
sample selection.

		DR. FISHER:  But I think what our members are saying is that sampling
issues are central to this.

		DR. FORTMANN:  Yes.

		DR. FISHER:  And I guess we want more of that centrality, irrespective of the type of population or the type of study that is being done.  If, in fact, a sample is not generalizable, the power isn't there, it's not well-controlled, and it doesn't follow the statistical techniques out there for a priori determining what's going to be needed, as well as planning for subject attrition, then, in fact, the study is not scientifically valid and, therefore, ethically moot.

		DR. FORTMANN:  Right.  And we agree.  We'll expand that section.

		DR. FISHER:  Good.  Any other comments on these two sections?

		(No response.)

		DR. FISHER:  Okay.  I think what we will do is we will give Jerry a
little time to settle down.  And so we're going to go to section 4,
which would be confidentiality.  And then we'll come back to the
protection of vulnerable groups.

		So, Richard, would you begin?

		MEMBER SHARP:  I would be happy to.  We were just chatting here, too. 
If you would like me to proceed with section 3?  I am also the secondary
on that.  Just in terms of sort of order, we could do it either way.

		DR. FISHER:  Okay.  Why don't you do that?

		MEMBER SHARP:  Okay.

		DR. FISHER:  That's great.  Thank you.

		MEMBER SHARP:  So sticking with section 3, then -- this is the one on vulnerable populations -- overall, again, very much like the rest of the document, it is reasonably comprehensive here, a very well-balanced presentation of the key issues that are involved, particularly with regard to the involvement of children in these observational studies.

		My own assessment here was that this was a very moderate position that
was being presented, not extreme with regard to over-inclusion or
under-inclusion in observational studies.

		Two themes that clearly come through in this section of the document have to do with the importance of justifying the participation of any vulnerable population in an observational study and, second, being sure that if the study could be carried out in a population that didn't have those special vulnerabilities, that ought to be done and that preference should be given to that type of study design, as opposed to one that would involve a vulnerable population, which is clearly a theme that you see in the broader research ethics literature more generally.

		Two or three perhaps smaller suggestions in this section.  The first
has to do with more clearly differentiating between what the federal
regulations consider to be a vulnerable population and what might be
considered by many lay persons or many readers of this document to be a
vulnerable population.

		So, for example, economic disadvantage, advancing age, things of that
sort, are widely regarded as vulnerabilities that might impair your
ability to provide rich and robust informed consent as a research
volunteer.  Diminished education is another example.

		A lot of people would consider those to be vulnerabilities, but the federal regulations don't regard them as such -- or I should say that they're not vulnerabilities that are sufficient to classify them as vulnerable populations in the regs.  So some clearer contrast between ordinary understandings of what it means to be a member of a vulnerable population and what the regulations define as a vulnerable population would be helpful.

		The second suggestion I have for this section, to make it a bit more comprehensive, is that with regard to these observational studies that
we discuss as a Board, workplace studies are of special importance,
thinking about the agricultural studies that we talked about at our last
meeting.  And I think this section would be strengthened if there were a
special way of pulling that out and discussing individually the unique
vulnerabilities that come with employment.  And so that was just a
suggestion for an addition, again, in that spirit of making the document
as comprehensive as it could be.

		Last comment here is that there is a bit of a difference with regard
to the citations that are provided in this section in comparison to the
others.  This one seemed like it was not as well-documented as a couple
of the other sections.  So it might be something just to take a look
back at.  And if there are specific recommendations that any of the
other reviewers have, this is perhaps a place where more of that would
be needed as an introduction to a reader who might otherwise not be
familiar with that literature.

		DR. FISHER:  Thank you, Richard.

		Jerry?

		MEMBER MENIKOFF:  Actually, I agree with everything Richard said.  I
thought it was a strong section.  I don't have a lot to say.  And,
actually, this comment may have shown up in your earlier discussions of
this when I wasn't around.

		But a general point: in terms of specifically applying the rules relating to vulnerable subjects to the type of studies that you're talking about here, it might benefit this section to get a little more specific about the sort of issues that have previously come up and, in particular, the broader issue of how being a vulnerable subject might tie in to the question of what is ethically appropriate in terms of duties that we do or do not create toward these subjects.  The fact that yours is a particular type of vulnerable subject may be relevant -- in fact, to some extent it probably is relevant -- in determining what duties we have, in particular, what duties a researcher has to that subject.

		So, for example, if we are talking about children, again, what is
appropriate for a researcher to be doing, for example, in disclosing
risks that exist that the parents and/or child might not be aware of?

		These are relatively specific types of risks that show up in these
sorts of studies.  And the question is, is the fact that we're dealing
with vulnerable populations a specific element to be included in terms
of perhaps modifying those duties?  And I think it might be nice to
address that.

		But other than that, again, I generally think it is a very strong
document.

		DR. FISHER:  Sean?

		MEMBER PHILPOTT:  And I'll be brief by just supporting both the comments of Dr. Menikoff and Dr. Sharp.  I think that section could stand to have a little more discussion, as I think has already been suggested, about what some of the other vulnerable groups are and how they're defined, particularly because vulnerability is often very context-specific.

		I also think that one thing to consider, particularly because of the way that the language is written in citing CIOMS and some of the other documents, is that not only do you need the clear discussion of exclusion of vulnerable populations but also of inclusion of vulnerable populations.

		And there are a lot of people who will argue that, particularly for
some of the studies that you are considering, by excluding particularly
vulnerable populations a priori simply because of their vulnerability,
you may be doing them a disservice because they may have considerably
greater risks of exposure to these compounds.  And you may not have good
generalizable data if you exclude them.

		So, in addition to having clear inclusion criteria, clear exclusion criteria are important, I think, and need to be addressed in that section on vulnerable subjects.

		DR. FISHER:  Thank you, Sean.

		Barry?

		DR. RYAN:  There is a distinct disadvantage to coming last in a group
of individuals because all the good stuff is usually taken by then.

		I would like to make two points.  One is going to seem rather
redundant, but it fits into the next point that I would like to make. 
These are observational studies.  And observational studies, in
particular, exposure studies, offer less risk, essentially minimal risk,
when compared to intervention studies of different types, clinical
trials, or other types of investigation.

		One does not modify the natural exposure but, rather, measures it and perhaps attempts to correlate the effects with observed exposures.  This is the fundamental thing we're dealing with here.  And I want to make that point because, with the emphasis here that we have on observational studies, it becomes less compelling to protect the subjects, as risk is expected to be minimal, but more compelling to include vulnerable subjects to ensure that the effects of exposure on these populations -- the elderly, young children, the pregnant mother and developing fetus -- are understood.

		And it's quite interesting, Dr. Philpott's statement here, because mine reflects it almost word for word.  It is my contention that a failure to include such populations in exposure studies does them a disservice and likely poses more risk than the studies themselves.

		DR. FISHER:  Thank you.

		Okay.  Any other Board comments on this section?  Mike?

		MEMBER LEBOWITZ:  I'm going to go even further on that point.  It is
probably worth mentioning that, in fact, sometimes we actually have to
over-sample such vulnerable populations, often because the "vulnerable
populations" are more highly exposed, as we know.

		This is an environmental justice issue.  And unless we over-sample through stratified sampling or even purposefully look in those populations, we're not actually going to be able, first, to understand the high end of exposure and risk, which is extremely important.  And we're not going to be able to protect the public, as some of our acts require us to.

		So this includes also specifically sick populations with certain
diagnoses as well as by definition of their demographic or socioeconomic
status.  And I think this needs to be emphasized based on the use of the
data so collected by other parts of the agency in setting statutes, et
cetera.

		DR. FISHER:  I think that brings up an excellent point because one of the things I know that I've written about, one of the issues, is making a distinction between vulnerability and research vulnerability.

		So, for example -- because it's what Michael, you know, kind of was thinking -- somebody with asthma is vulnerable, but they're not vulnerable to the research if it's observational and one is exploring how they're affected by their everyday exposure.

		So I think that that is very important to talk about.  On the one
hand, they shouldn't be excluded because they are vulnerable for other
reasons but that they're not more vulnerable just by participating in
the research.

		And the opposite side of that is that if someone is vulnerable because of the research, for example, maybe impaired capacity to consent, then what is the research going to do to mitigate that vulnerability?  So that you may, in fact, have ethical measures that then make that person not vulnerable to that type of research.

		So I think there probably needs to be a little more sophistication so that people are not just categorizing you're poor, you're vulnerable; you're sick, you're vulnerable, and not placing the burden of defining vulnerability only on the participant.  The burden is also, what is the researcher doing to mitigate research vulnerability?

		Richard?

		MEMBER SHARP:  This document doesn't contain an explicit section that
talks about risk-to-benefit ratio assessments.  And so I want to pick up
on a point that Dr. Ryan made about these in general being studies that
are going to be minimal risk studies.

		I would agree with that, but I think we should be a little bit cautious here because there are certainly going to be observational studies, purely observational studies, that will pose more than minimal risk.

		And here in this section, where we're talking a little bit about
vulnerabilities, is exactly the type of situation that we could imagine
where during the course of the study, you also may be observing illegal
activities and things of that sort that may create risks that are
extreme that clearly do go beyond minimal risk.

		There are other sort of less exotic examples, I think, that we could
point to as well.  Steven Wing, for example, has reported on the
experiences of subjects that were in studies looking at hog farming
practices in the Carolina region.  And subjects that were participants
in this mostly observational study were subject to pretty extreme
harassment from persons with commercial interests.

		And so there can be purely observational studies I think that do cross
that threshold and really ought to be regarded as posing more than
minimal risk to subjects.

		So, again, the point here is simply to encourage us to consider there
may be places where there is a disconnect there.

		DR. FISHER:  So I think what you're saying is social risk has to also
be included, that we tend to think about biological or physical risk. 
And then there is social risk.  That needs to be included.  I would also
say that if you can prevent that risk, then, you know, that is the role.

		You know, it is very interesting because if you look at federal
regulations, sometimes if you can predict a person's -- if your
procedures are sufficient to protect confidentiality, then even if
breaking confidentiality could be greater than minimal risk, the study
is minimal risk because the procedures are there to protect.  So there
may be something about risk that you want to put in there as social as
well as physical.

		Other comments?  Yes, Gary?

		MEMBER CHADWICK:  Yes.  Actually, as I was reading this section, I
thought they had done a pretty good job of sticking with the meaning of
vulnerability according to the regulations.  I think that is an
important distinction.

		And I know other guidance documents in other agencies have sort of sometimes floated away from that in talking about vulnerability.  And I think we are doing that a little this morning around the table here, talking about vulnerability as it relates to risk.

		And the regulations are not talking about vulnerability as it relates
to risk, the ethical principle of risk.  It's vulnerability as it
relates to respect for persons and the informed consent process or, as
it says in this document, the ability to defend one's own interest.

		And I think that is the correct emphasis.  It is not so much a risk consideration.  It is, can this individual speak for himself or herself?  Can they protect their own interests?

		And I think I was very pleased with the way the document read and sort
of tried to keep that focus in that document.

		DR. FISHER:  Yes?

		MEMBER FISH:  Gary, correct me if I am wrong, but to expand on what
you said, I think the regulatory language is "vulnerable to coercion or
undue influence" and not the other kinds of vulnerabilities we have been
talking about.

		DR. FISHER:  Good.  So now we have given you many different messages. 
And have a good time.

		Other comments?

		(No response.)

		DR. FISHER:  Okay.  Let's go to section 4.  And that would be Richard.
 You begin there as well.

		MEMBER SHARP:  Okay.  Just think if we didn't think that the document
was good what we would be saying.

		DR. FISHER:  Yes.

		MEMBER SHARP:  So this is the section that has to do with privacy
concerns, particularly the privacy concerns that relate to the conduct
of research that is done in private or semi-private settings, like homes
and schools, nice discussion of the issues that are involved there, also
nice discussion of a distinction that exists between two different types
of privacy threats.

		If I invite you into my home and you happen to witness me engaged in
some behavior that I didn't authorize you to see, it's a privacy threat
that involves my interests.

		If I invite you into my home and you witness my family doing something
that I didn't expect you to see, it's a privacy concern.  But it's not
my privacy concern.  It's their privacy concern and so a nice, important
distinction I think that is drawn in the document, at least implicitly
there.

		So there is nice sensitivity to several different matters, first
having to do with the disclosure of incidental findings, you know, that
you happen to witness some illegal behavior and then you are required in
virtue of having witnessed that behavior, to report that to some
authority, nice discussion of the provision of advance notification of
research visits so that others, third parties, that might be residing in
homes and schools, would know that research staff is coming to visit
those venues in advance, and nice discussion of the possible harms that
might be associated with the public display of personal monitoring
equipment as well, again lots of nice things here.

		Turning, then, to the stuff that I thought was not as well-developed, I thought the document was not quite as sensitive to issues having to do with the protection of innocents, people who are innocent in some way who might be abused or victimized in some ways, and that that behavior might be observed by members of the study staff.

		I think it would be nice, for example, to take a very firm position on the idea that members of the research staff should be trained in how best to observe incidents of child abuse or elder abuse so that when they have the opportunity to observe that type of behavior in these private environments, they recognize it for what it is, both to avoid false accusations and to recognize that these are opportunities for intervention.  So a clearer, more explicit statement on that, in terms of health advocacy, I thought would be an important addition here.

		In addition, I thought the document didn't say enough with regard to
the observation of situations that we might regard as associated with
imminent harm.  Okay?

		So, for example, if you see a set of combustible materials next to an
open flame or a space heater, if you see a child playing by a pool
unattended, things of that sort, what is a member of the research staff
supposed to do in a situation like that?

		And presumably, even though this is being billed as observational,
that would be a context in which the concern for that innocent person
perhaps, again, to use that language, would be sufficiently compelling
that you would want to see the staff member sort of go beyond that and
take a more active step in terms of providing protection.

		So, again, I thought the document could be strengthened by including
some more specific advice for addressing those types of situations and
offering some guidance for what members of the research staff should do
in that type of environment.

		Two smaller points.  I think if your audience is an audience, in part,
of individuals who might be conducting this type of research, it might
be nice to draw attention to the potential risk to members of the
research staff who are actually engaged in this type of study.

		Again, from personal experience, I have seen studies in which members of the staff have encountered violence when entering other homes.  And so this is a situation that you may want to alert the investigators to as they are preparing to design these studies, and also alert them to the moral burdens that come with being in an environment that they may regard as deeply troubling from an ethical point of view.

		If you are present in a situation in which you feel that there is some morally objectionable behavior taking place, knowing about that possibility and counseling members of the research staff about it in advance I think is an important thing as well.

		So that is all that I have.

		DR. FISHER:  Thank you, Richard.

		Let's see.  Sue?

		MEMBER FISH:  Thanks, Celia.  Thanks, Rich.  You have taken away most
of my thunder.

		While I agree with Dr. Sharp's recommendations and the points that he
made, I thought that you at least did address a number of those points. 
Although they can be expanded, I commend you for addressing them at all.
 And I was very pleased to see them in there.

		I only have one small point to add.  And that has to do with the
section on certificates of confidentiality.  And I think that that was
really good to put that in there.  I'm on page 42, just above the next
section, where you say that "The certificate of confidentiality does not
diminish, however, that investigators need to protect the personally
identifiable information as described above."

		I think it would be good there to reiterate that it also does not relieve legal requirements for mandatory reporting.  Although you do discuss that in another section, I think it is important to state that the certificate doesn't protect against that.

		And other than that, my comments were all the same as Rich's.

		DR. FISHER:  Lois?

		MEMBER LEHMAN-McKEEMAN:  At this point I am inclined to say "Ditto,"
but I just want to add one additional thing, which I think probably
reflects the level of my ignorance, as it were, around some of these
issues.

		And building on Sue's comment with respect to certificates of confidentiality, I admit and acknowledge that I had never heard of such a thing.  And given that, what I don't understand in reading this document is when I would, in fact, use something like that.

		So the document says that it can be used for sensitive matters.  I
have no idea precisely what you really mean by a sensitive matter.  Nor
do I know when I read this when I would execute a certificate of
confidentiality versus when I would opt not to.

		So all I was looking for in this section, again, with the
clarification that this was new information to me, was when I would
actually use such a device.

		DR. FISHER:  And you could just quote the regulations because they are
very clear.  And I do think it's helpful for people who haven't used it
before to understand who the populations are that come under this as
well as what you have to tell the populations in the informed consent.

		DR. FORTMANN:  Right.  And that's easily referenced by the Web site.

		MEMBER FISH:  The kiosk, the Web site on certificates.

		DR. FISHER:  Okay.  Any other comments on privacy and confidentiality?
 Yes, Jan?

		MEMBER CHAMBERS:  I'm going to claim naivete, just like Lois did,
because I don't know how these studies get conducted very well either. 
But when you bring up third parties and all, who all gets informed that
this is going to be done?

		I know you have to get consent from the person who is going to actually be studied.  But what about all the other members of the household, the landlord, all these sorts of people?  Do you get permissions from those before you go into a given environment?  And if not, does that need to be considered?

		DR. FORTMANN:  It depends on the study.  The whole issue with
landlords and third parties is very difficult.  And it is an issue which
we address.  Actually, some researchers will exclude people in rental
properties simply so they don't have to deal with the landlord issues.

		And so it is very complex because if you find something, a situation,
in the property in which the renter, for example, is exposed to an
elevated concentration of something and the landlord finds out about it,
then you have a very negative impact on the person occupying the
residence.

		The issues are very difficult.  One of the things we say in here is
that we can't provide solutions because all of these studies differ. 
And that is one of the issues.

		And this is also where we try to emphasize that you need to go to your
community when you design these studies and get feedback from the
community as to how to handle these types of issues.  So the community
may have information that will be useful to you as to whether the
landlord of this 100-unit apartment building should be informed that the
study is going on.

		So, you know, I hate to punt on this, but --

		DR. CUPITT:  I remember one study a very long time ago when we tried
to do a workplace analysis.  It was in EPA, actually.  And we planned to
go up and do some measurements indoors, which is a part of our normal
measurements.

		And, lo and behold, that day the ventilation in that building was so high that everybody was cold because it was cold outside.  And they weren't heating the air very well.

		So you have this problem from both perspectives.  You may not get a
realistic idea of what the exposure is.  And, yet, you know, landlords
can do what they want to do.

		DR. FISHER:  I'll ask the ethicists again, but one of the issues
really is, who is the participant?  And how do you define that because
federal regulations are really aimed at participant protections.

		And so in some sense the informing and protection of third parties becomes almost a discretionary issue that may require more guidance, but at the same time we have to be careful because, within the regulations, they're typically not included.

		And so in some sense, what might be required is a priori identifying the unit: is it a unit of participants, as a family might be, that then might require everybody's consent, or is it a single participant?  But I think it has to be within the regulations; you can't require an investigator to do something that to some extent they are not required to do.

		Yes?

		DR. CUPITT:  And specifically to that point we have included under the
third party issues that part of the planning and part of the review is
you have to determine if a third party is a human subject in the study.

		DR. FISHER:  And for emphasis -- tell me if I am wrong, but exposure to risk, you know, that is not tied to experimental methods may or may not make that person a human subject, right?  Yes, Rich?

		MEMBER SHARP:  It may not make them a human subject, but it may not
diminish the need to protect them against that possible harm.  So if,
for example, the risk to them is a serious one and it's irreversible,
maybe it's the risk of contracting a disease secondary to something like
a vaccine that's administered to another party, a research subject.  It
still may be the case that that individual has to be notified about that
and alerted to the potential risk.

		So I think we can separate out the need to protect consideration from
the need to consider them as a research subject, two separable issues
for me.

		DR. FISHER:  Well, for me the question really becomes, in a document that's not guidance but something else, how much do we require an investigator to do that is not in federal regulations but would be good citizenship, so to speak?  And I think there are boundaries there in terms of what is aspirational versus what should be enforceable or standard.

		I think that different studies, different investigators have a range
of opportunity.  In some studies, you really don't have the mechanisms,
the opportunity, or even the field staff to evaluate appropriately what
those risks are.  And, therefore, it might be inappropriate, too.

		So anyhow I think it may be that if this comes up that a distinction
between aspiration and what may be required under regulations would be
important.

		Sean, did you want to --

		MEMBER PHILPOTT:  Well, and I would even cut it a little finer in that
I think there are three levels here.  There is what is legally required.
 There is what is ethically obligated of the researchers.  And then
there is what is morally praiseworthy, where it may not be unethical not
to do something, but you really will sort of look at that person askance
next time you see him in a meeting.

		And I think it is very hard to draw bright lines at times other than
what is legally required and what becomes aspirational, as you so
carefully put it.

		DR. FISHER:  Sue?

		MEMBER FISH:  And also I think, Sean, you said this much more
eloquently than I could, but I also see this partly as a best practices
document.  And I don't know if that's how you intended it, but in the
sections that I read most carefully, I saw some of this as well beyond
average and really as a best practices, which is the morally
aspirational.

		PARTICIPANT:  Morally praiseworthy.

		MEMBER FISH:  Morally praiseworthy.  That's it.  Thank you.

		DR. FISHER:  I also want to put in here that we do have to be careful because not all states -- you know, states are very different in terms of their child abuse reporting.  And some don't even have elder abuse reporting.  In some states all citizens are required to report.  In some states scientists are considered citizens.  In some states scientists are actually named in the regulations; in some they're not.  It's just, you know, practitioners, school teachers, et cetera.

		And so, you know, I do think it's important, as in other things that we have all dealt with, to understand what is legally required.  But whether or not you are a mandatory reporter is also something important to understand within the state that you're working in.

		DR. CUPITT:  We didn't get into the details of what each state required simply because we couldn't give the answers, but that is indeed part of what we were talking about or trying to get to, as we discussed in section 4.5 on data and safety monitoring and oversight.  It has to do with looking at some of these other issues that we are certainly ethically compelled to address.

		And certainly we did intend this document to be best practices. 
Indeed an early title was "Best Practices."  So we were looking at doing
those kinds of things and at least raising the issue and providing a
mechanism by which we would consider how to effect that in any study.

		DR. FISHER:  Any other comments on confidentiality on this section? 
Should we go on to a more -- I think we can go on to more sections. 
Yes?

		(No response.)

		DR. FISHER:  Okay.  Creating an appropriate relationship.  And that
would be Gary.

		MEMBER CHADWICK:  Okay.  Again, I think this section is not unlike any of the others.  It is well-written.  I have maybe 10 or 12 specific changes that I would like to see made that I will probably give you offline because I don't think that we really need to get into that.

		There are a couple that deal with use of the term "compensation" when you really mean payment or remuneration, or when you really mean inducements.  And I think it's good to call it what it is, particularly when you're trying to talk to researchers and have them consider the ethical considerations that flow from those.

		And, again, I think in government writing, in my own personal writing,
I think we have to guard against using the 50-cent words when the 5-cent
word is clear and better and that sort of thing.  And I think we do tend
to use kind of useless words and platitudes and that kind of stuff.

		So I would just recommend that as you are re-reviewing this and so
forth, you really think about the words that you use in this document
and do they add anything, do they really mean something.

		I think that is where we were getting the talk about the difference
between scoping and planning.  I mean, is there really a functional
difference there?  Can one word suit, instead of the two words?

		I thought this section accurately and clearly discussed the
considerations in the relationship between investigator and participant.
 And I thought, actually, that this was a major strength in this
particular section because it did focus on researcher responsibilities
and gave I thought some pretty good guidance, at least again as far as
things to think about, as opposed to giving answers for the researchers.
 So I thought this was particularly excellent in that area.

		As far as perhaps some weaknesses in this area, I thought there were
too many discussions, descriptions, examples, whatever that were correct
but really applied to general research or other types of research.

		And I think, again, the focus of the document is intended to be
observational.  And I think if we can remember that and when we're
giving examples, to give examples that relate to observational research
and not other types of research.

		For example, one of the areas that I thought this was particularly
rampant in, if you will, was section 5.1.1.  Hear me, though.  I'm not
saying don't take those general things out.  All I'm saying is, again,
make it a better document by giving us more examples, additional
examples, that, in fact, are relevant to observational studies.

		Sort of in a related vein, there are several statements in the
documents that are fairly global, like additional considerations arise
when you do yadda yadda, or numbers of issues have been identified.  And
then there is no real explanation or expansion about so what.  Okay?

		So, again, when you are reading through and looking at this, look at
the sentences and ask yourself, "Does this tell me something?  Does this
mean something to me?" or, else, just take them out.  I mean, why say,
"Additional considerations arise" if you're not going to give us a hint
about what that is?

		I thought you had some good examples of summarizing and getting key
points across in references and so forth.  Text boxes 5.2, 5.3, and I
think appendix C are good examples of not repeating things verbatim but
to say, "Here are the key points.  Here are the things to think about."

		Again, it's a guidance document, as I think Steve was implying,
guidance being useful, as opposed to guidance coming from an agency. 
Okay?  I think this is a very useful, nice document, and I would like to
see more of the direction that you're going on.  I think it's great.

		One of the comments that I think I made earlier was that in some cases
in this particular section, I noticed that it sort of takes the tone of
reporting on what others have said, as opposed to saying, "This is our
guidance.  This is the way we want to see it happen."  Okay?  I mean,
that is what this is intended to be.

		So let's step up to the plate and say, "Hey, right or wrong, here is
our impression.  Here is the way we think we ought to see things done."

		I do think that when you are looking at other sources to inform this
particular section, one of the places that you might look for is
community-based participatory research literature.

		I think there's a fair amount of that out there.  There's more and
more every day.  It's kind of a hot area right now.  So there are some
good things coming out and some thoughtful pieces about, what does it
mean to go into the community; i.e., homes, and do observations in a
research vein and so forth?  So I think that is an area that you might
want to take a look at, do a little more in-depth research.

		And I think there is even some information that you can get from the
good clinical practices literature, the GCP literature that FDA and
others have talked about.  Again, there are general things that all
investigators ought to think about that you could drop into this and
then make it relevant to if you're doing observational studies, "Here is
how this applies."

		I have got some other things.  I will provide you with specifics, but
I think that's --

		DR. FISHER:  Thank you, Gary.

		Jan?

		MEMBER CHAMBERS:  Let me add my compliments to those others that have
already been said that this really is a very, very nicely prepared
document, obviously a lot of work in it and very well-written.  I
particularly like the text boxes as capsulizing some of the concepts in
a very succinct manner.

		I can't add just a whole lot more.  Again, the document is titled
"Scientific and Ethical Approaches."  And the science was a little on
the light side, it seemed.

		Also, a couple of times it's been mentioned that it's long.  It's long
already.  And if you take all of these suggestions, it's going to be a
whole lot longer.  And you need to be a little cautious that it doesn't
get so long that nobody is going to ever look at it.  I think Suzanne
said that earlier.

		A couple of things I picked up on in this particular section are how
do you describe a study without influencing the behavior of the
participants.  It seems like that is a real challenging question.  And I
didn't really see any answers.  And maybe there aren't any.

		There is a statement in there that additional comprehension testing
should be considered, but I didn't see any guidance on what that meant
or what that was supposed to be.

		If the ultimate decisions here rest in Warren Lux, really, the
ultimate decision --

		DR. LUX:  They don't.

		MEMBER CHAMBERS:  No, no.  No, no, no.

		DR. LUX:  These studies are all seen by IRBs.

		MEMBER CHAMBERS:  No, no.  I know.

		DR. LUX:  And I have a supplementary role, but --

		MEMBER CHAMBERS:  No.  I know that.  No.  Really, what the question was is, if the ultimate decision for any given study rests with an IRB and those are all independent -- not independent IRBs but separate IRBs -- how consistent are those studies going to be in terms of the level of description, the amount of remuneration that they're allowing, that sort of thing?

		And so, you know, kind of back to the earlier comment, does Warren
Lux's office then sort of have some leveling approach on all of that or
are you going to end up with varying levels of some of those IRB-type
issues?

		And then the last thing I picked up on is encouraging retention by
providing feedback.  I didn't actually see any explanation of that
either.

		DR. FISHER:  Thank you, Jan.

		Let's see.  Does anybody have notes from Alicia?

		(No response.)

		DR. FISHER:  Okay.  And Michael?

		MEMBER LEBOWITZ:  I'm going to start with really strong stuff.  I
mean, I thought from my standpoint this section actually clearly and
sufficiently explicitly delineated the ethical relationships between
investigator and participant, including within the context of the
participants' community, et cetera.  And one assumes the community also
includes the social, cultural aspects -- the term does -- in which the investigators are competent and respectful.

		The subsection relating to the ethics, et cetera, of remuneration is highly appropriate and well-written.  Likewise, subsection 5.3 on rights is well-done.  Creating a supporting environment, section 5.4, as defined, is very useful and well-stated also.

		Subsection 5.5 provides good discussion of equitable selection and IRB
guidelines, text box 5.3, again, in my opinion, and that's for selecting
subpopulations for study from the ethical standpoints.  Subsection 5.6
on retention issues and ideas, especially in longitudinal studies, is
useful and generally well-done.

		I had some questions.  And I don't think they were all very important
or critical, but I did have some about can the authors define what they
mean by "a strong relationship" and what is a strong "scientific
relationship" or is this just terminology thrown in there?  That is an
aside.

		Are there specific OMB guidelines on remuneration that could be
included?  I know when we did a cooperative agreement with NERL that we
went to OMB and they had very specific guidelines and requirements that
we had to follow.

		Should the participant grievance procedures include also any component
of EPA or the IRBs who approve the study?  Those are questions.  I don't
know the specific answers.

		Okay.  Right now there are some weaknesses.  The subsection on equitable selection, 5.5, does not address the scientific necessity of sometimes including over-sampling, via stratified cluster methods, of subpopulations, including the under-represented, the vulnerable, and/or the overexposed.  Thus, there should be a similar delineation of when such sampling and recruitment are necessary, similar to text box 5.3, which discusses those needs from an exposure science standpoint.

		For instance, it is not surprising or inappropriate to study minority
poor children when evaluating exposures to lead, pesticides, et cetera,
under laws, existing rules, statutes, et cetera.  Thus, some of the
environmental equity and justice issues need to be discussed here.

		Community advisory boards, CABs, need to be involved also in approving
recruitment materials, in my mind.  Further, 5.5 doesn't discuss the limitations, especially statistical, in regard to representativeness and generalizability of non-random sampling of some kind; i.e., the scientific problems inherent in convenience, purposive, or other similar sampling methods.

		There should be a parallel subsection of 5.7 that extols the benefits
of longitudinal follow-up for the participants' communities and
responsible agencies and the decrease of risk that may be attained
therefrom.

		In regard to my response to the charges, I think this section identifies most of the major areas and issues where ethical considerations should be addressed.  Questions and limitations are found in my discussion above.  There are additional sources of information that should be considered for inclusion in the section, some of which I have referred to before:  the Academy of Sciences 1991 report, a WHO EHC 27 monograph, statistical survey sampling textbooks, HSRB discussions of purposive sampling, which you probably have not seen, EPA and NIEHS documents on environmental equity and justice, and other references contained in this report on community and CAB involvement.  Except as noted, the information presented in this section is presented accurately and clearly, in my mind.

		Thank you.

		DR. FISHER:  Thank you, Michael.

		Other comments on this section, section 5?  Yes, Dallas?

		MEMBER JOHNSON:  I actually hadn't read this section yet.  And so it
has been very interesting, but I think that some of the comments there
that are written there about recruitment and so on could be better moved
to section 2.  But I don't know whether you thought about that or not.

		DR. CUPITT:  May I?  Yes.  We recognize that some of the issues are
relevant to section 2, but they are also relevant here.  They are also
relevant, it turns out, in the community section, which is the next
section.  And trying to make these sections so that they cover
everything without being too long of a document has been a real
challenge.  And in some cases, we just try to reference those other
areas.

		So I recognize that and appreciate it, just recognize it.  Pulling it
off is going to be tough.

		DR. FISHER:  Other comments?

		(No response.)

		DR. FISHER:  Okay.  So I will summarize what we have said so far from
section 3 on.  And then we'll go to lunch after you comment on my
summary.

		So with respect to vulnerable populations, there was a recommendation
to differentiate, which you did but perhaps more so, between federal
definitions of vulnerable populations versus other definitions and to
kind of highlight that within federal definitions, it has to do with
being able, I guess, to defend one's own interest or the kinds of
phraseology that Gary and Sue used, also that the justification of vulnerable populations needs to be emphasized, but also the over- and under-inclusion, which is really the important part with respect to what you don't want -- you know, the phrase has been used, "research orphans."

		And I think giving examples where the average lay person would think,
"Oh, my God, you know, putting poor people into a lead observation
study, how terrible."  Well, you know, if you didn't put them in and
they are the people that are living in homes with peeling lead paint,
then you are doing them a great disservice, so I think more, you know,
examples that are really addressing the issues that come up for the
population and making those types of distinctions.  And I think, as
Barry and others have pointed out, that this is observational research. 
You're not exposing people to greater risk.

		So issues of how you mitigate vulnerability and distinctions between a
vulnerable population and a population that is vulnerable in research
would be helpful.  Workplace-based studies, maybe expand a section on
that.  And if there are citations or better documentation for this, some
people noted this might be a section weak in documentation.

		In terms of privacy and confidentiality, one cluster of points made by
the Board had to do with staff training issues.  And maybe you want a
section called that because that is where we are talking about how do
you train individuals to identify abuse if they have to so that they're
not over-identifying or under-identifying.

		How do you prepare them for the type of procedures that would be
necessary to report?  How do you prepare them for
anticipated/unanticipated harms and what the protocol would be if
something happened, a child fell in a pool or something like that, and
protocols for protection of staff, especially in some of these in-home
visits?  So that may deserve its own section.

		Just expanding a little bit on certificate of confidentiality, I think
we are always surprised that there are people who don't know what that
is in the area who are working.  And so I think, you know, as Lois
pointed out, it is important to refer and maybe even define at least
that very simple definition as well as how it has to be put into
informed consent.

		And also I think it was Sue's point not to diminish.  It doesn't
diminish the need to protect confidentiality.  It goes hand in hand.  I
mean, the certificate is actually protecting the researcher from having
to release confidential information but doesn't negate his or her
responsibilities.

		Otherwise, on issues of third party consent, maybe tying it into regulatory versus aspirational is probably important, and then the kind of framing of the confidentiality issues into legally required, ethically required, and morally praiseworthy.

		And I think some people have pointed out that if, in fact, these are
best practices that you are suggesting, then if you feel comfortable, be
more explicit.  This is what you expect, even if it goes somewhere
beyond regulations.  Given it's not binding, I guess it's not guidance. 
And I don't know if guidance is binding.

		Okay.  And then for section 5, clarify the distinction between
inducements.  I think Grady has a really good article on that, right,
that distinguishes between inducements, compensation, all those
different phrases.  And you may not have to do that but at least be
consistent and figure out which one you're actually talking about.

		Others have pointed out the question of when these add-ons -- like, Lois, you mentioned education or whatever -- become a form of inducement.  And maybe that needs to be discussed.

		There need to be more examples in section 5 on observational studies
themselves.  Some terminology you might want to reconsider, like Gary
was pointing out, additional considerations.  What should one do?  Maybe
you want to eliminate that or explicate what one should do.

		How do you describe a study without influencing the behavior of
participants?  That's a tough one, you know, whatever.  I don't know who
wants to write that.

		Then there was something about comprehensive testing.  And I missed
it.  But Suzanne or Jan mentioned something about comprehensive and that
it wasn't clear I think what that term might mean or somebody mentioned
it.

		MEMBER CHAMBERS:  Comprehension.

		DR. FISHER:  Comprehension.  Oh, I'm sorry.  Okay.  Got it.  Got it. 
No wonder I didn't get it.

		(Laughter.)

		DR. FISHER:  I didn't know what it meant.  Okay.  Then there is the
issue of consistency of IRBs.  And, I mean, as we all know, IRBs are
inconsistent.  But supposedly if they're following regulations, then
there is a baseline of protections.

		And I think one of the things that this document and maybe you want to
emphasize is that this document can increase consistency of IRBs because
they can refer to this document as well in terms of best practices.  So
I don't know if you want to go there but whatever.

		And then what is a strong relationship and whether you want to keep
using that language, what to do about a participant grievance.  I think
that was a good point.

		I mean, typically in an informed consent it's the IRB that they go to for a participant grievance, but some people may not know that.  So it may be important to articulate that.

		I'm not sure.  Mike was talking about recruitment over-sampling via stratified cluster methods.  I'm not sure whether that is the kind of thing section 2 emphasizes, or whether it belongs in section 5, or the benefits of longitudinal follow-up.  I mean, I think what everybody is pointing out is that 5 is like this grab bag chapter where it overlaps with what you think would go into 2, what you think would go into 6 and 7.

		And so the question is, do you want to put into your other chapters
what is in chapter 5?  But that is obviously up to you.  But it seems as
if people reading it are either asking more questions that are then
going to be in another section or think they could have gone somewhere
else.  So, once again, I understand it is very hard to write that, but
that was the recommendation.

		So any other comments thus far about where we are?

		(No response.)

		DR. FISHER:  Okay.  So I think we can come back at, let's see -- okay.  So we will come back at 1:30.  And then we'll probably need another half hour because there are only two more sections, and they are related.

(Whereupon, a luncheon recess was taken at 12:46 p.m.)

	A-F-T-E-R-N-O-O-N  S-E-S-S-I-O-N

	(1:35 p.m.)

		DR. FISHER:  We're almost done, Larry and Roy.  I'm sure you will be
happy.  So we are on to section 6.  And we are going to start with
Rebecca.

		MEMBER PARKIN:  All right.  Section 6.  I'm not even remembering the
title now.

		DR. FISHER:  "Building and Maintaining Community and Stakeholder
Relationships."

		MEMBER PARKIN:  Terrific.  This building and maintaining certainly is
a very important combination.  And I was glad to see that in the title
of the chapter because establishing relationships and not maintaining
them can do more harm than good, not just to the community but to the
agency.  So I thought that was a very appropriate title.

		I have a number of comments that are specific to this chapter, but
some of them are over-arching.  And I will try to point out the ones
that I think have some impacts on other sections of the report as well. 
And I really have four major points that I wanted to make on this
particular section.

		One is this section has identified many important areas and issues
that need to be considered in addressing ethical aspects of
observational exposure studies.

		Examination of the many ethical issues raised in this section suggested to this reviewer, however, that the document may benefit from a text box or table, something like table 1.3, summarizing all or the major ethical principles noted as essential for observational exposure studies.

		The reason I got into this is when I was looking through section 1, I
started a listing for myself of the various ethical principles that were
stated there.  And then I was looking for them in the rest of the
report.

		Okay.  So they say these things are important.  Well, where do they
show up?  Later in the report.  And there were some that didn't show up
again.  So I thought, well, okay.  Maybe it just got missed in editing
or it got edited out by mistake or something.

		The list became quite long because what I was seeing was both moral
principles as well as best practice principles.  So the list became such
a mixture that I lost a sense of what the real intent was in terms of
how you saw the ethical part of this report.

		So that was one reason I was starting to think about a table or a text
box, something that clarifies what the boundaries of ethics are in your
interpretation of that term for this report.

		And our discussion here has led me to believe that maybe more of what
you really want is best practices, although certainly the foundational
ethical principles of research are in the report as well.

		So that is where this comment is coming from.  And I came up with the
idea that perhaps an advantage for such a summary would be that it would
help your reader set the stage at the beginning of the report, perhaps,
as someone else said, could be an element at the end of each chapter, or
it could be a closing chapter that recaps all of this in a way that,
again, helps your reader pull it all together.

		So that was one thing that troubled me as I was in this chapter.  And
I went through, and I found a lot of the underlying principles that were
mentioned earlier in the report or in chapter 6.  So there is a lot of
good foundational ethical stuff in there.

		The second point I want to focus on is another point that I found in
several parts of the document, not just in section 6.  And that is a
distinction between community and stakeholder.  I think this distinction
could be clearer because stakeholder descriptions on pages 70 and 74
really are not quite sharp enough.

		For example, it says that stakeholders cannot speak for communities. 
Well, the reality is stakeholders can speak for communities, but they
may not be seen by communities as legitimate spokespersons for their
interests.  So it's a question of ability and legitimacy.  And I think
legitimacy is not often spoken to in this document.

		And it may be that particular characteristic that distinguishes
between communities and stakeholders.  I am not sure.  I am not sure how
you would tease it out.

		The other thing is the key issue for me is, you know, has the
community actually or officially delegated any of its spokesperson
responsibilities to some other party, some other "stakeholder"?

		So I like the definition of community.  I thought that that was a
reasonable definition for this particular purpose.  But I think there
needs to be some work on stakeholders.

		Similarly, in the document on page 74, there is reference to other
stakeholders, but because I don't know who stakeholders are, I don't
know who you have already talked about and who these others might be. 
So there was another place I had trouble in reading this particular
section.

		Third point.  Forms of communication -- meaning channels, as communicators usually call them -- are not discussed in this section.  Just as the level and type of language used is important for communications, the forms of communication must align with communities' preferences for receiving and exchanging information.  That point is never made.

		What is the point of communicating in a written format if you are
trying to reach people who have no written abilities?  They don't have a
written language even.  We have some populations that don't have written
languages.

		So we need to find that linkage between the communities' capabilities and preferences and the ways in which we interact with them.  Understanding and using the ways in which communities want to receive and share information are essential ways of demonstrating respect for communities' interests and showing that their input makes a difference.

		This is another place where I would also talk about pilot testing of communication tools and content.  Pilot testing came up once, I believe, in section 5.  It comes up nowhere else that I found in the report.  But the concept of pretesting -- using empirical strategies to make sure that the communications you want to pursue are, in fact, the ones that are going to work -- is important.

		I mean, there is evidence out there that says that kind of empirical
testing is absolutely essential for successful community relationships. 
So I would interject that on here.

		The fourth point I want to focus on is the point that one element I
never really saw here in the document is that relationships are dynamic.
 Whatever the relationship is that you establish before the study begins
isn't necessarily the relationship you're going to have by the end of
the study or in the follow-up period.

		We also need to be attentive to how relationships change as you move
through the process.  It's wonderful to engage communities early.  It's
wonderful to have them part of the process.  But we also have to be
attentive to how those relationships change as experience develops
between the participants and the communities and the researchers.

		One tool I have found helpful in monitoring the dynamics of a relationship is a reference that's cited in here, and that's Mitchell, et al.  Mitchell's is a fairly well-known stakeholder algorithm that's been used a lot in business.

		It's been used in a lot of other sectors, but I have not seen it used
much in health, although I have used it and found it very effective. 
Looking at the three characteristics of stakeholders that they list -- power, urgency, and legitimacy -- is really very helpful in understanding who your key stakeholders are and who your peripheral stakeholders are.  And that status, whether they are key or peripheral, will change as the content of the relationship is moving, as the event is changing.

		So as urgency maybe becomes a priority because there has been a press release that has scooped the results, you may have completely different stakeholders that you have to work with in a very different way than you would have through the rest of the study.

		So there are ways in which the dynamics can be monitored.  And a
strategy can be built saying, "These are the tools we are going to use
to monitor our relationship as we move through this project" and be
responsive to those dynamics but not be rigid about our relationship
just because on day one, we said we were going to do X, Y, and Z but to
keep a dialogue going so you know if expectations change or roles
change, that you're all agreed to what they are.

		Further, I would just briefly mention -- I've got some other specific
bullets for more minor edits, but there was another section on page 69. 
And I found one point on 71 where I thought the writing could be easily
misinterpreted.  And I wasn't sure whether I was reading it the way the
authors intended or whether it really will benefit from rewriting.

		For example, on page 69, lines 6 through 8, it talks about community
advisory board members.  And this is the text, "The members have to be
educated.  They should represent their communities honestly.  They need
to be willing to interact."

		If I were a community member, I would think to myself, "I am not
educated.  I am not honest.  I am not willing.  What are they saying? 
What do they say?  What do they think community advisory boards are?"

		So I think that it's maybe just the specific choice of words, but I
would hope that the rewording will be something that doesn't sound
potentially prescriptive or judgmental.

		So there were a couple of places like that that I have noted in my
comments that I thought a rephrasing would probably be beneficial.

		DR. FISHER:  Thank you so much.

		Next is Germaine.  Germaine?

		DR. BUCK LOUIS:  Okay.  Just a few additional points from what Rebecca
has already given us.  And I just wanted to disclose two things up front
because this is the way that I viewed things when I read through and
evaluated chapters 6 and 7.

		And one is, in virtually all of the chapters, I think the statement that bad science is unethical is probably stated throughout the document, which immediately implies that good science can never be unethical.

		And I think we would argue that we can design something that is
scientifically valid and reproducible that is unethical with regard to
burden, demands, or things of that nature.

		So, actually, when I read that sentence every time, it sends chills up
and down my spine.  It would be one that I would suggest totally
removing from the document.

		Also, when I read through the science and the ethics for observational exposure studies, to me that denotes a non-experimental design in terms of this whole notion of typology.  And I think the regulatory definition of observational design is consistent with a non-experimental design in that the investigator does not control or allocate, in whatever manner, the treatment or intervention.

		So, with those two disclosures, a couple of things that I thought were
surprisingly missing from the section -- and it is always easy to say
the things that are missing.  So there are many things that are there. 
And, in fact, I have a very nice write-up of the positive aspects of the
section.

		Clearly the attention to stakeholders in both chapters 6 and 7, quite
frankly, is balanced with regard to community and other issues.  And Dr.
Cupitt had mentioned that they were aware of that.

		But implicitly it gives the impression that it may not be as important.  And I know that is not the intent here.  And it probably will come back to you in the form of public opinion or commenting.  So that really needs some work.

		I really like the community advisory boards.  And this is an area of
research where I think there's actually evaluation work available.  And
especially I'm thinking about the cofunded EPA and NIEHS environmental children's centers and a whole host of other environmentally relevant, at least epidemiologic-designed, studies in which community advisory
boards have been used and evaluated.  And I think it's important to at
least have a few references to the lessons learned from those.  Given
that this is an advisory type of document, I think it's very relevant
here.

		I just wanted to note I know people like to talk about reading
comprehension and what is a good grade level, and IRBs will chime in on
this further.  But there are software options available.  You can design your
data collection instrument, and it can actually measure reading
comprehension in grade levels.  And you might want to think about that.

		Section 6.1.1.5 on cultural differences, I think I would suggest a
little bit of caution using race and ethnicity as the only examples of
cultural differences.  And some people would argue that really may not
even be cultural.  So either list no examples or maybe think about other
types of examples so that race and ethnicity are not the only thing that
we're picking on.

		One pet peeve of mine I will fully disclose is this notion that when
we tell participants we're going to give them the data, that they're
entitled to see copies of the papers or things like that.  And then it's
years and years and years.  And we say it's because that's how long it
takes for research.

		But I think increasingly, at least from the NIH perspective, which
requires data-sharing plans in all supported grants, both intramurally
and extramurally now, there is some attention with regard to how long
you can leave your study participants lingering for the results.

		And I would argue that, at least in observational exposure designs,
this is even more critical because if you are truly measuring exposure
and if it takes considerable time to give those data back, you could
argue that all groups that were participating have additional time for
added exposure, which may need to be considered in sort of that
risk-benefit ratio in terms of whether or not people should participate.

		I certainly wouldn't want to say what that timeline should be, but I
think there needs to be some suggestion of what is a reasonable length
of time.

		I guess those are just what I would consider minor issues.  The one
major substantive issue that I was struck by reading this section -- and
it scares me a little, I have to say -- is that as a researcher, it is
quite implicit that the expectation is for the researcher to become the
advocate for the community.

		And while this is an over-arching issue I think in science in general
given the emphasis on translational research, whether it's basic,
whether it's clinical, whether it's epidemiologic, I read that chapter.

		And I kept thinking, is the EPA, number one, training, preparing, and
expecting its researchers truly to become advocates for the population? 
And I think it begs the question, whether or not this is ethical
conduct, moral conduct, being a virtuous investigator.

		But it is there.  And for folks that will strongly advocate for that
position for EPA investigators, I think you have given them a little bit
of language to work with.  And that's all.

		Thank you.

		DR. FISHER:  Thank you.

		Barry?

		DR. RYAN:  I would just like to add my support to some of the
statements that have already been made by my colleagues here.  I have
several points here, most of which have been discussed already.  I will
just kind of restate one, emphasize another one, and throw a new one in
and then turn in my report, which has all of the same type of comments
in it.

		I would just like to emphasize in a different way a problem I saw with
this section.  That is, much of the information is presented in the form
of assertions, rather than with solid scientific data in back of it.

		It would be reasonable, I think, for EPA to give the examples of
community-based studies that have worked better, gotten better
responses, and so on and so forth, as indicated in here, rather than
just kind of stating it as an assertion.  And I just want to add my
statement to that.

		I personally think that section 6.1 is quite good, especially for a
neophyte investigator who would like to find out what kind of community
support is needed and how to go about doing work in the community.  So
that section is really kind of nice.

		On the other hand, section 6.1.1, which follows that, and the
subsections below that, 6.1.1.1 and so on and so forth, rely heavily on
one document, an ERG report of a workshop that took place in January.  I
think that needs to be firmed up somehow, whether it's through an EPA
publication of that workshop or whatever.  Referring back to the
consultant's work is probably not sufficient in that area.

		I think the one other point that is substantive here is that I would
like to add my two cents worth in on the stakeholder thing.  I think
there is much emphasis on the community stuff here.

		We've got six or seven pages of discussion of community input.  But
the stakeholder input is really relegated to almost second-class
citizenship, if you will.  It's only a paragraph at the end.  I think
there needs to be a substantial rework of that, a substantial expansion
of that.

		If you don't get stakeholder involvement -- and I'm distinguishing
now, the stakeholders from the community, in my own mind, which kind of
fits what is done here -- then maybe they will be able to bring their
resources to bear to make things a little bit more difficult for
everyone.

		So you have got to get the local government involved.  You have got to
get the local industries and maybe even the greater industry involved
with these things and get them on board as well as the community. 
Otherwise there may be problems.

		So I think that summarizes what I want to say.  I've got some write-up
here.  I've got a couple of sections that I thought, words that I
thought should not be used, maybe some edits, and so on.  I'm
essentially adding my support to what has already been said.

		DR. FISHER:  Thank you.

		Other comments from Board members?  Steve?

		VICE CHAIR BRIMIJOIN:  I don't have anything novel to add, but I am
troubled by what I think I heard from Germaine.  Germaine, were you
suggesting that to state flat out, not only once but two or three times,
that bad science in a study is inherently bad ethics should be construed
to imply that good science is good ethics?

		Because I strongly disagree with that.  I don't see how one could leap
to that conclusion.  And I think if this document has a deficiency, it
is an underemphasis on the quality of science, not an overemphasis.

		DR. BUCK LOUIS:  I would agree that it under-represents the scientific
side of the title.  What I was saying is that the phrase that frequently
appears in this document that "bad science is unethical" -- I think
that's what it says.  The reverse of that is good science is ethical.

		VICE CHAIR BRIMIJOIN:  It's not the reverse.  There is simply no
logical connection between those two statements, not in logic.

		DR. BUCK LOUIS:  Okay.  Well --

		VICE CHAIR BRIMIJOIN:  I would strongly urge them to keep that phrase,
although perhaps at least once qualify it by stating explicitly that the
converse is not true.

		DR. BUCK LOUIS:  Yes.  I guess the simplest explanation -- actually,
this example made it to one of the key research ethics textbooks.  And
this involved an observational epidemiologic design.  Okay?  And so the
design was viewed to be scientifically valid and reproducible.  And it
had to do with crib death as just sort of an example.  This immediately
came to mind when I read this.

		And the investigators argued, to minimize recall bias in the parents'
reporting of sleep position, that the dead infant would not leave the
home: the parents would be given the dead infant and asked to put the
dead infant back into the crib, instead of just reporting the position. 
And they argued this design and won it through their IRBs on the grounds
of minimizing recall bias.

		And the ethicists that cited this example in the text said, "This is
valid scientific methodology."  It is better, no question, but it is
unethical with regard to what was done.

		So any time I read that bad science is unethical, while I may agree
with that, it immediately puts someone in a position of saying, "I can
design the Cadillac," the scientifically most valid design, which is
automatically ethical.  And I'm not sure that's something we would all
agree with.  I certainly wouldn't agree with that.

		VICE CHAIR BRIMIJOIN:  I don't think anyone here would agree with
that.  I just can't imagine anyone agreeing with it.  That's why this
Board exists.  That's exactly why this Board exists.  That's the whole
purpose of our review and all the other ethical reviews that precede us.

		So, I mean, we might take that as a given.  People need to be reminded
that you can be as concerned as you wish for the welfare of the human
subject.  But if you haven't designed a study that meets rigorous
scientific standards, you will waste their efforts.  You will put them
at risk for no gain.  And that is in itself unethical.

		DR. BUCK LOUIS:  I agree with the statement that bad science is
unethical.

		MEMBER KRISHNAN:  I think it is stated too many times and does give
the impression, at least in part, of what you said.  That is exactly
what I thought as well.

		DR. FISHER:  I think that, at least from what people have said, not
just here but when they are talking about this, there is an underlying
concern about this section that we all share.  And I think you do as
well.

		It's a difficult section.  I think there are some -- especially
written, you know, by an agency, even though it's not guidance, there
can be not enough specifics to be useful.  And there may be a lot of
pressure on researchers to kind of engage the community in ways for
which the pros and cons of different methods and approaches or the best
practices are not as well articulated.

		So I think one of the issues raised by Germaine, for example, has to
do with, are you saying -- I don't think you are, but is this document
going to be interpreted that scientists should be advocates for
communities?

		My personal belief is scientists have to be advocates for the data and
that if a community feels that data showing some part of the environment
is polluted would ruin their housing values, you're not going to not say
that the data are there.

		So in principle, it seems really nice, but in practice, that is not
what a researcher is there for.  They're there to respect the community,
to inform the community, to learn from the community, to involve the
community when appropriate because advocates of a community can have
very different meaning.

		And I do think sometimes -- I know in places that I have given talks,
sometimes there is a confusion about what the community will do -- and I
don't think the complexity of the issue has been addressed.  For
example, what if a community just doesn't want the results disseminated? 
What do you do?

		And this has been addressed.  I have addressed it.  Other people have
addressed it.  But one of the things is that do you not disseminate data
that exists because a community is concerned about its dissemination? 
Do you provide them with the opportunity to critique?  What if you don't
agree with their critique?  Do you then provide them with an opportunity
to have a section within your dissemination that says the community
disagrees with that interpretation?  And how do you lay this all out in
the beginning so that everyone is on board in terms of how they are
going to be responsible?

		The other issue I think everybody has raised -- and I am not
summarizing, I am just saying my own point of view at the moment -- is
the problem with stakeholder.  I think what is not addressed
sufficiently is vulnerable populations within communities.  And what I
mean is communities don't necessarily represent who will be our
participants, especially if the study is focused on vulnerable
participants.

		So I think the kind of ethical ordering of who our primary
responsibility is to as investigators is important, and what is not
addressed is what happens when the needs or the desires of the community
conflict with the potential welfare or the needs of the participant.

		And so, you know, I think in some sense fleshing that out would be
important.  So these are just my own personal views.  And just the
notion that having a community advisory board is not a panacea or a
substitute for ethical decision-making, that communities should inform
scientists so that they can make better ethical judgments, but it
doesn't take away our moral responsibility to make those judgments. 
That's a personal perspective.

		Yes, Mike?

		MEMBER LEBOWITZ:  Well, I think you have raised some very important
issues.  There are times when we could not study part of the community
because that part doesn't want the results known.

		And an example was some of the American Indian tribes, who might have
fallen into our design, sample design, but who refused to let the
results leave the reservation.  And so they couldn't be part of a
federally funded study.  On the other hand, they funded work we did for
them.  So they knew the results themselves.  But they own the data.  I
mean, it's basic policy on all the American Indian tribes that I have
worked with.

		So there are a number of issues like that that are difficult to get
around.  Another one is to emphasize, you know, who you get on a
community advisory board is not well-defined.  You hope and you strive
to get true representatives of the community.  But very often as
volunteers, only certain people come forth.

		And not all stakeholders are represented.  One of the issues, one of
the things that has come up very often is that, in fact, the agencies
that deal with that community are stakeholders.  The agencies that deal
with the populace for health or environmental concerns are stakeholders
also and need to be represented.  And that's not often the case either.

		And the community members may have a certain kind of education or
knowledge, which is better stated as knowledge, that we don't have, but
they may not have even a high school degree.  And, you know, that sort
of language puts off a lot of communities, especially minority
communities, where that might not exist.

		So I think that these issues are very difficult.  And I would suggest
the best way of dealing with many of them is by using information
already out there in many publications, books, articles, et cetera, or
referring to them for the examples and the kind of discussions we have
here and just making sure that the language in the chapter is not going
to be contradictory to what we know exists or obtains in the real world
and not going to confuse either the researcher or anyone else who might
encounter the document.

		And I think the major emphasis is that we want to have good relations
with communities, often through community advisory boards and with
stakeholders, and have good working relations.  We may not be able to
say much more than that other than there's a lot of work that's been
done out there on the topic.

		I mean, I at one point was the head of the steering committee of all
of the CDC prevention research centers.  And we grappled with these
issues extensively, even as to what constitutes community advisory board
or community or et cetera, et cetera.

		And even in community-based participatory research, it is very
difficult to find the perfect examples of what worked perfectly, or to
obtain, even from the 33 such centers in CDC or the centers that NIH has
established, guidelines as to what does constitute all of this.  And so
it's better to use examples and some of what is out there and just watch
the language.

		DR. FISHER:  Any other comments?

		(No response.)

		DR. FISHER:  Okay.  Let's go on to section 7 and Suzanne.

		MEMBER FITZPATRICK:  This is a nicely written section.  And, like Jan,
I really like these little boxes of information.  I thought that was
really helpful.

		This section is about designing and implementing strategies for
effective communication.  And what it really stresses -- and I think it
is really good -- is that you really have to have some sort of plan,
formalized plan.

		I think it makes the people reading it realize that you just can't
think about this kind of haphazardly.  You really need to sit down,
think about it very early, and make some sort of plan for how you are
going to follow through on it.  I think that is one of the strengths of
this chapter, that it really makes you think in a more structured way
about the area.

		I think the first thing that people have talked about is the
definition of stakeholder.  People have talked about how it is really
unclear who the stakeholders are.

		Even looking at this, the very beginning of the chapter includes the
community as one of the stakeholders.  But in the previous chapter, it
defines them as everyone but the community.

		So I think that it's important maybe up front -- and this has been
reiterated -- to define -- I'm sure the community really is one of the
stakeholders -- who the other stakeholders are.  And then they're
suggesting that you engage them, all stakeholders, early and often in
the process.

		And I think, although it's critical to identify all the stakeholders,
you don't want to identify so many people that it really becomes
unmanageable.

		So I am sure there is a lot of research in this area.  I don't know
about it.  And also, when you engage a lot of people with diverse
interests in an issue, there is going to be conflict between these
people that might interfere with moving forward with a successful
research study.

		So, again, I am not an expert in this area, but I am sure there are
some references that you might be able to put in on managing conflict. 
And I am sure other people in this group know about managing conflict
when you have a diverse set of stakeholders that all have a vested
interest somehow in the outcome of this study.

		And that takes care of 7.1 and 7.2, where they talk about
implementing.  Again, I would say you should think about adding some
references on managing conflict and managing large groups.

		In 7.3, you are talking about making announcements early so the
community or all of the stakeholders know what is coming.  And I am just
wondering if, before EPA makes an announcement that they are about to do
a study, they should get at least an initial community buy-in, a
stakeholder buy-in, so the first time the community hears about it isn't
an announcement in the news or in public that someone is going to be
coming to do a study in your area.  For one thing, you could get better
information about how feasible it is to really be able to do that kind
of study if you have talked a little bit ahead of time to the people
that would be involved.

		And you make a lot of references to plain language, which I know what
it is since I work in the government, too, and we are inundated with it,
but you don't have any references.

		I mean, there are a lot of nice Web sites that talk about plain
language and how we should be writing things in plain language.  And if
you don't know of them, I can send them.  I know HHS has a whole series
of them.

		And I can add those.  That would help people know what we mean by
plain language and how to implement it.  Another reviewer mentioned that
there are software programs you can use to check whether you have
written in plain language.

		In section 7.5, you talked about ways or strategies for communicating
with the subjects -- communication materials.  And at the bottom of page
83, you talked about pediatric flyers as an example.  And the content
would include all of these topics, but you never really said in there
that it should be written at a level where the kids would be able to
understand it.  I'm not sure that they need to know all of these things,
but they really need to know what is going on.

		If this is going to be your example, the most important part is that
it's really written not at the eighth grade level but whatever level
they're at.

		And, then, in addition -- and this is something someone else said
about communities not having people that are literate -- you're talking
about communication tools, such as the internet, which is a great tool,
but you should stress that if people in the community don't have an
internet connection or computers and you're going to promote certain
communication tools, you should make sure that they are able to access
them, too.

		And, therefore, maybe -- if you are going to use the internet as your
major tool -- make sure that all the libraries or whatever have computer
access for people so that everyone can do it, because we see this even,
you know, as you work in your child's school and try to communicate with
everybody and a lot of the community doesn't have a computer.  And it's
really hard to communicate with them.

		And I do think the Web sites are an effective means of communicating
with people because you can have a place where they can go for questions
and you can have a place where they know they can come back to long term
if there are going to be study results.

		And then in 7.6, you talk about educating the community on the role of
the study.  And this comes back to the same problem again, if you
educate people a lot, are they going to change their behavior?  I mean,
I might if I thought that someone was going to come in after they told
me that, you know, this might not be a good practice.

		On the other hand, you want to educate them afterwards.  So I don't
know what the balance is there.  It's important to educate them, but
it's also important to recognize that if you educate them too much and
they change their behavior while the researchers are there, then you're
really not doing any good.  So I don't know what the answer to that is.

		And 7.7 is talking about giving the research results to people.  And
this is something that we always look at where people are going to give
the results, but they don't really have any context for whether they're
relevant or not.  And I think that can't be over-emphasized in this,
that if you're going to give people results, they have to be told what
they actually mean.

		And then they have to have people that they can turn to or community
resources that they can turn to to get additional information or
follow-up information because otherwise you might just scare them.

		And they also should, I think, have the option of not knowing their
results if they don't want to know them.  If they would rather not know
what the results are, then they should have that option.

		And then also talking about publications, I thought the community and
all of the stakeholders should be notified prior to the publication so
that they don't first hear on the news that you are publishing
something.

		I'm not sure they have a right to change the science because the
science is the science.  And it's your interpretation of the data.  But
I do think they need to know.  They don't want to turn on a television
and hear about the results of a study without being notified first.

		The last thing is something I found in the grievance procedures.  You
talk about responding to criticism.  And I am
wondering if we should put a section in on litigation, if there are some
types of lawsuits that might result from it.

		I don't know if that is really a reasonable possibility or not, but I
am certain that EPA has had litigation on some of these things because I
know we have.  So there might be at least some resources that you could
put there for people -- where to go or, at least if you suspect that
someone might go there, who to talk to in the EPA to follow up on it.

		DR. FISHER:  Thank you, Suzanne.

		Rebecca?

		MEMBER PARKIN:  Okay.  I have quite a few comments on this section.  I
have to say that although I found much of the rest of the report easy to
read, I really had difficulty with this section.  I know it's a hard one
to write, but I hope some of my comments will help you understand why I
had difficulty with it.

		One is that there is a very heavy emphasis on one-way communication. 
Although in other parts of the report you talk about two-way
communications, bidirectional communications, respect for the community,
blah blah blah, over and over again in this section the examples are one
way:  written materials, TV messages.  It's a mixed message that you are
sending the reader.

		So I really had difficulties with that because your exposure studies
are inherently in an environment where you have to interact very
intimately with people, with individuals in their homes, and with
communities to get access to those homes and to be able to work in that
community in a context that is appropriate for them.  So it is a much
more interactive kind of dynamic than a lot of the examples in this
section.  So that was one of the things that was most difficult for me
as I read through here.

		I think it is skewed in a direction that is perhaps misleading for
your researchers who will read this.  So I would ask you to reconsider
your emphasis on the one-way communications and media-directed crisis
communications that are in this section.

		Although these aspects of communications may be a part of a
comprehensive strategic communication plan, they're often not
appropriate as the major emphasis in my opinion for community-based
exposure kinds of studies that you're talking about in this report.

		In fact, one of the strategic approaches that is touched on, I
believe, in this chapter is the strategic risk communication plan that
Health Canada has published.  The reference in this document is now out
of date.

		Health Canada has published its entire framework, including a handbook
with lots of tools in it.  And it now has seven steps.  And I think one
of your text boxes has six steps.  So I wanted to bring that to your
attention.  It's now published, and it's out.  And I think it could give
you some good guidance here.

		A second point is that while researchers have important
responsibilities, institutions in which they work also have
responsibilities for ensuring that the observational exposure studies
that you want to support can, in fact, be done.  Institutions are
mentioned somewhere else earlier in this document, but this is where it
really struck me.

		For example, if employers do not value, support, or reward researchers
for conducting community-based observational exposure studies, then it
will be very difficult for researchers to do all of the things
recommended in this document.

		The ethical underpinnings and managerial support for this type of
research must be explicit in the institution's work culture.  And
certain institutions actually have a statement that says, "We value" blah
blah blah so that everybody has that visually represented in the
workplace and it's a visual reminder that this work is hard; this is
difficult; this is challenging work; and, yet, you've got the management
support here to go ahead and do this.  So that is one suggestion there.

		The other thing that I thought was interesting is that the importance
of formative evaluation is not noted in the document.  There are lots of
things that could be done along the way, whether through a community
advisory board or some other mechanism, to find out whether the
mechanisms you are using for the particular study are working, are
acceptable to the participants, and are things that fit with their
culture and their daily cycle, perhaps.  That kind of ongoing evaluation
and feedback from the participants can help identify community needs and
issues before they become crises.  And it can also permit the
researchers the opportunity not only to improve the conduct of the study
but to actively demonstrate their respect for the participants and the
community.

		I've already mentioned the stakeholder issue.  So I won't mention that
again.  I do want to mention the issue around empirical testing a
little bit more, though.  Somewhere on page 82, toward the bottom, there
is discussion about comprehension, which is correctly identified as an
issue for researchers who are planning their work.  And they talk about
testing of tools in a very brief way.  And I think empirical testing of
communication methods and content is really underemphasized in many,
many institutions.

		Many of us think that if we write a 30-second sound bite that has no
more than three points in it, our job is done.  That is completely
different than checking on whether the communication strategies that you
want to use in an observational study like this will work.  It requires
very different approaches, and it really requires pilot testing or
empirical testing of your strategy to be sure it's going to work.  And
that's where a community advisory board can help a lot as well to give
you some ideas and insights.

		On the next page, in section 7.3, in my opinion, this section is in
conflict with the purpose of the document stated in section 1.  In my
opinion, it focuses far too much on the how-to level and not enough on
the ethical issues or the means to demonstrate those principles, which
would be things like pretesting, et cetera.

		I have a number of specific comments for that section.  I don't think
I need to go into them in great detail here.  But I will mention another
keyword search I did.  I believe it was Barry who mentioned earlier, in
section 6, that some of the document sounds very directive or makes
assertions without really supporting them with citations.

		Another search I did was to look at how often the words "should" and
"must" were used in the document.  "Should" comes up over 250 times. 
"Must" comes up about 20 times.  And the reason I looked for that kind
of language is I'm saying, well, the "musts" must be the things the
agency thinks researchers absolutely have to do.  The "shoulds" are the
things that it would be nice to do if you had time.

		And I'm not sure that that's really what you meant when you were
writing, but it, again, is a way of checking to see whether the tone of
what you have written is really what you mean.  And I was hitting this a
number of times in section 7.  And that's why I bring it out here.  But
I saw it in other parts of the document.

		I just wasn't sure whether you were saying certain things are
absolutely critical, they have to be considered, and other things are
really optional.

		Someone else mentioned just a minute ago that learning from
participants and the community is an important part of the process, but
I really didn't see it in the document.

		And I think that needs to be mentioned again.  Another way of
counteracting the overemphasis on one-way communications would be to
point out how much researchers can learn from communities and
participants.  And that is part of that dynamic relationship that I was
speaking to earlier.  We are going to change as we learn things about
the people we're interacting with.

		On page 86, there is quite a lot of text about crisis communications
and responding.  And I really wasn't sure why that was there.  It almost
made me feel like they've been burned and they want to be sure this is
in there so the researchers think about it.  I mean, we've all been
burned doing this kind of work at some point or another.  I mean, we
have all had to go into crisis modes.  But I really wasn't sure why it
was there.

		And I began to wonder whether it really belonged in an appendix or
whether it would be better to refer to some other documents that deal
with this at length.  It just didn't make sense to me for an
observational exposure study done in individuals' homes.

		Something is escalating if you wind up in a crisis condition, and
you're not telling your researchers what those conditions might be, what
they need to look for so that they can help those factors decrease,
rather than increase, into a crisis mode.  So I was a little concerned
about that whole section.

		On page 90, toward the top of the page, there is a statement about
judging people's perceptions as accurate.  I really have a hard time
with that one because perceptions are what they are.  There's nothing
accurate about them.  What I'm wondering is whether the author who wrote
that sentence was really thinking about the fact that lay perceptions
may differ from expert perceptions.  There's a lot of literature on
that.

		And so noting that there is a difference between two kinds of people,
that the researchers and the participants may have different
perceptions, that is perfectly legitimate.  But talking about a
community's or a participant's perception as being accurate is really
incorrect; it is itself inaccurate.

		Another point that comes up somewhere in that top part of that page
also is the choice of the word "opinion."  There is some literature out
there that talks about how opinions and judgments are different things.

		Opinions are much more ephemeral.  Judgments are much more stable
elements of our psyche.  And I wasn't sure again whether the author
really meant opinions or whether they meant judgments.  I think they
probably meant judgments here.  And I've got the line number, et cetera,
that I will pass along.

		Also on page 90, there is discussion about documentation of the study.
And I also started to think about this from the viewpoints of some of
the communities I have interacted with, where "documentation" is a really
negative word.  It's pretty scary.  They have been documented for other
reasons, and they don't want to be documented again.

		So I wondered, again, if there was a way of repackaging that so it
wouldn't trigger a negative perception that you wouldn't want to
trigger.  Okay?

		"Recording" might be a more neutral word than "documentation," but,
again, that might be a term that you would have to pretest with some
communities to find out what will work.

		Sources of information.  There were two here that I noted in
particular I wanted to bring to your attention.  University of Kansas
has a community toolbox Web site.

		I don't know if any of my public health colleagues have ever used it,
but it is just absolutely loaded with communication tools, conflict
resolution tools, media tools.  You name it.  It is just absolutely
loaded.  So I have listed that here as one that might be useful for you,
both for the one-way and the interactive kinds of communications that
would be involved.

		Another one that I thought of when someone else brought up the issue
of conflict resolution: Gail Bingham at Resolve has an excellent short
pamphlet posted -- I believe it's on Resolve's Web site -- about
conflict resolution.  It is just a very short, excellent manual for
people to know about.

		Page 80, my additional issue is there is mention of communications --

		DR. FISHER:  Rebecca, we're just running really --

		MEMBER PARKIN:  I'm trying to get the last one.  This is it.

		DR. FISHER:  Okay.  Good.  All right.

		MEMBER PARKIN:  Page 80, there's mention of consultation of
communication specialists.  It's unclear.  I think without more
information there, I think many people would think that you would want
them to go to media or public relations specialists.  And I'm not sure
that is what you mean.

		I think in many cases they are absolutely not the people you would
want to go to when you're talking about these kinds of in-home or
in-community kinds of settings.  So I thought that that particular
section on page 80 needs to be reworked.

		And that was my last comment.

		DR. FISHER:  Good.  Okay.  Thank you so much.  You know, I think
you're really bringing a new perspective to the Board in terms of
epidemiology.  So it's really quite educative.

		Germaine?

		DR. BUCK LOUIS:  Just very quickly, I think this section is an
opportunity to address the over-arching issues of data sharing,
particularly with the community help we might expect.  And, Suzanne, I
think you mentioned the importance of making sure the community hears
about the study, or the results from their work, before the general
community does.

		One other point that I noted in my write-up is whether or not this
might be an opportunity to make a call for Web-based data management
structures.  We all have to collect data, but the idea is to make not
only the data but also ongoing aspects of the study available in real
time, 24/7.  I know a lot of studies will also host video conferencing
or have videos that people can go and access about the study, logs,
anything you need.  At least for the kid cohort coming up -- I don't
know if my kids are the exception, but they just don't communicate the
same ways that we used to.  They're all into instant and text messaging
and pretty tech-savvy.

		And that's all.

		DR. FISHER:  Kannan?

		MEMBER KRISHNAN:  So in the interest of time, I will restrict it to
two short comments.  The first one: the text on page 101 relates to the
communication challenges for chemicals when there's not sufficient data
on health effects or standards.

		It just troubles me when people measure chemicals for which there are
no standards and the health effects are not known.  There has to be a
good reason for measuring such chemicals.  And they have to have some
means or tools or strategies for interpretation, not just relative to
background.  That's just not going to help.

		You know, there's a reference given, you know, to Williams 2004, but
to me it has to come out really clearly that the investigator should ask
the question of "What would I do with this?  What purpose would it serve
if there are no standards, no health effects data, and no interpretation
for the kinds of things I am measuring?"  The text just talks about the
challenges and leaves it there.

		But I think you won't serve any purpose if there are no strategies for
interpretation and communication in those cases.  So it should be
tweaked and clarified a bit there.

		And then on page 103, there's a discussion about communicating all
results or only the results that are above the LOC.  I think we heard
some discussion about it.

		I do think that it's fair that we provide the results to all who
choose to get them, with an indication of what they mean.  So that kind
of a clarification or an option would be useful.

		DR. FISHER:  Thank you.

		Other comments with respect to this section?

		(No response.)

		DR. FISHER:  Okay.  I think those comments were very comprehensive. 
Because we are short on time, I am not going to review, but certainly
all of those comments are going to appear in writing.  And I took notes
just in case.

		And, just to summarize these two sections, there is lots of overlap. 
This is a section in which people are recommending somewhat more
empirical kinds of guidance for what is going on, and then there are a
lot of specific references that people made that I think can be helpful.
But this is always a tricky area.

		So we want to thank you so much for allowing us to be thinking about
these issues and to be learning.  And we hope that we have communicated
sufficiently that we thought it was an excellent document.  That's what
you get when you get all of the people in the room.  There is always
room for improvement.  So we thank you very much.

		DR. CUPITT:  Well, thank you.  And thank you to the Board.  We will
take seriously your recommendations and comments as we move forward with
finalizing this document.  But thank you.

		DR. FISHER:  Thank you so much, both of you.

		Okay.  Now we are going to -- yes?  Sure.

		DR. LEWIS:  Thank you, Dr. Fisher.

		Again, I want to thank my colleagues, Marty, Larry, and Roy, for all
of their effort.  It has been a pleasure working with you on this topic.
I also want to thank our two consultants, Dr. Ryan and Dr. Buck Louis,
for being part of our discussion today.  I really appreciate your input,
and I would ask the two consultants to share your remarks and your
comments with the lead discussants of the questions that you were
assigned to.  Thank you.

		DR. FISHER:  Yes.  I echo, thanks to Germaine and Barry.  It was
really great.

		Okay.  I have just an announcement.  We are a little bit behind, and
we did want to finish the sodium azide.  The meeting was scheduled to be
over at 3:30.  We are going to take a break for a half-hour or 45
minutes at 3:30, but my question is whether or not we want to come back
until 5:00 o'clock to try to complete as much as we can, because
tomorrow is a very, very heavy day, where we're scheduled to last until
6:45 or whatever it is to begin with.

		So I guess the question is, will EPA be here?  Will the sponsors be
here until 5:00-5:30?  We need to know that.

		MR. CARLEY:  Nancy and I can be here until 5:00.  I strongly urge that
we find a way to finish sodium azide today.  Tomorrow is going to be
hell day anyway.

		DR. FISHER:  Absolutely.  Okay.  So we will take a break.  Our goal
would be to be back by 4:00.  We may be back by 4:15.  So we will do as
best as we can for that.

		Okay.  So, John, I assume you are presenting.

		MR. CARLEY:  Yes.  Nancy McCarroll is going to join me in a moment,
but I will get started.

		Okay.  We're going to follow the sequence with which you are all
familiar except those of you who are new.  But you will rapidly become
familiar with this drill.  I will do a little introductory context and
background thing.  Nancy will do her science review.  And then I will do
the ethics review.

		What is sodium azide?  It is a very simple molecule that's used as a
laboratory reagent, as a raw material for production of azide-containing
compounds, as a pharmaceutical intermediate, as a preservative for blood
laboratory reagents and biological fluids, and as a gas generant in
automotive airbags.

		There is a lot of literature about this use, which was common 20 years
ago, but it has since been phased out as manufacturers found more
efficient, less expensive, and less toxic alternatives.

		It was formerly used as a pharmaceutical to treat high blood pressure
and as an anti-neoplastic agent.  It is in that context that it was
tested in the Black, et al., study that we're going to be discussing
with you.  It has been proposed recently for pesticidal use as a
replacement for the fumigant methyl bromide for the non-food uses that
I've listed there, which I won't take the time to read.

		The Black study was conducted in the U.S. in the early '50s.  The
authors were affiliated with New York Medical College and New York
University.  They did this work with support from the Leukemia Research
Foundation.

		This study further explores observations that had been made in
previous work by the research team.  Sodium azide appeared to lower
blood pressure in hypertensive patients.  That earlier research was with
sodium azide as an anti-neoplastic agent.  The team made this
observation and decided to investigate it further.

		This study reports both acute and chronic oral dosing of both
hypertensive and normotensive subjects.  They did some follow-up testing
in rats in which they induced hypertension and then confirmed the
differential effect between hypertensive and normotensive subjects.  And
they also showed or these data showed that humans were substantially
more sensitive than rats.

		So with that background, here is Nancy McCarroll, a toxicologist in
our Health Effects Division, to tell you more about sodium azide.

		MS. McCARROLL:  Good afternoon, everyone.  I will be as quick as
possible.  I am sure you are very, very tired of doing this all day.

		I thought first I would just take a quick look at the properties of
sodium azide.  As John mentioned, it is a very simple molecule.  It has
a very
low molecular weight.  It is a white crystalline solid that is odorless
and soluble in water.  And when it's dissolved in water, it looks just
like water, which may account for many of the reported laboratory
accidents that occurred with sodium azide over the course of time.

		It is acutely toxic, causing hypotension in hypertensive rats.  And
there are numerous reports of accidental ingestion by humans.  And the
symptoms lead to a wide host of conditions, which are presented in the
next slide.  And here I was just going to -- thank you.  Thank you,
John.

		MR. CARLEY:  Push the button.

		MS. McCARROLL:  Push the button.  Okay.

		You can see the whole array of symptoms that are possible with sodium
azide, starting at the lowest doses, which are about 0.7 milligrams per
kg, where you see dizziness, palpitations, hypotension, tachycardia. 
And as the dose increases up to about two, you see EKG changes and
headaches.  And when you go up to about ten, you cause death.
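
		The dose tiers read off the slide can be thought of as a simple
threshold lookup.  A minimal sketch in Python, with the thresholds taken
as the approximate values quoted in the talk ("about 0.7," "about two,"
"about ten" mg/kg); the tier wording is paraphrased, not quoted from the
slide:

```python
import bisect

# Approximate oral dose thresholds (mg/kg) and the symptom tier each one
# opens, paraphrased from the talk; these are illustrative, not exact.
TIERS = [
    (0.7, "dizziness, palpitations, blood pressure effects, tachycardia"),
    (2.0, "EKG changes, headaches"),
    (10.0, "death"),
]

def highest_tier(dose_mg_per_kg):
    """Return the most severe symptom tier reached at the given dose."""
    thresholds = [t for t, _ in TIERS]
    i = bisect.bisect_right(thresholds, dose_mg_per_kg)
    return TIERS[i - 1][1] if i > 0 else "below the lowest reported tier"

print(highest_tier(0.5))   # below any reported symptom threshold
print(highest_tier(1.0))   # reaches the first (lowest-dose) tier
print(highest_tier(12.0))  # reaches the highest tier
```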

		Someone pointed out to me that we obviously do not know for sure, but
it's plausible that all of these symptoms could be mediated by
hypotension.

		Next slide.  As you know, this is the study of Black, et al., from
1954.  And it is an old study.  I do not pretend that it lives up to
today's standards, but we do believe that it has scientific merit.  And
we are going to attempt to show that to you.

		Black's study had two different components.  The human component had
an acute and a chronic phase.  The number of subjects was between 30 and
35 patients, all of whom were documented cases of increased blood
pressure for a period of one to ten years.  Several of these patients
also had kidney damage.  The control subjects were 9 normotensive
individuals: normal healthy students and lab personnel as well as some
cancer patients.

		Dosing for the acute phase was 0.65 to 1.3 milligrams of sodium azide
in water, administered orally.  And if you convert the doses to a
milligram per kilogram basis, the dosage is 0.1 to 0.2 milligrams per
kg.

		The chronic phase used comparable doses by the same oral route.  The
difference was that dosing was at least three times a day for periods
from ten days to two and a half years.

		There was another very small segment that involved about three or more
patients receiving sodium azide via intravenous injection or sublingual
administration.  I am not going to talk about that since it was such a
small amount of information that we really can't discern anything from
it.

		And the clinical observations that were recorded were blood pressure
readings.  They were conducted pre-administration, 2 to 5 minutes after
the acute administration, and 4 to 12 hours after the last daily dose in
the chronic administration.  Additionally, routine clinical studies of
kidney, heart, and liver function were conducted on three patients. 
And patient complaints were recorded.

		The results from the acute phase are presented in figure 1, page 12,
of the article that was part of your package.  There was a marked
decrease in blood pressure 45 to 60 seconds after treatment with the 0.2
milligrams per kg dose.  And there were no effects in the normotensive
patients receiving comparable doses.

		Chronic phase.  This is actually presented in table 1, page 14, of the
Black, et al., article.  Ten subjects showed significant decreases in
blood pressure, but the diastolic pressure remained greater than 100
millimeters of mercury.

		Fifteen subjects showed blood pressure near normal levels.  Three of
these 15 subjects were the ones who were examined for clinical signs.

		And they showed no damage to kidney, liver, heart, bowel, or urinary
function.  Five of the subjects showed only minimal changes in blood
pressure.  There were no effects in the normotensive patients receiving
comparable doses for up to ten days.

		There was another finding of the study that was discussed by Black, et
al.  And this had to do with the lowering of the dosage of sodium azide
to 0.04 mgs per kg in 20 of the hypertensive patients in the chronic
phase of the study because of increased sensitivity.

		The second portion of the study involved rat testing.  And this
was an evaluation of hypertensive versus normotensive rats.  The dose
they received was 0.6 to 0.7 mgs per kg, which was administered IV.  And
blood pressure was monitored.

		I found this rather intriguing how one could monitor the blood
pressure of a rat, but there was nothing written and the authors did not
comment on this.

		Another study was in the data package, which had been submitted by the
registrant.  The author was Grizzle in 1953.  They also did induction of
hypertension in rats.  And they used a microphonic manometer to record
the blood pressures.

		One detail I should have mentioned earlier -- and I apologize -- the
rats were made hypertensive either by a figure-8 loop on the kidney or a
partial ligation of the renal artery.

		The results indicate that doses as low as 0.6 to 0.7 mgs per kg
induced a drop in blood pressure that lasted for 30 to 45 minutes in the
hypertensive rats while inducing no effects in the normotensive rats at
comparable doses.

		Now, to put this study into the context of the findings from other
toxicology studies: as you can see, the Black, et al., study with the
hypertensive rats shows the lowest doses causing an effect.  In the
other studies that we have on this chemical, in dogs or normal rats, the
dog is actually the most sensitive species, with a lowest effect level
of 3, but it took 26 weeks of dosing to see this effect, which occurred
almost immediately before these animals were sacrificed.  In rats, the
doses are also very large compared to the Black, et al., study.

		What these basically show is that the hypertensive rats were 17 to 30
times more sensitive to the action of sodium azide than normal rats and
about 5 times more sensitive than dogs.

		Those are the results.  John thought it would be a good idea for me to
mention the study limitations.  And, as I said, we all know that we have
to keep in mind this study was indeed done in 1953.  So one would be
very foolish to expect it to live up to the standards that we have
today.

		There was no demographic information on the treated subjects or
controls.  The sample size was relatively small.  The clinical testing
was rather limited.  There was no clinical testing in the acute phase. 
And, as I mentioned, only three subjects were run through that phase in
the chronic part of the study.

		The limitations for the rat study are that there was very, very little
information on the rats; very limited information on the test protocol;
and the data were reported in a narrative summary, rather than tables.

		Despite the limitations, the results can be useful because they
provide the lowest point of departure to assess the health risks to
humans.

		We believe that this is relevant because about 25 million Americans
are currently being treated for hypertension.  And the first line of
treatment is the anti-hypertensive diuretics, many of which have azide
as a key component.

		Given the increased sensitivity that Black, et al., mentioned for 20
of the subjects, it is highly likely that such individuals would be the
most sensitive and the most at risk.

		Based on these considerations, we have drawn the following
conclusions.  The Black, et al., study provides evidence that
hypertensive humans are more sensitive to the hypotensive effects of
sodium azide than hypertensive rats and more sensitive than the normal
animals used in toxicology studies.  We base this statement on the
evidence that the acute doses that induced normal blood pressure in
hypertensive rats were approximately 30 to 35 times higher than the
acute dose producing a drop in human blood pressure within 45 to 60
seconds.

		Based on the toxicology studies, as I mentioned earlier, the dogs are
the most sensitive animal species.  And when comparing the dogs to
humans, we find that the humans are 150 times more sensitive than the
dogs.

		Similarly, for the rats, which are the main animal used in FIFRA test
guidelines -- there are about 18 studies that are conducted with rats --
the lowest NOAEL would be 5.  And this suggests that humans are about
250 times more sensitive than the rats.
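
		As a consistency check on these ratios, the implied human effect
level can be back-calculated.  A minimal sketch in Python; note that the
transcript never states the human comparison dose directly, so the value
of about 0.02 mg/kg that falls out here is an inference, not a figure
quoted from the Black, et al., study:

```python
# Back-calculating the human effect level implied by the stated
# sensitivity ratios.  The animal doses and ratios come from the talk;
# the human dose itself is NOT stated there, so these are inferences.
dog_effect_level = 3.0   # mg/kg, lowest effect level in the 26-week dog study
rat_noael = 5.0          # mg/kg, lowest NOAEL across the rat studies

human_vs_dog = 150.0     # "humans are 150 times more sensitive than the dogs"
human_vs_rat = 250.0     # "about 250 times more sensitive than the rats"

implied_from_dog = dog_effect_level / human_vs_dog
implied_from_rat = rat_noael / human_vs_rat

# Both comparisons imply the same human effect level, about 0.02 mg/kg,
# so the two stated ratios are internally consistent.
print(implied_from_dog, implied_from_rat)
```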

		Final thoughts.  The Black, et al., study has been used by other
groups to set safe levels for sodium azide in the workplace.  These
groups include the American Conference of Government Industrial
Hygienists; the U.S. Occupational Safety and Health Administration,
which is OSHA; and NIOSH, the National Institute for Occupational Safety
and Health.  And these threshold levels define permissible limits of
exposure, incorporating a reasonable margin of safety.

		Thank you.

		DR. FISHER:  Thank you very much.

		John?

		MR. CARLEY:  Continuing rapidly along here --

		DR. FISHER:  I am going to wait for the ethics and science.  We will
ask questions of them after John finishes.  Okay?

		MR. CARLEY:  All right.  This presentation addresses both the
underlying study and my ethics review that was in your background
package.

		The regulatory context.  This research involved intentional exposure
of human subjects to sodium azide in clinical trials as a potential
therapy for high blood pressure, conducted over a decade before the
first version of the Declaration of Helsinki.

		There were no explicit standards for ethical conduct of this kind of
research.  The research was not conducted with the intention to submit
it to EPA under the pesticide laws or, so far as we know, to any other
regulatory authority.

		As for the applicable regulations in this case: the study was
submitted to us by the registrant in September 2005, before the current
rules took effect.  So the requirement for documenting ethical conduct
of the research at the time of submission did not apply to this package.

		Had it applied, it's worth mentioning that because this data is so
old, there's not a whole lot that could have been provided by way of
additional documentation.  The trail is cold.

		The following provisions of the rule do apply.  First, the requirement
for review by this Board because this pre-rule research reports toxic
endpoints; secondly, the rule forbidding us to rely on research
involving intentional exposure of pregnant or nursing women or of
children; and, finally, the passage that forbids us to rely on pre-rule
research in the face of clear and convincing evidence that its conduct
was fundamentally unethical or significantly deficient relative to
prevailing standards.

		I am going to run through as quickly as possible the categories that
we have applied to previous studies, first the value to society.  The
information reported by the Black study is of potential value to EPA in
defining endpoints for assessing risk from exposure to sodium azide when
it's used as a pesticide.

		There were at least 35 subjects in the acute phase.  There were at
least 30 subjects in the chronic phase.  There were nine controls.  But
we know almost nothing about the subjects or how they were recruited. 
We don't know their ages, sexes, et cetera.

		We also don't know how those nine controls were distributed across the
acute and chronic phases of the study.  And we don't know for sure
whether the acute phase and the chronic phase were different subjects or
the same ones.  The likelihood is that the 30 chronics were among the 35
acutes.

		There was no discussion of risks to the subjects or their likelihood
in the article.  I found, however, that there were several clues to
suggest a general concern by the investigators for the risk to subjects
and for reducing it.

		They reported that the experimental doses were far below the reported
lethal range.  They monitored subject responses pretty closely, closely
enough that they called for dose reduction for the two-thirds of the
subjects in the chronic phase who showed some developing sensitivity to
sodium azide while on test.

		They also conducted the side experiments that Nancy mentioned with
different dosing regimens for the stated purpose of trying to determine
the minimum effective dose, to see if the different pathways would make
it possible to get the result with less material.

		The research demonstrated a therapeutic benefit in hypertensive
subjects.  And it potentially has societal benefits of the three types
that I have identified: insights into a potentially effective treatment
for hypertension, improved understanding of the side effects of sodium
azide, and the evidence that humans are more sensitive than animals to
the hypotensive effects of the material.

		But the evidence available to us is so fragmentary it's very hard to
tell whether those benefits were foreseeable when the study was
undertaken or whether they were sufficient to justify the unspecified
risks to subjects.  So we kind of strike out trying to find material for
balancing risks and benefits.

		With respect to independent ethics oversight, the report is entirely
silent concerning any sort of ethics oversight and identifies no
standard of ethical conduct.  I note that these characteristics are
absolutely typical of research reports from the mid '50s and do not in
themselves constitute evidence of anything.

		With respect to the way the information about subjects was handled,
there was no breach of confidentiality in the way the information was
presented in the published report.

		The trickiest bit from an ethics review perspective is the question of
informed consent.  You may have seen in the science review, the data
evaluation record, an observation by the primary science reviewer, who
was not Nancy in this case, that in their judgment, this was not
consensual research.  I disagree with that view for the reasons that I
will explicate in a moment.

		The report is silent concerning what subjects may have been told
and about whether they participated voluntarily.  There is mention that
sodium azide was "administered without informing the patient of either
the nature of the drug or the change to be expected."  That is a direct
quotation.  And that is what led the science reviewer to be concerned
about this.

		When I looked at that statement in context, I understood it to be by
way of discounting the likelihood of a placebo effect and not by way of
saying, "This was non-consensual research."  So that's my interpretation
of it.

		Also, I think corroborating evidence is implicit in the fact that the
subjects in the chronic phase self-administered the sodium azide three
times a day for up to two and a half years.  And I cannot imagine their
doing that involuntarily or non-consensually.

		So I think the interpretation by the science reviewer was erroneous. 
And I think this is simply another case where the published report from
1954 is silent with respect to this critical dimension of ethical
conduct.

		40 CFR 26.1703 forbids EPA to rely on research involving intentional
exposure of pregnant or nursing women or children.  The age and sex of
these subjects was not reported.  But there is no evidence to suggest
that any of them were pregnant, nursing, or under 18.  And EPA policy is
that in circumstances like this, where we don't have any evidence and we
can't get it about these matters, we do not interpret 1703 to prohibit
reliance on this study.

		26.1704 forbids us to rely on pre-rule research such as this if there
is clear and convincing evidence that its conduct was fundamentally
unethical or significantly deficient.  There are unquestionably major
gaps in the documentation of the ethical conduct of this research.

		But I found no clear and convincing evidence that the research was
fundamentally unethical.  I couldn't identify a clearly applicable
contemporary standard of ethical conduct.  And I also found no
evidence, clear and convincing or otherwise, that this research was
deficient by the standards prevailing in this kind of research in the
early '50s.

		So I conclude that there is no regulatory barrier to EPA's reliance on
this research.  And that brings us to the charge questions.  If you want
to do clarifying questions first, I'll read these into the record later.
What is your preference?

		DR. FISHER:  Yes.  Science questions, Kannan?

		MEMBER KRISHNAN:  Yes.

		DR. FISHER:  And then ethics questions.

		MEMBER KRISHNAN:  A few questions of clarification.  I thought you
mentioned 25 million people who were treated for hypertension were using
sodium azide.  Did you say sodium azide was used or --

		MS. McCARROLL:  Oh, no.

		MEMBER KRISHNAN:  -- you just said --

		MS. McCARROLL:  No, that --

		MEMBER KRISHNAN:  Those are people being treated for hypertension?

		MS. McCARROLL:  Right.  Twenty-five million people in the States are
being -- there are actually 50 million with hypertension.  And half of
them are being treated -- this was on the NIH Web site, by the way.  But
the most common treatment at initial diagnosis is with the diuretics. 
And many of them contain the key component of azide, not sodium azide,
but azide.

		MEMBER KRISHNAN:  Okay.  And here it is being evaluated as a limited
use fumigant.  That's my understanding.

		MS. McCARROLL:  Yes.

		MEMBER KRISHNAN:  Then from my reading, the n was three for
toxicological evaluation.  But at some point, maybe around slide 10 or
12, I thought you said 10 people were evaluated for toxicological
endpoint.  Is that --

		MS. McCARROLL:  Oh, no.  It was only --

		MEMBER KRISHNAN:  Only three for --

		MS. McCARROLL:  Only three had clinical evaluations, three of the
patients in the chronic phase.

		MEMBER KRISHNAN:  Yes.

		MS. McCARROLL:  That's what the article said.

		MEMBER KRISHNAN:  Okay.  I was confused based on information on one
of the slides.

		MR. CARLEY:  Let me add a little clarification about that.  We
consider that all of the people who were monitored for drops in blood
pressure -- we consider those to be toxicological observations, not just
the three.

		MEMBER KRISHNAN:  But that's what the article says.  The authors say
that they evaluated three people during, I think, a few years, or
repeatedly, for clinical signs of the liver, kidney, and --

		MS. McCARROLL:  Right.  That's why we listed specifically what they
said they evaluated, in addition to the blood pressure readings.

		MEMBER KRISHNAN:  Yes.  And the IRIS evaluation for this chemical was
done in 1988, I thought, and was based on a rodent study, I think
subchronic rat from an NCIA study if I remember correctly.

		MS. McCARROLL:  That's correct.

		MEMBER KRISHNAN:  Here the total dose is about 3.9 milligrams per
kilogram, which gives an RfD very similar to what is in the IRIS file
right now.  So I think it's consistent with your calculation.  The
document, your document, shows that the human dose for the chronic phase
would be .06 milligrams per kilogram.

		MS. McCARROLL:  Okay.  Well, we were thinking that if you take into
consideration the patients who had to have their dosage adjusted because
of an increased sensitivity to sodium azide, it would drop down to
0.004 milligrams per kg.  They would represent the most sensitive
population.

		MEMBER KRISHNAN:  That's the number in IRIS.

		MS. McCARROLL:  No.  The number in IRIS, that's based on the rat
study.  You're right.  And they added the usual uncertainty factor of
100 and an additional 3 because it was a subchronic study that they had
to use for a chronic endpoint.
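
		The reference dose arithmetic being discussed here -- a point of
departure divided by the product of the uncertainty factors -- can be
sketched briefly.  In this Python sketch the POD of 4 mg/kg/day is a
hypothetical value chosen only to illustrate the mechanics, not the
number in the IRIS file:

```python
from math import prod

def reference_dose(pod_mg_per_kg_day, uncertainty_factors):
    """RfD = point of departure / product of the uncertainty factors."""
    return pod_mg_per_kg_day / prod(uncertainty_factors)

# Two compositions come up in the exchange: "the usual 100 ... and an
# additional 3" (10 x 10 x 3) and "10 each" (10 x 10 x 10).  With a
# hypothetical POD of 4 mg/kg/day:
print(reference_dose(4.0, [10, 10, 3]))   # composite factor 300
print(reference_dose(4.0, [10, 10, 10]))  # composite factor 1000
```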

		MEMBER KRISHNAN:  They used 10 each to get to .004.  But the
uncertainty factors and the numbers don't interest me.  I just thought
about it and noted that it's not anything very different.  But that's a
side point.

		One of the reasons is that I was trying to see what the necessity is
for the EPA to be able to consider a study such as this, where there is
a lack of information on the purity of the test materials, the sample
size, and the dose levels.  I think the dose levels were the item that
struck me.

		They say they give between .65 and 1.3, which is a factor of 2.  But
in one place they say, "We give it three to four times."  And in other
places they say, "Three to five times."  So that's about a factor of
1.5, to individuals with no known body weight, gender, race, or age.  So
the body weight, maybe there's a factor of two, maybe from 40 to 80
kilos.  And they give it for up to two years, I guess, where it's not
truly chronic.  They pretend that it's chronic, but two years is not
chronic, really.

		So I was kind of surprised that they don't know exactly how many
milligrams they gave.  And in talking about the dose, again, at some
point they say that the dose was reduced from .5 to .25 in 20 patients
during the experiment.

		When you look at the table, there are only two people who received .5
milligrams as a dose.  So I don't see how you could have had 20 people
receiving a dose when they only list 2 people as receiving that dose. 
And they say they reduced the dose by a factor of 2 due to, I think,
some evidence of toxicity.  It's not clear what it was.  I think your
document also says clearly that there was no explanation of why the dose
was cut down.

		DR. FISHER:  Kannan, let me just state --

		MEMBER KRISHNAN:  Yes.  Okay.

		DR. FISHER:  I mean, I understand there is going to be --

		MEMBER KRISHNAN:  What's the urge?

		DR. FISHER:  Right.

		MEMBER KRISHNAN:  I mean, what is the necessity for us to offer or for
you to be able to use a 1953 study from which you can't tell what the
dose was and there were only 3 people tested, out of which one of the 3
has some pounding effects of some kind?

		So there is no control, because when you evaluate toxicity, obviously
it's always with respect to something.  And there's no toxicity data in
the paper.  There is therapeutic data, I mean; you have the blood
pressure before, after, and so forth.  But there is nothing about
toxicity data in those three people except to say that one of the three
had some qualitative problems.

		So I just wanted to get a sense of, you know, how critical or why does
the agency see this as a critical useful study?

		MS. McCARROLL:  Well, I think I agree with you.  You know, we have not
tried to inflate the study and have mentioned repeatedly that this study
simply does not live up to today's standards, which is basically what
you are comparing it to.  But there is an overall general trend.

		If you put this study with the accidental ingestion studies, you can
see a very clear dose response.  As for the fact that only three
patients were examined for other clinical signs, the study author even
mentioned that as a limitation.  But we have a very limited amount of
data on this chemical.  What struck me about this chemical in comparison
with the animal studies is that the dosage was so tremendously
different.

		And, you know, I used to work in a lab.  Everybody knows how dangerous
sodium azide is in a laboratory setting.  So, really, if you were to
look at the table that I developed for looking at the RFD, you can see
this clear dose response of effects occurring in both normal humans as a
result of accidental exposure and these hypertensive patients.

		This is also supported, by the way, by the rat study, which is a
markedly lower dose than you have in the actual guideline-type studies.

		DR. FISHER:  Okay.

		VICE CHAIR BRIMIJOIN:  Let me just see if I can -- I think we are
almost at the point of clarity but not quite.  I hear the two of you
talking on sort of like 30-degree angles to each other.

		Perhaps what was troubling my colleague -- and at least I have the
same question -- is the study itself is not strong, but if it's the only
piece of information we have to go for, if it's the crucial piece of
information, if it's the study that tells us that human beings are, in
fact, much more sensitive than the various animal models that we got
data from, then it's worth proceeding with the discussion.  It's worth
your having come forward.  And it's worth proceeding with the discussion
about whether the study has got enough in it to use.

		But perhaps Dr. Krishnan is puzzled and so am I by the fact that we
have -- there is information out there in a database based on the rat
study.  And if you apply the standard safety factors and so forth, you
are coming up with the same number.  So, therefore, why even bother to
raise the issue now?

		MR. CARLEY:  Let me take a crack at this.  We acknowledge that this
study is weak, but it is the only one which, as you suggest, shows
humans to be the most sensitive.  And it focuses on an endpoint that
doesn't emerge from the other studies in the weight of the evidence.  So
that's part of the reason.

		The other point is just that the number, the .004 number, is not the
same in IRIS and in Nancy's evaluation of this one.  In IRIS, it is an
RFD.  In this case, it is an endpoint unmodified by uncertainty factors.

		VICE CHAIR BRIMIJOIN:  Yes, but surrounded by considerable uncertainty
--

		MR. CARLEY:  Yes.

		VICE CHAIR BRIMIJOIN:  -- because the study wasn't designed to assess
that endpoint.

		MR. CARLEY:  Yes.

		VICE CHAIR BRIMIJOIN:  So, I mean, it might be enough to say, "This
study provides evidence to support the default assumptions that in this
case humans are a damn sight more sensitive than rats," end of
discussion.  We don't need to know what the number is that we came out
with.

		DR. FISHER:  I think it sounds like there are a lot of reasons to be
critical, and there may be others.  But I don't want to go into that if,
in fact, the answer seems to be, at least from John and Nancy, that they
believe this provides information that may be helpful.

		So I think we have their perspective on the question, and we still
have to make a decision about our recommendations on that.  I think you
have given good reasons for your perspective on it.  Right?  Okay.

		MEMBER LEHMAN-McKEEMAN:  I would just like to clarify a little bit
further.  We're starting by looking at the effects in a hypertensive
population, recognizing that they are more sensitive than the
normotensive.

		And is it your intention, then, that for those subjects for whom the
dose had to be further decreased because of some increase in sensitivity
-- which we don't know what that was -- that that number, which I
believe is .25 milligrams TID, is the value that you are using to assess
human sensitivity?

		MS. McCARROLL:  Yes.

		MEMBER LEHMAN-McKEEMAN:  Okay.  That wasn't completely clear.  And one
final question from me is, particularly in looking at the animal study
data, as I see it, there are two target organs that have been
identified.  One is the cardiovascular system.  The other is the central
nervous system.

		What is your perspective on the fact that this study, unfortunately,
includes no evidence of any kind of CNS effects?

		MS. McCARROLL:  Which study are you talking about?

		MEMBER LEHMAN-McKEEMAN:  The Black study.

		MS. McCARROLL:  Well, I think it is because the dose is so low.  I
mean, I think just starting out a therapeutic research program, you
wouldn't be giving doses that would cause CNS effects.

		And they are so intimately involved, the both of them -- oxygen is the
key thing with sodium azide.  And both the cardiovascular system and the
nervous system require great amounts of oxygen.  So probably the same
mechanism is at work in both systems.

		But if you look at the animal studies, you have to get up to much
higher doses.  I think in one of the animal studies -- these were the
human studies here, but you had to get up to like ten milligrams per kg
per day to see brain necrosis.

		And that was a big issue with sodium azide.  What is the most
sensitive endpoint?  And we believe that it is hypotension.

		DR. FISHER:  Rich and then Dr. Kim?  Is yours about an ethical issue
or --

		MEMBER SHARP:  No.

		DR. FISHER:  No.  Okay.  Rich?

		MEMBER SHARP:  Two clarificational questions.  First, in the Black
study, since there are both the human studies and the animal studies
being reported in the same paper, is it clear or is there any evidence
that suggests that the animal studies actually were done prior to the
human studies or were they conducted concurrently?

		MS. McCARROLL:  I don't know.  They were just presented at the end of
the document.  So I don't know if that means they were done after the
human studies or --

		MEMBER SHARP:  Yes.  I couldn't find it either.

		MS. McCARROLL:  Right.

		MEMBER SHARP:  So just a clarificational question.

		The second question has to do with the definition of hypertensive that
was used in that paper, because obviously definitions of what
constitutes a hypertensive patient have changed quite a lot over the
last 50 years.

		MS. McCARROLL:  Right.

		MEMBER SHARP:  So how did they understand hypertensive?

		MS. McCARROLL:  I am going to have to kiss a dear friend of mine in
the office who asked me that same question.  It's basically the same as
we have today, the NIH definition: over 140/90 is considered
hypertension.  And that was basically what was in the Black article.

		DR. FISHER:  Okay.  Thank you.

		Dr. Kim?

		MEMBER KRISHNAN:  One clarification.  There was actually a pounding
headache reported as a clinical adverse event.  So there was something
CNS-related.

		My question is more about a sort of lack in what they are saying in
the paper, because figures 2 and 3 show the before and after blood
pressure change for the acute effect.

		But then you go to table 1.  They present the data from the chronic
cases.  But the table itself reports both the chronic effect and the
acute effect.

		So they are dealing with different numbers of subjects: figures 2 and
3 for the acute studies with 35 subjects, versus a chronic study of 30
subjects.  And for the chronic cases, there are data points actually
presented.

		So just out of curiosity, I studied them to see whether there is some
sort of semblance between the two data sets.  Can you bring that file
out?  Can you put them on one page?  If you go to preview, probably that
will do it.  Go to the file.  There is a print preview command.  It's on
there, right there.  Okay.

		So I tried to plot the data points on a comparable scale, both x-axis
and y-axis.  The first figure shows the results from the 35 subjects,
the acute phase.  And the figure down below is based on the 30 subjects
reported in table 2.

		And, I mean, the slight difference, and the fact that the top figure
is kind of tilted, mean I cannot make it very precise, but I think there
is a sizeable overlap between the two sets of data points, because if
you are doing a chronic study, you could look at the acute effect and
report that.

		DR. FISHER:  Right.

		MEMBER KRISHNAN:  So between the 35 subjects from one study and the 30
subjects from the other study, there is some overlap.  And there is a
gap.  And I'm concerned about this gap.  They are silent about these
subjects that are not reported in both places.

		And on other comments: I heard comments made by the presenters a few
times that the study did not, of course, meet sort of a current standard
of science, but I would even argue that this study doesn't even meet the
1954 scientific criteria, because the first reported clinical trial,
from the Medical Research Council in England in 1948 in the British
Medical Journal, has all of the trappings of a modern clinical trial. 
This study doesn't meet any of that.

		DR. FISHER:  Okay.  I think a lot of these points sound like what is
going to come up in the critique, and I don't think we are asking
questions anymore.  Let's distinguish between questions and "why don't
you agree with this."  That's different.

		So I think if there are not any specific questions, we are going to go
on to public comments.  So hopefully we will have the public comment and
discussion or any questions we have of the public commenter before 3:30.

		So if that is okay with everybody, then Ms. Hauswirth?  Please sit
over there and introduce yourself as well as your affiliation.  In the
meantime, I have a question to ask in terms of is sodium azide in
pesticide products right now?

		MS. McCARROLL:  No, it is not.

		DR. FISHER:  Okay.

		PARTICIPANT:  It has been registered before.

		DR. FISHER:  Okay.  So getting back to Kannan's question, it's been
registered for -- and you are going to be incorporating this data to
help assess the amount of sodium azide that could be in products.  Is
that the use of this data?

		MR. CARLEY:  If we were to approve the pending application for
registration of sodium azide for these non-food fumigant uses, we would
first do a human health risk assessment.  And we would be concerned not
with dietary exposure because they aren't food uses but with various
other pathways of exposure, including bystander exposure and worker
exposure and so on.

		In order to support those assessments, we have to have some notion
about what the most sensitive endpoint is and what range of uncertainty
factors are appropriate.  And then we look at the behaviors that produce
the exposure and bring those together into the risk assessment.

		DR. FISHER:  Okay.

		MR. CARLEY:  So the use that we are contemplating is using this as an
early piece, a point of departure in this elaborate analytical
structure, where we try to figure out what risk sodium azide would pose
in the proposed uses.

		DR. FISHER:  Okay.  Thank you.

		Yes?  And so remember there are five minutes.  And please introduce
yourself and your affiliation.  Thank you.

		DR. HAUSWIRTH:  I'm Judy Hauswirth.  I'm a toxicologist, staff
toxicologist, with Keller and Heckman.

		On to the next one, please.  American Pacific, as we have already
stated, is pursuing a registration of a sodium azide formulation.  It is
20 percent sodium azide in water.  And although not stated here, it is
not solely water.  It also contains a blue dye.  It contains ammonia. 
And it is heavily buffered to keep it as sodium azide, so that it does
not become acidic and form hydrazoic acid, which is more toxic and very
volatile.

		I also want to point out that sodium azide is not a fumigant.  Even
though it is replacing methyl bromide, it itself is not a fumigant.

		Essentially what AMPAC is doing is agreeing with the agency that there
is no clear and convincing evidence that this study was fundamentally
unethical or significantly deficient under the standards prevailing in
1954, though you certainly know more about this than I do.  And you seem
to disagree with that, with their conclusion on the study.  We also
agree that the study was scientifically valid.

		You have already heard a fairly extensive description of the study and
its outcome.  The one comment I have about the publication is that I
found it to be very difficult to read, very difficult to understand. 
And I really applaud the agency for tweaking it as much as they did to
figure out the study.  I know it took me a long time to do that as well.

		As you know, it was conducted to investigate sodium azide as a
potential antihypertensive.  It was effective in lowering blood pressure
in hypertensives at dosage levels of .65 to 1.3 milligrams per day on an
acute basis.

		These doses had no effects on normotensives.  Chronically, doses as
low as .25 milligram reduced blood pressure in hypertensive individuals
with no effect on kidney, liver, or heart function.

		I use the word "tolerance" here, but it's used inappropriately.  It
should be that the subjects became more sensitive to sodium azide.  And,
therefore, the study was terminated.

		We believe that the Black study is scientifically sound because there
are no indications in the report that the study is not scientifically
sound.  There were, we believe, sufficient numbers of hypertensives in
this study.  And there was evidence of these patients having been
diagnosed with chronic hypertension for periods of years, up to ten
years.

		What wasn't considered was possible confounding factors in these
patients, such as smoking.  And these certainly were not taken into
consideration in the outcomes of the studies.

		We felt that this study was conducted according to methods and
equipment that were contemporary to the time.  Again, I have already
disagreed with that.

		Our major -- I guess you would call it disagreement -- with the agency
is that we don't believe that the Black study contains sufficient
information to determine a point of departure.  There was no clear NOAEL
determined on blood pressure in normotensive individuals.  Therefore, we
don't believe this study should be used for this purpose, and the animal
data are better suited for this purpose.

		I just want to make a couple more comments if I have just a couple of
minutes left.  There is another study out there in which the exposure
levels to sodium azide and the effects observed in humans in the
workplace were examined.

		What is surprising about that study is that the authors did not see
the effects that they expected judging from the NIOSH levels set for
exposure.  They were surprised that they didn't see hypotension.  They
were surprised that they didn't see dizziness or anything like that.

		There was only one individual in that study who showed effects of
hypotension.  He continued working that entire day.  And there are some
indications that he actually was hypertensive.  But we have no
documentation on that.

		And the author -- and it's called the Trout study.  And I can't give
you the full reference of it.  I don't have it handy right now.  He was
surprised that the NIOSH levels are set so very low based on his
findings.

		Now, there was one effect they do find in workers.  And you mentioned
that: it's a pounding in the head, which is very transient.  It's a very
acute effect.  It is kind of an annoying headache or pounding, which
disappears very rapidly.

		One other point I wanted to make is that the rat study that was
included in Black was an intravenous study.  So it is totally irrelevant
to the uses for which we are trying to get sodium azide, our sodium
azide formulation, registered.

		Thank you.

		DR. FISHER:  Thank you.

		Questions from the Board?  Lois?

		MEMBER LEHMAN-McKEEMAN:  You comment that the Black study should not
be used as a point of departure, that the animal study should.  If we
can take that one step further, there are data in rats.  There are data
in dogs.

		DR. HAUSWIRTH:  Yes.

		MEMBER LEHMAN-McKEEMAN:  Which of those do you believe is the most
appropriate to use to define --

		DR. HAUSWIRTH:  The most sensitive species.  Now --

		MEMBER LEHMAN-McKEEMAN:  That would be the dog?

		DR. HAUSWIRTH:  Yes.

		MEMBER LEHMAN-McKEEMAN:  Okay.

		DR. FISHER:  And you said it wasn't a fumigant.  So what is it?  How
would an individual be exposed to this?

		DR. HAUSWIRTH:  Workers could be exposed dermally, certainly not
orally unless it was intentional.  There is some chance that there would
be vapors in the air containing sodium azide that workers could breathe,
but we think that would be at a very low exposure.  The highest exposure
is likely to be dermal.  And we are actually advocating conducting a
dermal toxicity study in the rat.

		DR. FISHER:  Okay.  Any other questions?

		(No response.)

		DR. FISHER:  Thank you very much.

		DR. HAUSWIRTH:  Thank you.

		DR. FISHER:  Are there any other public commenters?  Do you want to
give your name and your affiliation, please?

		DR. RICHARDS:  My name is Doug Richards.  I am with American Pacific. 
We are the manufacturers of sodium azide.

		The point I wanted to clarify was with the Trout study.  It was an
actual NIOSH study conducted at our plant where we manufacture sodium
azide.  And it measured both dermal exposures and inhalation exposures
to sodium azide.

		My understanding is it was not submitted to you folks because it was
not intentional dosing but just an observation.  But it is probably the
most scientifically sound and modern study -- a 1994 study.

		DR. FISHER:  Thank you.

		Other questions?

		(No response.)

		DR. FISHER:  Okay.  Hold on for a sec and let me figure this out.

		(Pause.)

		DR. FISHER:  Okay.  We are going to continue going.  And we will stop
when it is time to stop.  And then we will come back.

		So okay.  Why don't we start with Kannan.  We are pressed for time. 
And so was there something else you wanted to say?

		MR. CARLEY:  I'm confused about what the plan is.

		DR. FISHER:  I know.  I'm just saying right now we're not leaving at
3:30.  Okay?  I know you have to leave at 5:00.  Got it.  So, Kannan? 
Please let's keep comments to -- if you said it before, summarize it in
not as much detail, because we do want to move ahead, but obviously
don't eliminate any critical detail.

		MEMBER KRISHNAN:  I said it before.  I'll say it again.  I'm unhappy
with this setting.

		DR. FISHER:  I'm sorry?

		MR. CARLEY:  Do we need the charge in the record first?  "The agency
has concluded that this study contains information sufficient for
assessing human risk resulting from potential acute and chronic
exposure.

		"Please comment on whether this study is sufficiently sound from a
scientific perspective to be used as the point of departure to estimate
a safe level of acute and chronic exposure to sodium azide and please
comment on the following.

		"Is there clear and convincing evidence that the conduct of this study
was fundamentally unethical?  Is there clear and convincing evidence
that the conduct of this study was significantly deficient relative to
the ethical standards prevailing at the time the research was
conducted?"

		DR. FISHER:  Just for the new members, we always talk about the
science first before we talk about the ethics on the basis of the fact
that if it's not scientifically valid, then there is little basis for
the ethics.

		Okay.  Kannan?

		MEMBER KRISHNAN:  I think my concern is twofold.  One is dose.  The
other is toxicity evaluation.  As I said, there's no clear indication to
me of the dose that was given to these individuals when you look at all
of this.  It's .65 to 1.3, 3 to 5 times, no known body weight, age, so
on.

		But, again, in the text someplace, they say that they reduced the dose
from .5 to .25 milligrams.  And the .25 milligrams is what EPA seems to
be using in the calculation of the NOAEL.

		The .5 doesn't appear anywhere.  The .25 doesn't appear anywhere in
the protocol or in the report of the results.  In the methods as well as
in the table of results, they talk about .65 to 1.3 as being the dose
given several times a day.

		So I am kind of confused.  I have nothing against it.  I mean, if I
can see somewhere clearly that that was the dose given, .25 milligrams,
then I guess I don't have a problem with it.

		As I see it, it's .65 to 1.3, multiple times a day, without defining
exactly how many, and without defining the body weight of the people and
so forth.  That troubles me.  I just did a back-of-the-envelope
calculation.  The dose could vary by a factor of eight using all of
those ranges.
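[The back-of-the-envelope calculation described above can be sketched as follows.  This is an illustrative reconstruction, not part of the record: it multiplies the ranges quoted in the discussion (0.65 to 1.3 mg per dose, 3 to 5 doses per day, 43 to 80 kg body weight).  Exact arithmetic gives a spread of about six; the "factor of eight" cited arises from rounding each ratio up to roughly two.]

```python
# Spread in the possible daily dose (mg/kg/day) implied by the ranges
# quoted in the discussion.  All three ranges come from the transcript;
# the pairing of extremes (largest dose / lightest person vs. smallest
# dose / heaviest person) is the usual worst-case bounding.
dose_mg = (0.65, 1.3)     # mg per administration
times_per_day = (3, 5)    # administrations per day
body_kg = (43, 80)        # body weight range, kg

lowest = dose_mg[0] * times_per_day[0] / body_kg[1]   # smallest plausible dose
highest = dose_mg[1] * times_per_day[1] / body_kg[0]  # largest plausible dose
spread = highest / lowest
print(round(spread, 1))   # ~6.2 with exact arithmetic
```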

		Now, in terms of the toxicity evaluation, out of the 39 people, I
think it's clear from the report -- well, it's clear from this unclear
report that only three subjects were looked at for any of the tox
endpoints.  And it's not clear as to what method they used to evaluate
kidney and liver tox.

		And there is no justification of a critical organ or a critical
effect.  You can't have a stand-alone study looking at a side effect
when the study was done from a therapeutic perspective -- just looking
at some side effect which may or may not have relevance and then tagging
that as a NOAEL.  That's how it seems to me from reading the report.

		And, again, one out of the three people reports this pounding.  So I
don't know if that's what the real NOAEL is, or whether there's a reason
to discount that one individual and then take the data and identify that
as a NOAEL.  So I have some difficulty with that because the dose is
unclear.

		I mean, even if one wants to accept the range of eight and move along,
the toxicity evaluation is unclear to me.  I mean, there's no critical
argument.  There's no data, essentially.  There's no table that tells
you "Here is what we evaluated" -- with BUN or blood pressure or serum
enzymes indicative of liver damage, 1, 2, 3, here, before or after, or
something.  That is how I would want to use something.

		This is not even as clear as an abstract.  You could go into any of
the SOT meetings and see clearly the dose that was given and what the
effects are, plus or minus.  And then you can choose even from there. 
There are people who report a NOAEL from an abstract more clearly than
here.

		I don't see how what I'm looking at here would correspond to a NOAEL
or any other dose.  It's totally unclear.  And there is no tox data that
I can identify.

		So what I came out with after reading the paper is, yes, it is
anti-hypertensive; it lowers BP.  That was clear to me, if that was the
objective.  But is that study on its own useful to establish a NOAEL?  I
don't think so, regardless of the year when it was conducted.

		Well, I will leave the rest of my time to my colleagues and then see
if there is a reason to change my mind.

		DR. FISHER:  Thank you.

		Lois?

		MEMBER LEHMAN-McKEEMAN:  Okay.  So I will try to be brief.  I would
agree that the two critical issues are dose and toxicity evaluation. 
And I think the data on this compound are in a different place, as it
were, than for many other chemicals which we have evaluated, in that we
actually had this human data set well before we ever had any really
good animal toxicity data.  And that's part of my problem as I look at
this data set.

		I believe that when I compare the animal data to the human data, I
think there is reasonable evidence to support the conclusion that humans
seem to be more sensitive than animals relative to the blood
pressure-lowering effect.  It's not clear to me whether they are more
sensitive with respect to other effects.

		And I am focused on the fact that I believe the animal studies have,
in fact, determined that there are two target organs of toxicity.  And
they are the cardiovascular system and the CNS.

		As a result of that, the fact that Black looked at liver and kidney is
interesting but somewhat irrelevant in that those do not appear to be
target organs of toxicity.

		I would agree with the fact that there are no data on what exactly
that meant.  So it's hard to know precisely what value those data have,
what sensitivity they represent.

		And I would note that I was struck by the fact that patients did
present with a transient sensation of pounding in the head.  Whether or
not this is related to vasodilation and lower blood pressure is poorly
defined.  So I have no evidence to base the etiology of that on, but I
would consider that to be an adverse effect.

		And then as I tried to put this together and look at its overall
utility, one of the things that the animal studies and the human studies
concur on is the fact that with chronicity, there is increased
sensitivity.

		And so in the 20 subjects for which the dose had to be decreased over
time, down to .25 milligrams, what is troubling with respect to the use
of these data is that the basis of that sensitivity is undefined. 
Whether that was additional blood pressure-lowering effect, whether that
was something else that we don't know, it's really not clear.

		And so I look at this.  And as I looked at the totality of the data,
my own conclusion was that this study is not suitable for determining a
point of departure.  And given the nature of the target organ and the
nature of the chronicity of the changes, my own conclusion was that the
dog study as a six-month study was, in fact, the better study from which
to evaluate a point of departure.

		DR. FISHER:  Okay.  Thank you.

		We're going to have to break now.  And then we will come back, Dr. Kim
and Steve Brimijoin.  So we are going to break.  And we will come back
as soon as we can within a half-hour to 45 minutes.

		DR. LEWIS:  Just as a point of reference, Board members, we are going
to be meeting in our work room.  You have a folder that we gave you this
morning.  There are some materials in there for us to discuss.  Thank
you.

		DR. FISHER:  So let's all just hurry up and get into that room.

(Whereupon, the foregoing matter went off the record at 3:41 p.m. and
went back on the record at 4:36 p.m.)

		DR. FISHER:  Okay.  Just to summarize where we are at -- I believe,
with respect to Lois and Kannan, in terms of problems with this study
with respect to dosage, they detailed some of what those problems are in
terms of clarity on what the dosage was.  There are also problems with
the clarity of what the toxicity levels actually were.  And there are
questions about whether or not the lowering of the hypertension was
useful for establishing a NOAEL, and other questions with respect to the
relationship to the animal data -- some concerns about the fact that
there were supposed to be two target organs of toxicity, but that didn't
seem to emerge in this particular study.  And that's where we are at the
moment.  So, Dr. Kim?

		DR. KIM:  Well, my comments were kind of raised earlier during the
question session, but the main issue I have with the Black et al study
is that I just don't know what the denominators of these two tables are,
and what is not being reported concerns me.  Otherwise, if we can
believe the data set that is presented here, I don't particularly have
any concerns with the EPA's sort of conclusions.

		DR. FISHER:  Okay.  Thank you.  And I'm sorry, who's -- 

		DR. BRIMIJOIN:  Me.  It's Steve.

		DR. FISHER:  Oh, Steve.  I'm sorry.

		DR. BRIMIJOIN:  So I'm massively confused, I have to say.  In the
first place, this is a second or third -- maybe it's a third -- attempt
to do this kind of archeological toxicology, I would call it.  And I
feel that we're somewhat in the position of this guy over in Munich
who's taking these neanderthal bones and trying to reconstruct the whole
neanderthal from the DNA fragments that he gets out of them.  And that's
what we've got here: a bunch of badly degraded neanderthal DNA, and
trying to figure out the shape of this beast is really just about
impossible.

		And so I don't know if we should operate in the mode of asking
ourselves, first off, how the data are going to be used.  I suppose the
responsible scientific approach is to comment on the data themselves. 
Nonetheless, I can't help asking how the data are going to be used.  And
what I think about this study is that not only does it suffer from the
deficiencies of being an old study without enough information about how
they did it, about the ethics or the scientific aspects, but also, it
again is a medical study.

		So, if anything, it would have a bias, I think, to underreport
toxicity.  And it's not in any case designed to scientifically assess
what the safe level is, what the NOAEL is; medical studies almost never
give us that kind of information.  So we're really seriously hampered
right at the outset, no matter how well done it is from the medical
point of view.

		I do accept, though, that from a scientific standpoint it seems to
have established that human beings are pretty darn sensitive.  And if I
understand the tables that I have come across here, these data, if we
accept them, establish that the human being is substantially more
sensitive -- at least that the blood pressure measurement is a
substantially more sensitive indicator -- than any of the published
measurements available from rat or dog data, which mostly are not
measuring the same endpoint.

		There's a study in rats on blood pressure.  I mean, there's some data
from the rats on blood pressure, but the other animal data are sort of
classical toxicological data looking at tissue damage and other things. 
I would suspect that hypotension is in fact the most sensitive
indicator, as Ms. McCarroll has put forward.  I think that's a
reasonable proposition.

		And so the available animal data, other than the rat data mentioned in
this paper, don't really come to grips directly with whether the human
blood pressure response is more sensitive than the rat blood pressure
response would have been.  But they seem to establish, nonetheless, that
from all available information, if we were looking for the most
sensitive endpoint and the most sensitive species, I think we would be
entitled to conclude, on the basis of this study and the data available
so far, that it's humans and blood pressure as reported in this study.

		So I think that's -- the study is well done enough for us to be
confident that, yes, human beings are responding in that way to these
low doses of sodium azide.  Is that suitable for establishing a probable
LOAEL?  I'm uncomfortable with saying that it is -- or with establishing
any kind of LOAEL.  So that's where I am in terms of looking at the data
themselves.

		Then it comes down to well, in what context is the data going to be
used.  So if the data were going to be used to say, oh, gee, you know
what, human beings are not very sensitive to this compound, so we should
be free to use a whole lot of it in the environment, I would say the
data -- there's no way these data are good enough for that.

		If the data were going to be used to confirm, let's say, or to anchor
existing calculations about what is an appropriate point of departure --
existing inferences about the point of departure that are based on
extrapolation from animal data -- and we come out in the same general
place with these human data, I would say, yes, these human data are good
enough to add weight to an existing determination of a reference dose.

		If the data are going to be used, however, to say that human beings
are so much more sensitive than everything else, that we're going to
drive the reference -- the point of departure way down below where it is
already, I don't think they're strong enough for that.  So I mean I
actually had some sympathy for the industry sponsor on that point.  So
that's where I am.

		DR. FISHER:  Before we have comments, you know, for the benefit of
time, I want -- let's look at what the agency is asking us and see how
we frame the answer.  Because obviously, there are weaknesses.  There's
no doubt there are weaknesses to this study.  Everybody has identified
them.  No one would design this study to try to determine the NOAEL or
LOAEL or the point of departure for this particular compound.

		DR. BRIMIJOIN:  That's the crucial phrase.

		DR. FISHER:  Right.

		DR. BRIMIJOIN:  It should be used as the point of departure.

		DR. FISHER:  Exactly.  Exactly.  So is there -- so, first, we need to
answer this question, and it sounds like people are saying, no to this
particular question and then I guess whether or not we want to make a
recommendation that if it's a starting point for saying there may be
increased sensitivity in humans, that this is the only indication that
there is, maybe then EPA should look at things.  I mean to me that's --
if that's what the need of the data is.

		But then the question is are three subjects or two subjects sufficient
to even use it that way.  So why don't we just start with the question.

		DR. LEBOWITZ:  Yes.  I would like a straw vote because, you know, I
mean for the interest of time, if we basically all agree that the study
is not sufficiently sound, then we don't need to go any further.  And I
have a sense, anyway, that that's where the Board's going, and if that's
true, we don't have to hear some of the specifics of basically the same
arguments over again.

		DR. FISHER:  Okay.  And -- but it's sufficiently sound in terms of
point of departure to estimate a safe level of acute and chronic
exposure.  Okay.  So why don't we take a straw vote.  How many people --


		DR. FISHER:  -- we're not voting.  We're taking a straw vote.  Okay. 
I'm not allowed to do that for some reason, but I really don't
understand it. Let me phrase the question another way.  Who would -- is
there someone who would want to speak to the value of this data in terms
of determining a point of departure to estimate a safe level of acute
and chronic exposure?  Richard?

		DR. SHARP:  I would just say that here, again, there doesn't seem to
be any debate about the fact that this is a study of very limited
significance, but EPA believes -- the folks at EPA believe this
represents some additional knowledge here, right?  That there's
something to be added here, and so just from that point alone, I guess I
would be inclined to be deferential on that point.

		DR. FISHER:  Well, but that's -- my question was two steps.  One was
whether the data is sound enough to actually be used for this purpose. 
And the next question is: is the data sound enough for EPA to say, wait a
minute, we may want to look at other data, or there's just some evidence
here that something may be sensitive, but they're not establishing --
they're not using the data to establish that.

		Now maybe there's no -- maybe what I'm saying is not a distinction, so
somebody tell me if that is a distinction?

		DR. BRIMIJOIN:  Well, I mean I think if you put me on the witness
stand and said, okay, is this sound enough to be used as the point of
departure.  I say, no.  But if you rephrase that to say is it
sufficiently sound from a scientific perspective to support the
continued use of extrapolation from animal data indicating a --
essentially the same level, essentially the same LOAEL, continuing to
support -- in other words confirming that the proper point of departure
for humans would be a dose that is roughly whatever it is, ten,
hundred-fold lower than the available rat data, then I would say, yes.

		DR. FISHER:  So getting back to the first question, which -- I don't
think that you're commenting on the fact that it's sufficient to be used
for this particular type of calculation, but you're sensitive, as I
think we all are, to the fact that it's saying something.  So if I would
say that there's a consensus that the data is not sufficient to be used
as a point of departure to estimate a safe level of acute and chronic
exposure to that compound, what is the other aspect of the
recommendation that we might make?  Michael?

		DR. LEBOWITZ:  That the study is informative, as we have used the term
in the past, of the information that EPA has from other sources to be
able -- informative in a way that EPA can utilize the information and
that leads to basically what Steve said, that, you know, it comes up
with basically the same calculation you would get from the animal data,
but it's informative because humans are more sensitive.  And in fact,
their calculations are good, are correct, and they can use it that way. 
But it's only informative.  It is not scientifically sound by itself as
a point of departure.  Is that okay?

		DR. KRISHNAN:  I like that.

		DR. FISHER:  I'm sorry.  I'm going to say it wrong, but at least it's
something you an all correct.  It's only informative in terms of how EPA
will look at the animal data, something like that.  But it is not
scientifically sound in and of itself to establish a point of departure.
 Yes?  Not good but --

		MR. CARLEY:  Can I ask a clarifying question here?  I thought I heard
both Drs. Brimijoin and Lebowitz saying that it was potentially
informative if it leads to the same RfD that we would get -- I mean it's
sort of confirmatory if it leads to the same bottom line number.  Is
that a condition of its utility that it lead to the same bottom line
number, or could it be used independently with, for example, the new
animal data that the registrant told us they were proposing to run to
inform the uncertainty factors that we might apply to other data?

		DR. FISHER:  Mike?

		DR. LEBOWITZ:  We haven't seen those data and we don't know.  All we
know is that what we've heard from you two, and we are convinced from
what Nancy McCarroll said, that is informative of the animal data you
already have giving essentially -- confirming essentially the
NOAEL/LOAEL/POD, whatever term you want to use calculated from those
animal data that you already have.

		How it will impact -- and that's the end of my statement, because how
it will impact in terms -- or how it will be informative in terms of any
new animal data or any other studies that they are cognizant of that
might be provided to you, we have no idea.

		DR. KRISHNAN:  Celia --

		DR. FISHER:  Yes.

		DR. KRISHNAN:  -- may I add something?  I think, talking about the
numbers, there are two aspects of clarification that I wanted to
provide, based on my reflection about the numbers that we've been
talking about.  This study -- the document identifies a NOAEL of 0.004
milligram per kilogram, compared to the current reference dose, which is
the same.  That's the reference dose, but this one is a NOAEL.  Okay?

		Normally, some uncertainty factors are applied on this NOAEL.  In this
case, a factor of ten for inter-individual differences for sure, and
possibly another ten for sub-chronic to chronic extrapolation, because
it's only a two-year duration and we want to protect people over a
lifetime.  Then potentially we're talking about a thousand-fold lower
acceptable -- I mean hundred-fold lower acceptable dose compared to the
current reference dose, the current one that's been in existence for ten
years or so now.  I mean, if there is a study that's going to bring the
reference dose down by a factor of a hundred -- not on a comparative
basis; I know you don't derive the RfD, you derive something else -- I
mean, this has got to be a real strong one.  There's got to be some
convincing evidence to bring the existing acceptable level down by a
factor of a hundred.
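
[For readers following along, the arithmetic Dr. Krishnan describes can be sketched as below.  This is a hypothetical illustration only; the 0.004 mg/kg/day NOAEL and the two default tenfold factors are taken from his remarks, and nothing here is an agency calculation.]

```python
# Hypothetical sketch of the uncertainty-factor arithmetic described above.
noael = 0.004          # mg/kg/day NOAEL cited in the discussion
uf_intraspecies = 10   # default factor for inter-individual differences
uf_duration = 10       # possible sub-chronic-to-chronic extrapolation factor

candidate_rfd = noael / (uf_intraspecies * uf_duration)
# candidate_rfd is ~4e-05 mg/kg/day, about 100-fold below a reference
# dose numerically equal to the NOAEL, as stated in the transcript.
```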

		My second comment relates to the fact that the humans appear to be
more sensitive than animals based on comparison of the doses.  Often the
comparisons should also be made on the basis of body surface, not just
always on body weight.  It's a metabolic inhibitor in this case.  You
know, maybe a body surface based comparison sometimes is more relevant. 
If you do a body surface comparison, then you are throwing in a factor
of 10.  That means 1 milligram in human is equal to 10 milligrams in
rat.  That's where you start.  So there's a factor of 10 to begin with. 
It's not 10 times more sensitive, no.  That's just the way the dose is
supposed to be for that to be systemically equal, because we're
considering basal metabolism.

		But here the mechanism is not very well established.  There are a
couple of papers, but it's not very well established.  So I don't think
you can -- it's a very valid argument, but I wouldn't, you know, argue
very strongly on that.  That's a slippery slope.  Because a factor of 10
by body surface, and then just a dose based on these three individuals --
I mean, that can vary by a factor of 8, because the dose is 0.65 to 1.3,
3 to 5 times a day, and body weight anywhere between 40 and 80, and so
on.

		So the difference of a factor of 30 -- I mean, to me, personally, it's
not very convincing.  It is true, I mean, the effects are observed at
the lower doses in humans, which is very normal.  I mean, I expect a
factor of 10.  That's why we use a factor of 10 when we go from the rat
to human all the time.  And you have to go the other way before you
compare.  That's where I wanted to get.
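
[Dr. Krishnan's dose-spread point can be sketched as below.  This is a hypothetical illustration; the dose figures (0.65 to 1.3, 3 to 5 times a day) come from his remarks, while the units (milligrams, kilograms) and the exact body weights are assumptions.]

```python
# Hypothetical sketch of the dose-spread arithmetic described above.
# Units assumed: individual doses in mg, body weights in kg.
dose_per_kg_low  = 0.65 * 3 / 80   # smallest dose, fewest daily doses, heaviest subject
dose_per_kg_high = 1.30 * 5 / 40   # largest dose, most daily doses, lightest subject

spread = dose_per_kg_high / dose_per_kg_low
# spread comes out to roughly 6.7 with these assumed weights; Dr. Krishnan
# rounds the combined variability to a factor of 8.  A body-surface
# comparison would then layer a further ~10-fold rat-to-human adjustment
# on top of this uncertainty.
```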

		DR. FISHER:  So is there anyone, based on the scientific evidence, who
thinks that there is sufficient information here to use in terms of
helping to confirm the relevance of the animal data?  I mean I assume
you're arguing against that, that -- it's like apples and oranges, at
least from this study in terms of --

		DR. KRISHNAN:  No, no.  I haven't, in detail, to tell you the truth,
evaluated all of the animal data like rat, dog and everything else all
--

		DR. FISHER:  No, no.  I'm just saying what we were -- right -- okay.

		DR. BRIMIJOIN:  So what's your bottom line, Kannan?  Is --

		DR. FISHER:  Yes.  What's our bottom --

		DR. BRIMIJOIN:  Is this sufficiently sound to be informative?  I think
everybody is expressing reservations about using it to drive the RfD
radically in either direction.  If the EPA is seeking to basically stay
at the same general numbers that it has and feels that this is in
support of that, I think it could go that far.  But you and others I
have spoken to -- I don't want to -- wouldn't recommend using these data
to drive the exposure number radically in either direction, because of
all the uncertainties that you have so nicely laid out for us.

		DR. KRISHNAN:  As a corroborative study or something to support the
data derived from animals?  Is that what you're --

		DR. BRIMIJOIN:  Yes.

		DR. FISHER:  I think the way --

		DR. BRIMIJOIN:  I guess you could always --

		DR. FISHER:  I think the way that John phrased his question was are we
limiting the -- our recommendation about the value of this study to only
that it may be used to confirm data calculated from animals.  Right? 
Was that what you asked, John?  Was that something like it?

		MR. CARLEY:  Yes, that's close.  The idea was does it -- is it
informative so long as we end up in the same place that we would have
been without it, or does it add something unique in itself that would
change the endpoint --

		DR. FISHER:  So let --

		MR. CARLEY:  Endpoint's the wrong phrase -- change the outcome.

		DR. CHADWICK:  Yes.  I think that we think it's informative but
useless.

		DR. KRISHNAN:  You would list all the caveats.

		DR. FISHER:  Well --

		DR. KRISHNAN:  Listing all the caveats.

		DR. FISHER:  Right.  Taken this way, it does seem rather useless.  I
mean I don't see people arguing -- somebody -- I'm not sure where people
are coming from, so I'm going to ask the question again.  We know it
shouldn't be used to establish limits or anything else.  If it is
helpful in confirming what's been calculated from animal data, is the
study useful?

		DR. LEBOWITZ:  Yes.  If it's informative to allow them to -- well,
what's the word -- have some greater consistency or whatever in staying
with the number they have derived from the animal data, then it's
useful.  In other words, because humans are more sensitive than animals.

		DR. FISHER:  Yes, Lois?

		DR. LEHMAN-MCKEEMAN:  It's late in the day and I think clarity is
becoming a real issue here.  However, let me see what I can do.  I'll
give a little bit of my own perspective first.  I think we looked at
this in general and thought that the data might suggest that there are
some species differences in sensitivity.  However, I agree with Kannan
with respect to dose normalization, I'll call it, and how that might
actually erode what we perceive to be a change in sensitivity.

		Secondarily, I think we're comparing an apple and an orange.  For
example, dog is frequently used in a variety of cardiovascular safety
assessments.  We have no idea whether the blood pressure was lowered in
the dogs who were treated in the six-month study.  So we can't make a
direct comparison on the endpoint for which we have data in humans.  So
even though there is the suspicion that there's a species difference in
sensitivity, when you actually start looking at the data, they're really
not there at this point, I believe, to support that conclusion.

		I am further confounded by the comment that there are additional data
that showed no change in blood pressure in people, and so -- but I don't
know anything about those data so I proclaim -- you know, I declare
complete ignorance of those data, but they -- somehow they undermine a
little bit of my confidence in this particular study with respect to
just reproducibility.

		So I think where we're at is, again, when I first looked at it, my
thought was, okay, I -- I'm not terribly satisfied but I think there's
suspicion that there's a species difference in sensitivity.  Where I've
come out at the end of the day based on comparison of the same variable
across species, assessment of the -- what -- again, I don't know the
right words for it -- I'll just call it dose normalization as opposed to
just comparing to the, you know, administered dose per se and the
possibility that there are other data that would counter these
particular findings, I think those three, in combination, basically fail
to support the conclusion convincingly that there is a real species
difference here.

		DR. FISHER:  I think for the specific charge, the consensus was no. 
It sounds like for the secondary recommendation, at least I'm convinced
by Lois' argument that the recommendation is also no, that it's not a
recommendation because there is -- for all the reasons she's discussed,
there's no way of really knowing whether or not you could compare it to
animals within this particular study.  So I would say the recommendation
is no, no.  Anybody else?  Okay.  Let me -- given this is one of the old
studies, right -- so the only criterion is whether it was fundamentally
unethical at the time it was conducted and whether it would hurt people
-- yes?

		MS. McCARROLL:  May I just make a point of comparison regarding the
Trout study that was mentioned before?  In that study, there was an
individual -- and this was a dispute between the registrant and us here
at EPA -- who had three
separate readings of lower blood pressure.  What was in dispute was the
comment that was made by two of the registrant's representatives that
that individual later turned out to be hypertensive.  That was not in
the report.  We can only go with the data that are available to us in a
published report or submitted to the agency.

		DR. FISHER:  So, great.  But we haven't --

		MS. McCARROLL:  So may I --

		DR. FISHER:  -- seen that.

		MS. McCARROLL:  Okay.

		DR. FISHER:  So I don't think we're -- you know, I don't think our
opinion can change on the basis of a study that the Board has not been
able to evaluate.

		MS. McCARROLL:  Okay.  Yes.  I was just trying to address --

		DR. FISHER:  Right.  I understand --

		MS. McCARROLL:  -- the --

		DR. FISHER:  -- right, and I appreciate that.  I don't think that we
can --

		MS. McCARROLL:  -- uncomfortableness with that study.

		DR. FISHER:  Right.

		MS. McCARROLL:  But if I could just add one little last thing?

		DR. FISHER:  Sure.

		MS. McCARROLL:  A week after the individual was -- had this episode
with the lower blood pressure, he was reported as normal in the text
with a blood pressure of 120/80.  Now if that individual were
hypertensive, it's not something that goes away.  So we just assumed,
because he had a normal blood pressure at that point, that he was not
hypertensive at the start.  But at any rate, that was brought before
another committee here at EPA, and they decided to discount that study
anyway, because it was only one subject.  Thank you.

		DR. FISHER:  Thank you very much, Nancy.  And, you know, I think we
very much appreciate that you're sensitive to and worried about the
risks to humans.  And unfortunately, our goal is to look at the
particular study and say whether the data is sufficient.  But I admire
your concern, and I think we're all very respectful of it and, on behalf
of the public, grateful for it.

		MS. McCARROLL:  Thank you.  It's very kind of you.

		DR. FISHER:  Okay.  So who are the -- does anybody want to make an
argument that this was fundamentally unethical given the whatever was
going on at the time?  Richard?

		DR. SHARP:  I'm the primary.  It's late.  And I can say, no, I don't
want to make that argument.  Just to be very quick -- following up on
Kannan's model earlier, there are two big points here that I think are
important for the ethical issues.  One has to do with whether it had an
acceptable risk to benefit ratio, and whether there was any deliberate
deception that suggested that there was no informed consent provided.

		Risk to benefit ratio actually is fairly favorable in this study for
the reasons that Mr. Carley reviewed earlier, so we won't go through
that.  With regard to evidence that there may have been deliberate
deception, there's one unusual phrase that's in the report that
suggested that the drug was administered without informing the patient
of the nature of the drug or the change to be expected.  That could
reasonably be interpreted as a form of blinding and not necessarily an
intent to deliberately deceive.  And since evolving standards of
informed consent are really quite different now than they would have
been in the mid-1950's, I don't think there's any reason to conclude
that this was fundamentally unethical or radically out of step with the
prevailing ethical standards of the time.

		DR. FISHER:  Yes.  I'm sorry.  Gary, Susan?

		DR. CHADWICK:  I agree with what Richard has just said.  I would like
to make one comment.  John Carley mentioned that he felt that the
self-administration three times a day for two years indicated some sort
of potential for consent, but I would point out that in 1954,
particularly if you had cancer and your doctor gave you a bottle -- even
if it was an unlabeled bottle of drops -- and said take these three times
a day in a teaspoon of water, you would do that regardless, with no
consent or anything other than just the request.  So I don't think that
really is supportive of consent, but I do agree that in 1954, quite
clearly, no consent was probably more the rule than it was the exception.

		DR. FISHER:  Sue?

		DR. FISH:  Nothing to add.

		DR. FISHER:  Anyone else want to add anything about the ethics?  Okay.
 So -- yes, Rich?

		DR. SHARP:  Very quick thing.  We've consistently relied upon the
absence of something formally documented, like the Declaration of
Helsinki, as establishing what professional standards of conduct were in
the area of research.  I would regard that as a mistake, and I think
that there are clearly standards of care that existed prior to that time
that were not codified.  So we're talking here about whether or not, for
example, there may have been a deliberate effort to deceive.  That would
have been ethically inappropriate despite the fact that there was no
professional code of conduct that reiterated that point.

		DR. FISHER:  I think that's because I was thinking Nuremberg and
informed consent.  So I think that's a good point and we should continue
to be thinking about that when we're reviewing these kinds of studies. 
Okay.  So we've answered the second charge -- there's nothing
fundamentally unethical about the research -- and I want to thank you
all for giving me permission now to constrain your discussions.  You may
be sorry -- you've created a monster -- but we shall see.

		Anybody know what time we're eating dinner?  Six?  I know we're eating
at Papa's.  Oh, Jaleo's.  So I guess we don't know --

		DR. LEWIS:  Let me just make a comment about tomorrow.  Tomorrow,
we'll begin at 8:00 o'clock -- 8:00 o'clock tomorrow morning.

		DR. FISHER:  So it sounds to me like a 6:00 o'clock dinner?  Do we
want to eat at 6?  Okay.  So we'll meet at quarter to six and we'll walk
to Jaleo, and tomorrow at the hotel.

		(Whereupon, at 5:10 p.m., the foregoing matter was recessed.)
	NEAL R. GROSS

	COURT REPORTERS AND TRANSCRIBERS

	1323 RHODE ISLAND AVE., N.W.

(202) 234-4433	WASHINGTON, D.C.  20005-3701	www.nealrgross.com