	UNITED STATES OF AMERICA

ENVIRONMENTAL PROTECTION AGENCY

	+ + + + +

HSRB MEETING

	+ + + + +

	WEDNESDAY, OCTOBER 18, 2006

	+ + + + +

	ARLINGTON, VIRGINIA

	+ + + + +

	This transcript was produced from an audio recording provided by SAIC.

                C-O-N-T-E-N-T-S

AGENDA ITEM

Introduction and Identification of Board Members

Welcome

Opening Remarks

Meeting Administrative Procedures

Meeting Process

Update on EPA Follow-up of HSRB Recommendations

EPA Human Studies Research Review Official

CHROMIUM REPEAT OPEN APPLICATION TEST

HSRB Review of Science and Ethics Criteria for Completed Human Exposure Studies

Chromium Repeat Open Application Test

Public Comments

Board Discussion

Science and Ethics of IR3535 Insect Repellent Product Efficacy Proposals

Ethics

Adjourn

	P-R-O-C-E-E-D-I-N-G-S

		DR. FISHER:  (In progress)  - were given, and we are thankful for
whoever put together those disks.  I think it was really great to be
able to read the material so well, and we really appreciate that.

		In addition, based on recommendations of the Board, as chair I had at
least three, maybe four - I can't remember at this point - but at least
three conference calls with EPA to plan this agenda, and I think we both
got a lot out of it.

		So I think this should be a very positive meeting, and we're very
grateful to EPA for being so responsive to our concerns, and also I hope
we've been responsive to learning about their responsibilities and what
they need to do.

		So with that, I'll start going around.  Dr. Krishnan, do you want to
just say your name and we'll start that way.

		DR. KRISHNAN: Kannan Krishnan from the University of Montreal.

		DR. MENIKOFF: Jerry Menikoff, University of Kansas.

		DR. BELLINGER: David Bellinger, Harvard Medical School.

		MR. CHADWICK: Gary Chadwick, University of Rochester.

		DR. SHARP: Richard Sharp, Baylor College of Medicine.

		DR. PHILPOTT: Sean Philpott, Altamarks Biographics Institute
(phonetic).

		DR. CHAMBERS: Jan Chambers, Mississippi State University.

		DR. LEHMAN-MCKEEMAN: Lois Lehman-McKeeman, Bristol-Myers Squibb.

		DR. FENSKE: Richard Fenske, University of Washington. 

		MS. FISH: Sue Fish, Boston University.

		DR. FISHER:  Okay, the next thing I'd like to do is introduce Kevin
Teichman who is the acting deputy assistant administrator for the
science, Office of Research and Development for EPA, and we're very
happy to have him here, and we'd like to hear from him.

	WELCOME

		DR. TEICHMAN: Thank you very much, Celia.

		First of all, good morning, and welcome to this meeting of the EPA
Human Studies Review Board.  It is my pleasure to be here.

		Celia has introduced who I am, the acting deputy assistant - if you can
imagine more qualifiers before a title, let me know - the acting deputy
assistant administrator for science in the Office of Research and
Development.

		At the same time I am the acting chief scientist in the office of the
science adviser where organizationally the HSRB resides.

		Leaving the prepared remarks that Paul made for me for a moment, which
is always threatening, I am also replacing Bill Farland, who I believe
the committee knows very well, and misses almost as much as I do the
wise advice that Bill offered the agency for over 26 years.  Those are
very big shoes to fill even in this temporary capacity.

		I along with my partner here, Jim Jones, the director of the Office of
Pesticide Programs, appreciate all of the hard work that the committee
goes through to prepare for and participate in such meetings, and I
understand both the time requirement that is demanded of you, and all
the effort that goes into reviewing the materials properly and sufficiently
beforehand to make sure the discussion here is as robust as can be, and
takes into account all of the other professional obligations on your
time, so we appreciate what you are able to give to us.

		We also appreciate the effort in preparing the Board reports, and in
fact the prompt nature in which this committee accomplishes that.  You
are to be commended.  Such reports are truly evidence of all the
thoughtful work and expert advice that you put in on behalf of the
agency.

		I also want to welcome at this point Dr. Richard Sharp in particular,
who I think is also with me attending his very first such HSRB meeting,
so we will learn together how things proceed here.  So welcome.

		And for this particular meeting, I know, Madame Chairperson, we have
the Board reviewing both a completed and a proposed study involving
exposure of human subjects to a pesticide active ingredient.

		We will also be presenting for HSRB review the results of a completed
human toxicity study evaluating the allergic contact dermatitis response
of individuals with known sensitivity to hexavalent chromium, and
associated with repeated exposure to a wood treatment solution
containing this chemical.

		The Board will also be asked to review two revised research protocols
for the efficacy of new formulations of the repellent IR-3535 against ticks
and mosquitoes.  And based on all of your recommendations and review of
completed and proposed human studies research, the agency will seek your
advice as well on draft EPA guidance to the public concerning
submission of proposed and completed human studies research for review
by the Board, so your procedures - how to do this - will be very
important to us as well.

		Finally, I understand the Board will be discussing - and I hope most
certainly to be back for this if at all possible - how material claimed
to be confidential business information will be handled by the HSRB.

		This morning we will have the opportunity to introduce Dr. Warren Lux,
who is three people to my left.  Dr. Lux is the new human subjects
research review official within EPA.  His appointment to the office of
science adviser predates mine by about a week, so give him a little bit
of latitude as well in his new position.  But he will provide the high
level leadership and overall agency direction regarding the human
research ethics work that we do within the agency, certainly involving
EPA or EPA-funded research.

		Lastly I wanted to thank everybody within EPA who has worked so hard,
particularly with Paul Lewis sitting to your right, for their work
preparing for the meeting.  I wish to welcome those members of the
public who are here to observe the operations of the Board.

		To conclude, the HSRB process is just one of the examples of the
agency supporting an open and transparent process that ensures sound
science informs its environmental decision making.

		That is what's critical here, and your Board is one of the most
instrumental examples of how we accomplish that.

		Thank you for participating at today's meeting, and I look forward to
a very successful meeting.

		Thank you very much, Dr. Fisher.

		DR. FISHER: Thank you very much, Dr. Teichman, and welcome.

		Mr. Jones.

	OPENING REMARKS

		MR. JONES: I'd like to add my good morning and welcome to the Board
members this morning.  It's a pleasure to have all of you back here in
Crystal City in the Potomac Yards facility.  We had a very interesting
summer in the pesticides program, largely - not largely, but in no small
part - due to the work of this Board in the spring and summer of this
year.  We were able to meet a major statutory deadline that Congress
gave to us about ten years ago, and I want to thank all of you for all
of that hard work.  It was really a busy spring for all of you as we had
four meetings in a very short period of time, and I know that that
stretched all of you, and it stretched all of us as well.

		I also am very cognizant of the last meeting which was a pretty bumpy
one, certainly for us in the pesticides program.  But I think as well,
the preparation for that was a little bumpy for all of you.

		I believe that we took the feedback that we got from the Board very
seriously, and Dr. Fisher, I appreciate your comments at the beginning,
which I assume are somewhat related to the responsiveness that we'd had
to the issues, the very relevant important issues that the Board raised
at the last meeting related to how we prepare for these meetings, so
that you can be best prepared, and hopefully we brought that to a better
place, and hopefully we've demonstrated our responsiveness around those
issues.

		So as issues arise, which they are bound to, in the course and life of
this Board, related to how we get ready so that you can get ready,
please, we're all ears.  We want to know to the extent that we can make
your job easier.

		We also had challenges related to an issue we first brought to the
Board at the last meeting related to protocols.  It was kind of
interesting that we at EPA in the pesticides program had far fewer
issues around what we thought were the most controversial type of
studies - the intentional dosing toxicity studies - procedurally those
went quite well.  When we got to the protocol it was a little bumpier. 
Again we spent a lot of time over the summer trying to figure out with
Dr. Fisher how we could tee those issues up more effectively, and we're
back today with a protocol that you have seen earlier, and hopefully
we'll be in a much better place not only with that protocol but more
generically as it relates to protocols.

		And as Dr. Teichman mentioned a minute ago, we're going to be talking
today really for the first time as a Board about a tricky issue related
to confidential business information, and hopefully our dialogue today
will bring that to a place that will allow us to succeed with the
statutes that we have; will allow you to succeed in giving advice on
issues associated with studies where confidential business information
may be relevant.  And I'm confident we can bring that to a place that
allows for the process to work very effectively.

		So some interesting issues that we've got today, as Dr. Teichman mentioned.  And
I certainly won't go over reviewing that agenda with all of you.

		But again I really just wanted to say, hopefully we've been responsive
as it relates to many of the issues you raised at our fourth meeting
last summer, and that will lead to a more effective operation of this
Board which is really what's in our interest.

		Thank you.

		DR. FISHER: Thank you very much, Mr. Jones.

		Dr. Lewis.

	MEETING ADMINISTRATIVE PROCEDURES

		DR. LEWIS: Thank you, Dr. Fisher.

		I am Paul Lewis, and I serve as the designated federal officer to the
EPA Human Studies Review Board.

		I want to thank Dr. Fisher and members of the Board for attending this
meeting, and we appreciate the time and diligent work of Board members
preparing for this meeting, taking into account your busy schedules and
other professional obligations.

		And we also would like to welcome the public to observe this meeting,
and my EPA colleagues for all the hard work preparing for the meeting
and deliberations we'll be having in the next couple of days.

		I'd like to also welcome Dr. Richard Sharp as a new member of the HSRB,
and I am looking forward to working with Dr. Sharp on the first analysis
of issues being brought before the Board today and in subsequent
meetings.

		By way of background the HSRB is a federal advisory committee that
provides advice, information and recommendations on issues related to the
scientific and ethical aspects of human subjects research.

		The HSRB only provides advice and recommendations to the EPA;
decision-making and implementation authority remains with the agency.

		As the designated federal officer at this meeting I serve as liaison
between the Board and the agency.  I am also responsible for ensuring
that the provisions of the Federal Advisory Committee Act, commonly known
as FACA, are met.  These include ensuring that this committee has a
charter, is governed by uniform procedures, provides advice only, and is
open for public viewing.

		HSRB meetings follow all FACA requirements.  These include open
meetings, timely notice of meetings in the Federal Register, and docket
availability via the ORD docket.  As a matter of fact many of our
documents are also available through the HSRB website.

		As the designated federal officer for the Board, a critical
responsibility is to work with appropriate agency officials to ensure
that all appropriate ethics regulations are satisfied.

		In that capacity Board members are briefed on the provisions of federal
conflict of interest laws.

		In addition each participant has filed a standard government financial
disclosure report, and I, along with our deputy ethics officer in the EPA
Office of the Science Adviser, in consultation with the Office of General
Counsel, have reviewed these reports to ensure all ethics requirements
are met.

		The Board will be reviewing challenging issues over the next several
days with a full agenda.  Meeting times are approximate, and thus we may
not keep to the exact times noted, due to Board discussions,
deliberations, and public comments.  We strive to ensure adequate time
for agency presentations, public comments to be presented, and Board
deliberations.

		Board presenters, Board members, and public commenters, please
identify yourself and speak into the microphones provided.  The meeting
is being recorded, and copies of presentation materials and public
comments, written public comments, will be available at
regulations dot gov.

		In terms of the public comment process, for members of the public
requesting time to make a public comment, your remarks are limited to
five minutes.

		For those that have not preregistered, please notify either myself or
another member of the HSRB staff who will be sitting at a table outside
this room if you are interested in making a public comment.

		As I mentioned previously there is a public docket to this meeting,
and all background materials, questions posed to the Board by the
agency, and other relevant documents are available on the docket,
regulations dot gov.

		In addition several supporting materials are available on the HSRB
website, and our agenda actually lists the contact information for that
material.

		The Board will prepare a report from this meeting: a response to the
questions posed by the agency, based on their review of the materials
presented and their subsequent analysis.  And the agency anticipates
announcing Board review and subsequent approval of its report from this
meeting by Federal Register notice later on.

		In closing I'd like to thank the Board for meeting over the next two
days, and I'm looking forward to both a challenging and interesting
discussion over the next several days.

		Thank you, Dr. Fisher.

		DR. FISHER: Thank you very much, Dr. Lewis, and also thank you for all
your hard work, and all our daily conversations and emails, and becoming
a member of the family it seems.  Everybody in my family knows, oh Paul
is on the phone.

		Okay before I continue, we have three additional Board members who
just came, so just - I won't ask you to identify yourself, but I will
identify Dr. William Brimijoin is here, Dr. Michael Lebowitz, and Dr.
Suzanne Fitzpatrick.  So we welcome you.

	MEETING PROCESS

		DR. FISHER: Okay, and I'm just going to review a little bit about our
process, and then we're going to move on.

		So basically I think Paul reminded you of what our responsibilities
are; that we take them very seriously; that we are here to comment on
the scientific and ethical aspects of research proposals as well as
completed research, and to make recommendations to EPA and help them
when they ask us on ways to strengthen their program.

		Our charter says that our objectives and the scope of the activities
is that we provide the advice, information, and recommendations on
issues related to scientific and ethical aspects of human subjects
research, and we see our major objective as to provide advice on
research proposals and protocols, reports of completed research with
human subjects, and how to strengthen EPA's programs.

		One of the tasks that we have taken on over the last number of
meetings is to help clarify and develop the criteria that the board uses
to evaluate science and ethics on different types of completed research
and protocols.  And we've done this so that we will have consistency and
fairness in the way that we approach each of the protocols as we move
on.

		And I think as we will see today, we greatly appreciate especially
what John Carley has done in terms of integrating some of the ways we're
looking at information into guidance.

		The way - the board process for those who haven't been here before is
that Paul and I assign primary and secondary discussants for each of
these completed studies and protocols that the board receives.

		The way the process works is that EPA first makes the presentation on the
science and the ethics; the board then asks questions if the board has
questions about those.  

		Then we have public comments, and, as Paul said, the public comments
are limited to five minutes, and then the board will ask questions if
questions arise from the public commenters.

		We then go on to the primary and secondary reviewers who provide their
initial review.  We discuss that, or ask questions, and then move on to
see if we have a conclusion of the board.

		And as it says up there, if we note that there are significant issues
that were not posed by EPA, we will also do that in our role as
advisers.

		In terms of the board process, the science evaluation always precedes
the ethics evaluation; the rationale behind that is that to be of
benefit a study must be scientifically valid, and in addition, in order
to understand the risks, which are a very important part of the
risk-benefit analysis, we need to understand the science.

		So for the risk-benefit analysis in ethics, especially in studies that
are not in any way assumed to have direct benefits to the subject, the
weighing of risks against benefits depends on the societal benefit and
scientific validity of the study, and the risks obviously come from a
science evaluation of what risks are posed by different chemicals.

		So that's our process.  I'm now going to turn to Bill Jordan to give
us an update on HSRB recommendations.

		I believe we had an update last time.  And the update I think is
critically important, because it lets us know as a board what we are
doing that has been helpful, or perhaps unhelpful, to EPA, when we see
how our recommendations have been integrated into their decisions about
- and I think it's very educative for the public, but it's educative for
us as well.  Because we also want to ensure that our process is
something that is going to be useful in terms of our advice.

		So Mr. Jordan.

	UPDATE ON EPA FOLLOW-UP OF HSRB RECOMMENDATIONS

		MR. JORDAN: Thank you, Dr. Fisher. 

		I'll say a few words about how much I have valued the opportunity to
work with Paul Lewis and you on preparation for this meeting, and thank
you for your significant investment of time and energy in helping us to
figure out how we can have the most effective meeting possible and deal
with important topics, and I'm looking forward to having a successful
meeting, so thank you.

		That was also to allow the presentation to get up on the screen.  And I
want to thank our colleagues in the health effects division who have
helped prepare the materials.  They deserve the credit for making sure
that we are ready, and that you all have the materials you need to be ready.

		At the last meeting the board tackled four different topics, the first
of which was a completed intentional dosing study using the pesticide
active ingredient chloropicrin (phonetic).  And the board concluded that
that study of the acute inhalation toxicity of the material provided
useful scientific information for risk assessment, and met the
applicable ethical standard in EPA's regulation, and therefore, advised
that you thought we could use that in our risk assessments, which we are
doing; we are using the results to derive an acute reference dose that
will be the basis for a revised risk assessment that we expect to issue
later this fall.  And we will move ahead with chloropicrin through our
regulatory process, and look at it in conjunction with a number of other
pesticide active ingredients that are used as fumigants for soil pests
and other fumigant uses.

		The second topic that you gave us advice on was a set of guidelines,
testing guidelines, dealing with insect repellent efficacy.  These draft
guidelines were put forward for the first time in this version, this
format, for the board's review, and we asked a number of
questions about how the guidelines should be written.  And you gave us
very thoughtful and thorough answers to those questions, which we will
incorporate when we work on the next draft of the insect repellent
efficacy guidelines.

		In the interim, most of our staff have been working on reviewing
protocols that have come in to us, and we hope to benefit from the
discussion of the two IR-3535 protocols that are on the agenda later
today, and use that discussion along with what we heard from you in the
report about the guidelines to come up with another draft version of the
guidelines.

		As I mentioned, last meeting you also discussed the two protocols for
testing the insect repellent IR-3535 that were submitted by Dr. Scott
Carol (phonetic).  You concluded that both of the protocols as they came
before you needed revision in order to address both scientific and
ethical issues.

		The investigator, Dr. Carol, made extensive revisions to the proposed
protocols and we have been working with Dr. Carol, as you'll hear in
more detail, later today, to prepare a version of the protocols that we
think are scientifically sound, and meet applicable ethical standards,
and because we think they have improved substantially and are ready
again for review, we're bringing them back to the board for your further
consideration.

		The last topic that came before the board was a set of five proposed
protocols that were directed at measuring exposure to pesticides of
people who are engaged in various activities associated with the
mixing/loading application of pesticide products.

		These five protocols dealt with different scenarios for that - those
activities, and when the board looked at the materials before them, they
- you all said that they needed revision to address a range of
scientific and ethical issues.

		We have thought long and hard about that advice, and in particular the
scientific issues that you identified are similar to and overlap with
some of the issues that we've had as we worked with the whole area of
exposure evaluation and assessment.  And we are planning to take a
number of those scientific issues to a meeting of our FIFRA
scientific advisory panel in January.  We will ask the
scientific advisory panel for advice on the science issues, and use
their advice, working with the agricultural handler exposure task force
to develop revised protocols that will, we hope, address the scientific
issues in a way that gives us higher confidence that materials, the data
produced by such research will be scientifically sound and useful to
answer the kinds of regulatory questions that we as an agency need to
address as we regulate pesticides.

		It's not exactly clear how all of that will play out, but our best
prediction is that the agricultural handler exposure task force will
work with us to have revised protocols available for submission in
either spring or summer of next year, so that they can go in the field
in the following year to actually execute the research.

		So that's where we stand on the four topics that were raised at the
last meeting.

		DR. FISHER: Thank you very much.

		I was wondering, Mr. Jordan, it might be helpful to have a public
clarification on the distinction between the roles of the scientific
advisory panel and the - and our board, because I think there had been
some statements concerning the need for consistency which may be in some
sense misinformed.  So it's my understanding that the SAP is going to be
looking at general issues, how best to perform certain types of
scientific studies, whereas the role of our board is to look at specific
protocols that are presented to us.

		In addition we are both independent boards that are not necessarily
there to - our mission is not to come out with the same recommendation.

		MR. JORDAN: Dr. Fisher, your summary of the roles of the HSRB and the
Scientific Advisory Panel is accurate and hits the points exactly.

		I'd like to elaborate on a couple of ideas.  One is that the law that
EPA uses to regulate pesticides, the Federal Insecticide, Fungicide and
Rodenticide Act, directs EPA to work with the FIFRA scientific advisory
panel as the main source of scientific advice on issues that arise in
the course of our regulatory program.

		So when we are issuing regulations or taking regulatory actions
involving cancellation or suspension, we are required to go to the
panel.  And as a matter of practice, we have found that the panel has
provided us very valuable advice over the years on a range of scientific
issues.

		So we have worked with the SAP to help sort out specific issues
relating to particular chemicals, and general issues relating to broad
scientific questions that arise in the course of our regulatory
activities.

		Some of the folks who are members of the HSRB have also actively
served on the SAP and Dr. Brimijoin, Dr. Chambers, Dr. Fenske, I may be
missing some others.  But Paul Lewis cut his teeth in the advisory
committee process, working with the SAP.  So there are a lot of folks
who are familiar with the SAP and bring insights to the HSRB about how
it operates.

		The HSRB, as you reviewed in your opening comments, has a very
specific charge, which includes not only consideration of ethical issues
but also scientific issues, because bad science is bad ethics.  And so
there is the possibility that there will be some subject matter that
overlaps between the SAP and the HSRB, in fact, we've already seen that
with regard to the review of the Nethercutt (phonetic) study that is
presented first to the SAP and then to this board.  It also came up in
connection with the insect repellent efficacy guidelines.

		But my hope and my expectation is that both boards, when they address
an issue, will look at the science, and if there is a core of sound
science, both boards will point to it.  And we are confident that, while
the focus is slightly different - broader science issues in the SAP, and
specific protocols and studies here in the HSRB - we will not have
either a lot of duplicative work or any conflicts between those two
bodies in terms of the kind of advice that we get.

		DR. FISHER: Okay, thank you very much.

		Dr. Lux, would you like to introduce yourself?

	EPA HUMAN STUDIES RESEARCH REVIEW OFFICIAL

		DR. LUX: Yes, thank you.  I guess we're going to go back to the
introduction mode for a moment.

		I'm newly arrived here by way - Paul asked me to say a few words about
who I am and where I came from, and I will try to do that very briefly.

		I am a neurologist.  My clinical focus over the years has been on the
treatment of persons with frontal lobe disorders, particularly traumatic
brain injury.  So people with cognitive and behavioral disorders,
particularly of the sort that involve their own self regulation,
cognitive and behavioral self regulation.

		My scholarly interest has been in understanding the cognitive
capacities that underlie human agency, and therefore, that underlie
effective execution of autonomy, of personal autonomy, and that's how I
got into bioethics, and then went on and had formal training in
bioethics as well as in clinical medicine.

		So that's where I come from, that's how I come to this position.

		Here as you all know the human research, human subjects research,
with which EPA is involved is really quite extensive, and it covers a
lot of different domains.  We are engaged in human subjects research in
house.  We conduct human subjects research at other institutions.  We
support and fund human subjects research by other institutions.  And we
utilize human subjects research, done by, supported by, funded by
others, in the regulatory process.

		And my task is to further develop the ethical infrastructure covering
that whole array of things.  And in that regard my office is in the
office of science adviser specifically so that it does have a reach
across all of the different activities of the agency.

		Your role as I understand it is to review these third party studies. 
Your role by law is to review - you are required to review these third
party studies.  But your charter is much broader, and allows you to
review studies across the various domains in which the EPA is involved.

		And as we further develop the infrastructure for that, I expect from
time to time to come to you to seek your advice on some of those studies
as well, and I will very much appreciate having it, and I look forward
to working with you on it.

		DR. FISHER: Thank you very much.  We look forward to it as well.

		Okay, I guess we're now going to turn to the chromium repeat open
application test, and before we do, I'm going to make a presentation,
so continuing my presentation.

	HSRB REVIEW OF SCIENCE AND ETHICS CRITERIA FOR

	COMPLETED HUMAN EXPOSURE STUDIES

		DR. FISHER: As I mentioned before, the board has developed criteria
that we use to apply to different types of studies, and I think it's
always helpful if we review what those criteria are prior to our
discussions.

		Okay, so the first question that we ask is when we look at the
scientific aspects of research.  The broad and general issues are
whether or not the research design and implementation meet scientific
standards, and whether the data generated by the protocol have useful
implications for the agency's weight of the evidence determination, or
when applicable, other aspects of the risk assessment.

		In terms of specific criteria that we have come up with and have been
put in our public report, and I think one of the reasons that I'm also
presenting this, and I think that EPA has done a great job of picking up
on this, is that these criteria I think will be helpful later on to
investigators when they submit protocols or previous studies; that it's
always helpful to let the board know how these criteria were met,
because this is what we're going to be looking at.

		So is a valid scientific question addressed by the study, whether the
purpose was clearly defined; whether there are specific objectives and
hypotheses; can the study as described achieve these objectives, and
test the hypothesis.

		Next slide.  We also look at population characteristics: whether or
not there is a justification for the selection of the specific target
population; whether or not the sample size is appropriate, and how that
number was derived; can the findings of the study be generalized beyond
the study sample, and was that the purpose of the study?  Are the
participants representative of the population of concern?  

		Are the inclusion and exclusion criteria appropriate both from a
scientific and an ethical perspective, and whether the sample is a
vulnerable group.

		We also look at the procedures.  What is the basis for the proposed
dose levels and formulations in the study?  Will the measurements be
accurate and reliable?  Are the measurements appropriate to the question
being asked?  Are adequate quality assurance procedures described?  And
can the data be statistically analyzed?

		We also look at the social value as well as participant risks.  As I
mentioned before, the risk-benefit is a balance that we need to look at
from an ethics perspective.  Part of that value is going to be whether
or not there is existing data adequate to answer the scientific
question, because remember, our responsibility is to judge whether or
not it's ethical to pose any risks to a human subject based on the
benefits of the scientific question.

		Are new studies involving human subjects necessary to answer the
question?  What are the potential benefits of the study?  What is the
likelihood that the benefits would be realized?  What are the risks of
serious or irreversible harm, and is there a plan for allocating
individuals to treatment if in fact there is a negative reaction to the
experimental treatment?

		Next.  Then I think it's appropriate for the chromium study, which is
a human dosing study, that we go over what the board developed as
criteria for human dosing.  Next slide.

		Some of this overlaps with our general criteria.  Is the scientific
question worthwhile?  Are human subjects necessary?  Is potential risk
serious or irreversible.

		We look at the dose selection.  Is it sufficient to test the question?
 I think it was the board's conclusion that in most instances a single
dose is not sufficient to answer a question.  And is the dose selection
based on appropriate data, and is that data provided, preclinical,
previous studies, et cetera.  Next slide.

		End point selection is something we also look at.  Is it consistent
with the aim of the study?  Is it appropriate to answer the question
about human responses, sensitivity, accuracy, validity, replicability.

		Is the end point measured accurately and reliably with good quality
assurance?

		As we talked about in the general criteria, are the characteristics of
the population generalizable to the question that EPA has asked; whether
the data are reasonable to address it; and are there appropriate
inclusion/exclusion criteria.

		Next slide.  We look at the method, once again, whether the sample
size is sufficient.  Is the selection of control and experimental groups
appropriate?  Is the staging of dose intervals, dose amounts, and the
type of exposures sufficient to answer the question?  Is there quality
assurance for observations, instruments, and data?

		We also look at the statistical analyses, in particular whether or not
the data can be analyzed, and is the statistical method appropriate to
answer the question.

		Then we also looked at criteria for single dose level studies.  So we
can move on - the chromium study is a repeated daily dose study, not a
single dose, right?  Okay, so let's move on.

		Now we'll get to the ethics evaluation.  Of course in any already
completed study we ask whether or not the study failed to fully meet
specific ethical standards prevalent at the time the research was
conducted.  Examples are FIFRA, the Declaration of Helsinki, the 1991
Common Rule, and for those studies that were conducted and completed
after April 7th, 2006, it would be the EPA Common Rule.

		Okay, if the study did not fully meet ethical standards prevalent at
the time, then we ask whether or not the conduct of the study was
fundamentally unethical; whether there is clear and convincing evidence
that the research was intended to seriously harm participants, or failed
to obtain informed consent.

		And whether or not the conduct of the study was significantly
deficient relative to the ethical standards prevailing at the time.  Is
there clear and convincing evidence that the deficiencies identified
could have resulted in serious harm based on knowledge available at the
time the study was conducted or the information provided to participants
could seriously impair informed consent.

		Additional ethical criteria that we are using in order to make these
determinations: are risks justified by those benefits, I've spoken about
that before.  Are risks necessarily and sufficiently minimized?  Are the
subjects equitably selected?  Are there sufficient safeguards against
coercion?  Are there procedures for ensuring subject safety including
adequate monitoring?  And has the PI provided sufficient and appropriate
documentation of the IRB review.

		Next slide.  We also look carefully at the informed consent, whether
or not there has been documentation that consent has been obtained,
whether the consent included appropriate information of the identity of
the pesticide and mode of action; whether there was an adequate
discussion of risks and of alternatives to research participation;
whether there is clear information about participation and
compensation, if relevant; whether there was a clear statement of who is
financially responsible for study-related injury; and whether there was
clear communication of a voluntary nature of participation and how one
can withdraw from the study.

		And I think that's all for the chromium.

		Okay, so now we are going to turn to Dr. Liccione - is that how you
pronounce your name?  Liccione, I'm sorry - and Mr. Carley are going to
give us an EPA overview of this study.

	CHROMIUM REPEAT OPEN APPLICATION TEST

		DR. LICCIONE: I would begin by thanking the board for their time and
consideration.

		The topic I will be talking about today concerns dermal sensitization
testing of chromium in a human study.

		Next slide.  The following outline of the topics that are going to be
presented.

		First, we're going to give you a brief introduction on allergic
contact dermatitis and what it means, and then some information on
regulatory framework and a little about the science background.

		Then we'll talk about the previous science advisory panel and human
studies review board, review of chromium, and then finally we will focus
on the topic today, the Procter (phonetic), which is the human study for
evaluation.

		Next slide.  And this slide defines more specifically the form of
chromium that will be our focus, and that is, chromium in treated wood. 
Hexavalent chromium is a component of a pesticide product for use as a
wood preservative.  And in the particular form that was studied in the
Procter study is called acid copper chromate, which is a mixture of
copper oxide and chromium trioxide.  It also is known by the common name
Copper Shield.

		And the agency has concerns that the general public may experience
dermal exposure to chromium remaining on wood treated with ACC.  And
there is also concern for potential to cause allergic contact
dermatitis, which I will sometimes refer to as ACD, which is known to be
a sensitive dermal endpoint associated with sensitizers such as
chromium.

		Next slide.  What kind of information can be derived from dermal
sensitization testing?  The agency perspective is that information
derived from tests for skin sensitization serves to identify the
possible hazard to a population repeatedly exposed to a test substance.

		Next slide.  What do we mean by treated article as it pertains to a
pesticide?  A treated article is a registered pesticide that is
incorporated into an article to protect the integrity of the article or
substance itself.  Treated articles such as treated wood do not bear
pesticide labels nor other information to inform the public about any
potential hazards including dermal sensitization.

		Next slide.  Okay, what is ACD?  It is in short a delayed
hypersensitivity reaction that can be characterized by two phases: an
induction phase which involves exposure of sufficient magnitude or
duration to activate the specific immune mechanisms resulting in the
acquisition of sensitization.

		The second phase is often called the elicitation or challenge phase,
whereby response is induced in sensitized individuals upon exposure to
the allergen produced by the relevant route.

		Next slide.  When talking about dermal sensitization testing it is
very important to consider irritation responses which may be difficult
at times to distinguish from ACD.  There are some similarities, but
there are very important differences.  With irritation you may get a
little edema, but edema, especially associated with other signs, is most
often found with ACD.  Erythema may be present in both situations. 
Itching, which is not on the slide, may also be present, although it
tends to be more homogeneous with ACD.

		In regard to a reaction time, in general for irritation, tendency for
a more immediate response, but note that sometimes there is a cumulative
irritant effect.

		Reaction time with ACD generally requires some time, usually 24 to 72
hours.  And with irritation you can get a response on the first
exposure, whereas with ACD you generally don't see a response after one
exposure.

		A particularly big difference is that ACD is an immunological process
that involves memory mediated by T lymphocytes, whereas irritation does
not.

		Next slide.  Another important point to consider about ACD is that it
is recognized as a threshold phenomenon.  Thresholds are largely
determined by the potency of the allergen, and the induction and
elicitation thresholds can vary among individuals.

		An important point to consider is that induction thresholds are
considered to be higher than what is seen with the elicitation stage.

		Dose-response relationships have been documented for both of these
phases.  

		Next slide.  Other important points: induction may occur after a
single exposure to a large area of skin, or as a consequence of repeated
skin exposures.

		Some data also suggest that a sensitizing potential may increase with
the repeated exposures. 

		Next slide.  As I have mentioned, chromium and the dermal
sensitization issues associated with chromium exposure have previously
gone both to the science advisory panel and also to the Human Studies
Review Board.

		In May of 2004 the scientific issues in terms of developing the
foundation of a scientifically sound approach to quantitative risk
assessment of dermal sensitization for pesticide chemicals was presented
to the science advisory panel.

		These included chemicals incorporated in other materials, in other
words, the treated articles.  And hexavalent chromium was presented as a
case study.

		Next slide.  The board took a look.  The panel actually looked at all
the available data; the animal data, largely the local lymph node assay,
and human patch test data were considered.

		And the science advisory panel noted numerous limitations of the use
of the animal data, and the deficiencies in many of the human studies.

		Next slide.  However, the science advisory panel did identify one
particular human study as the best available, and this is the Nethercutt
study.  And the panel identified the critical dose from this study to be
89 nanograms per square centimeter.  And this represents what is called
a MET 10 percent, or the 10 percent minimum elicitation threshold.

		This value was considered a conservative level largely because it
involved occluded patch testing, for 48 hours, which tends to both
maximize absorption and make the skin more reactive.

		The panel however recommended that an open application test with
repeated daily exposures would be more appropriate for risk assessment
purposes when dealing with treated articles.  And the repeated open
application test is one good example of the kind of protocol that the
SAP was envisioning.

		Next slide.  Just to talk a little bit about the Nethercutt study.  As
I mentioned it was also reviewed by the HSRB not too long ago, in May
2006.  And the board concluded that the study was sufficiently sound to
be used to estimate a safe level of dermal exposure to chromium.

		The board also concluded that the study was properly designed; that it
was well conducted; and that it employed appropriate scientific and
clinical methods to determine the MET for dermal sensitization.

		The final conclusion, and an important one, was that the HSRB
concluded that the MET 10 percent was a reasonable point of departure to
be used in terms of risk assessment.

		Next slide.  I'd like now to turn our attention to the main focus of
today, and that is the Procter study that deals with the human testing
of chromium.

		Next slide.  As I mentioned the science advisory panel recommended a
repeated open application test, and that's what ROAT stands for,
sometimes I will refer to it as the ROAT protocol or ROAT study.

		And the purpose of the study was to develop a 10 percent MET level for
chromium STP within ACC wood treatment solution.

		They also looked at potassium dichromate, utilizing this same ROAT
protocol.  However, I want to mention that our concern here today is
with ACC, not potassium dichromate, so I will not be talking too much
about potassium dichromate.

		For ACC the specific purpose then was to determine a safe level of
repeated dermal exposure.  Our focus as I mentioned will be ACC today,
and in this study, 148 possible volunteers were contacted; 88 were found
to be eligible for this study; ultimately a total chromium sensitized
population of 60 actually took part in the study, consisting of both men
and women; there were 10 control participants, all female; and all 60
chromium sensitized study participants were actually confirmed to be
chromium sensitive, based on a patch test single exposure, and that was
performed by the sponsoring clinic.

		Next slide.  There were both inclusion and exclusion criteria, and I
think the exclusion criteria show their importance.  Any person taking
immunosuppressive or steroidal medications was not included.  Anyone
with current exposure to chromium or copper in the workplace or home, or
presenting with symptoms of active dermatitis, fissures, or lesions of
the skin, was precluded.  Also excluded: any serious systemic disease or
medical condition; pregnancy or anyone attempting pregnancy; current
breast feeding; any illicit drug or alcohol use; and anyone that was
planning to leave the area for two days or more.

		Written consent was provided to participate in this study.  Medical
and occupational history questionnaires were filled out.  Information on
dermatitis was available for all subjects.  Next slide.

		The concentrations that were selected for the ROAT study were four
doses, plus a control that consisted of a copper chlorate (phonetic)
solution.  The low dose was 90, then there was a 250, 750, and then the
high dose was the 2,500 nanograms per square centimeter.  And note the
importance that the dose metric that is applicable for these kinds of
studies, and that dermatological researchers consider the best, is mass
per unit area.

		The lowest concentration that was selected, the 90, represented the 10
percent MET value from the Nethercutt study.

		According to the study authors, the highest concentration was
considered an extreme upper bound in terms of exposure to chromium six
from exposure to ACC-treated wood.

		Next slide.  Patients were exposed to chromium briefly, and at the
same time, on different arms, to both the ACC and the potassium
dichromate.  The four concentrations and the controls were applied in a
10 microliter volume to five one-square-centimeter areas on the forearm
utilizing this transparent tape, and the diagrams and description of the
methodology were very carefully portrayed in the document.

		The concentrations for the potassium dichromate were applied on the
left forearm, whereas the ACC was applied on the flexor area of the
right forearm.

		Exposure then was for six hours per day for a 10-day application with
a weekend period off.

		Next slide.  Okay, the grading of the allergic and irritant responses
was performed by Dr. Joseph Fowler (phonetic), and I just want to
comment briefly about Dr. Fowler's credentials.  He is a very
experienced dermatologist, with considerable expertise with chromium
dermal sensitization.

		He was the principal investigator and the diagnosing physician for
this study.  So that was his role.  He's licensed to practice in
Kentucky and in Indiana.  He is the current president of the North
American Contact Dermatitis Group.  I will be referring to this group as
the NACDG.

		He was the past president of the American Contact Dermatitis Society. 
The study was actually conducted in his private practice at the
Louisville, Kentucky clinic, and study participants were largely
recruited from the patient population from his private practice.

		All skin responses were graded for the usual responses: erythema,
vesiculation (phonetic), papules, scaling, itching, according to the
following scale.

		So the scale here I won't review and go into all of this, but this
just shows you that the criteria used were clearly defined.  And a very
important point is that the criteria that were utilized are very similar
to the criteria that physicians use in the NACDG.

		Next slide.  The test sites were examined daily for the presence of
any irritant and/or allergic responses.  If the response was judged to
be allergic, dosing was discontinued, but only for the dose causing the
allergic dermatitis response.

		In contrast, if the response was irritancy, or if it was of uncertain
designation, dosing was continued, and there was further monitoring for
morphology, change in severity, and the persistency of the effect to try
to determine the classification.

		Next slide.  The ROAT results were treated in two different
categories, and were modeled with respect to the calculation of the
MET.  One was to consider only allergic responses, and this is what is
referred to as scenario one in the study.  The assumption behind this is
that participants who had allergic responses to lower doses were also
considered to be allergic to all higher doses, and this actually did
occur in at least one participant.

		The other scenario that was modeled and interpreted for results is
allergic plus irritant responses.  The assumption for this scenario two
is that once again participants who had allergic responses to the lower
doses were also considered allergic to all the other doses, higher
doses.

		And all irritant responses were assumed to be allergic responses, with
the exception that when only itching was reported without any visual
signs such as erythema (phonetic), papule formation, the observation was
not considered either an irritant or an allergic response.

		Now both scenario results were also normalized to the chromium
sensitized U.S. population, based on what is known about the population
in the NACDG database, and utilizing the more current evaluations for
1998 to 2002.

		This was based on the observation of the disproportionate number of
the most severe respondents in the current ROAT study.  The
normalization briefly was done by comparing the number of participants
who had a plus one, a plus two, or a plus three patch test reaction to
the proportion of individuals with these exact patch test grades in the
NACDG database of the patch test grade, for all individuals that were
actually patch tested by those physicians from those years.

		The data from the patch test normalized population then were used for
estimating the MET percent values that would be representative of
general chromium sensitized populations in the United States. 

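The proportional normalization described above can be sketched in a few lines.  This is a minimal illustration of the idea only; the grade counts and database proportions below are invented for the sketch, not figures from the Procter study or the NACDG database.

```python
# Illustrative sketch of normalizing ROAT participants by patch-test grade.
# All numbers are hypothetical, not taken from the study or the NACDG database.

# Observed patch-test grade counts among 60 study participants (hypothetical).
study_counts = {"+1": 20, "+2": 24, "+3": 16}

# Proportion of each grade among chromium-positive patch tests in a
# reference database (hypothetical values that sum to 1).
reference_props = {"+1": 0.700, "+2": 0.223, "+3": 0.077}

n_total = sum(study_counts.values())

# Weight each participant so the weighted grade distribution matches the
# reference proportions: weight = reference share / observed share.
weights = {g: reference_props[g] / (study_counts[g] / n_total)
           for g in study_counts}

# Weighted ("normalized") count for each grade.
normalized_counts = {g: study_counts[g] * weights[g] for g in study_counts}
```

Any dose-response proportion would then be computed with these weights, so participants from grades over-represented in the study sample count for proportionately less.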
		And the next slide shows you the results.  The METs were actually
determined by the benchmark dose modeling that the agency currently
uses, so what you are seeing there is the BMD 10.  For scenario one -
remember, this is just the allergic scenario - the value was 270
nanograms per centimeter squared, about three times higher than the
Nethercutt MET 10 percent.

		So for scenario one, after going through the normalization procedure,
we get a higher MET 10 percent value, as one would expect after
normalization. 

		For scenario two - once again, scenario two is both allergic and
irritant responses combined, with the irritant responses assumed to be
allergic - we get a lower MET value, roughly about where Nethercutt
comes in.  And then the normalized one is also higher.

		The next slide shows the lower confidence limits, in other words, the
BMD 10 percent lower bound, the 95 percent lower confidence limits, for
each of the scenarios.  So they utilized our own benchmark dose
methodology.

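The benchmark dose calculation mentioned here can be illustrated crudely: given dose-response points, find the dose at which the response fraction reaches 10 percent (the BMD 10).  The sketch below uses simple log-linear interpolation and invented numbers; the agency's actual benchmark dose methodology fits parametric models and also computes the 95 percent lower confidence limit (the BMDL).

```python
import math

# Hypothetical dose-response points (dose in ng/cm^2, fraction responding).
# These are illustrative values only, not data from the Procter study.
doses = [90.0, 250.0, 750.0, 2500.0]
frac_responding = [0.02, 0.08, 0.25, 0.60]

def bmd(bmr, doses, resp):
    """Dose at benchmark response `bmr`, by linear interpolation on log-dose."""
    for (d0, r0), (d1, r1) in zip(zip(doses, resp), zip(doses[1:], resp[1:])):
        if r0 <= bmr <= r1:
            t = (bmr - r0) / (r1 - r0)
            return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
    raise ValueError("benchmark response outside observed range")

bmd10 = bmd(0.10, doses, frac_responding)  # dose at a 10 percent response
```

With these invented points, the 10 percent response falls between the 250 and 750 doses, so the interpolated BMD 10 lands in that interval.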
		Okay, next slide.  The study has many strengths.  Both males and
females were represented in the study.  And they had a large population
of sensitized individuals, which in terms of statistical power makes it
one of the better studies that you see in terms of dermal sensitization
designs.

		The study participants were blinded to the control and test solutions,
and there was a good experimental design to establish a dose-response
relationship for elicitation.  And the study utilized good grading
criteria that were consistent with the NACDG database.  The methodology
was clearly explained, and there was verification of dose delivery.

		And repeated dermal exposure is more realistic for risk assessment. 
So those are the strengths.

		Next slide.  We didn't find any apparent weaknesses with the study.

		Next slide.  We conclude, the agency concludes, that this study, the
Procter human study, contains information sufficient for assessing human
risk from potential repeated human exposure to chromium as it exists in
treated wood as an ACC solution.

		Next slide.  The scientific considerations for the board today are,
one, to recognize that the agency identified this study on chromium, a
human study in subjects who had previously documented sensitivity to
chromium, and one that utilized a ROAT protocol as recommended by the
science advisory panel.

		The study was conducted to identify a level of chromium exposure below
which dermal exposure would not appear to elicit an ACD response, using
the wood treatment solution ACC.

		Please comment on whether the Procter study is sufficiently sound from
a scientific perspective to be used for quantitative assessment of
dermal exposure to ACC treated wood.

		Thank you.

		DR. FISHER: Thank you.  Now before we hear about the ethics, I think
we'll have some questions about the science.

		I have a question, an incredibly naive question, but I'm trying to
figure out what a dermal patch has to do with touching wood.

		DR. LICCIONE: Okay, the patch does not have anything to do with
contact with the wood.  The patch test here was utilized to confirm
previous sensitivity. 

		In the protocol, the ROAT protocol itself is not an occluded test.  It
is a transparent tape that allows you to deliver carefully to the same
area.  But it is open, so it was a typical exposure.

		DR. FISHER: Right, I understand that, but I guess the perspective of
one of our criteria is the value of the study, and the proposed value of
the study is that members of the public putting their hands on treated
wood may in fact have some contact with it.

		So I'm just wondering, it would seem to me that the compound itself in
terms of how it might get to the skin, the mix with the wood, would have
some  implications.  So I'm just trying to figure out, what is the great
benefit of the data produced here to answer the question about whether
or not the public is safe.

		DR. LICCIONE: Right.  Using the repeated open application, which is
not a patch test, allows you to deliver the substance on contact.  So if
you are sitting on a deck and you put your arm down, you can get
exposed. 

		This is the best way to do this, to repeat an open application, where
you allow delivery to an open system, and allows you to look at
different doses at the same time.

		DR. FISHER: Once again my question is, whether or not in the real
application of it, the fact that it's in the wood itself, that's my
question, and how this is generalized to coming through the wood.

		DR. LICCIONE: Okay.  The chromium, as they explained it in the
background document - chromium six - there is a fixation process that
generally takes place in about two weeks.  So anyone coming into contact
with it before it finally fixes could actually be exposed to chromium in
solution.

		The doses that were selected - one was the upper bound, 2,500, based
on some exposure information that they discuss.  So that would be the
worst case scenario.  So say there is wood in the factory, and it's
shipped out before it has any time to begin to fixate; that would be an
upper bound.  The lower bound, the 90, was the Nethercutt value, because
they wanted to try the minimal one, and Nethercutt had addressed issues
of what happens if you are exposed, even with the patch, to some MET
level over great areas.

		So in that context the 90 was a good, reasonable cutoff.  So in terms
of exposure it's realistic in the sense that if a factory sent this out
before it actually got fixed, we could get exposure.  And the good thing
is that this allows you to do a more realistic kind of exposure, where
it's open, where it dries normally, and also realistic washing.

		DR. FISHER: Thank you.  Other questions?  Yes, Dr. Chadwick, and then
Dr.  Fitzpatrick.

		DR. CHADWICK:   On slide 22, the last report, you talk about
normalization because of a disproportionate number of high responders. 
And the normalization as I understand it then results in a higher --

		DR. LICCIONE: Right.

		DR. CHADWICK:  - than you would get without normalization.  So my
question is twofold.  One, why did they have such high responders?  And
why did they feel it important to normalize as opposed to just use the
raw data?

		DR. LICCIONE: Okay, good questions.  Those are questions that we
ourselves considered.

		DR. CHADWICK: I appreciate the judgment.

		DR. LICCIONE: Yes.  One of the questions on the normalization
procedure was, okay, what could account for what happened in this study?
 Why was there a disproportionately large number of the plus two or plus
three responders?

		What may have happened is in how they selected the population.  A
large number of them were the plus ones in the original population, the
100.  If you take a look at the 88 - I think the percentage is down here
so that I can explain it a little better - about 70 percent of them
turned out to be negative.  And those were actually weak sensitizers to
begin with.

		The reason maybe that - because it's an immunological process, one,
over time these weak sensitizers lost their sensitivity.  So that's one
factor.

		The other factor is that these weak sensitizers were identified by
patch tests to begin with.  There are some false positives that can
happen, so I think for some of these weak sensitizers, their response
can be interpreted as an HCP (phonetic).

		So I think that could happen.  That's what they do discuss about some
of the factors.  But the factor that appears to trump for them is loss
of sensitivity, given that those - a number of them were already weak
sensitizers.

		And it's also concordant.  The patch tests tend to react more strongly
than the ROAT.  You see the one individual, participant number 70, the
only one actually showing the severe - the most serious response to the
ROAT, the plus three, with only one individual.  Most of the other
individuals were ones and twos, more twos - there was only one weak
sensitizer that actually reacted with a ROAT score of one plus - not a
patch, I'm now talking about the ROAT.  And that was only to the highest
dose.  So there is some dose-response relationship.

		And we'd like comments on the normalization.  That's the rationale
that appears, that I think is - immune responses can change with time,
in that most of your initial population that was identified as negative
had actually lost some sensitivity.

		DR. FISHER: Dr. Chadwick, do you want to continue?

		DR. CHADWICK: So the normalization was done after the data was
collected?

		DR. LICCIONE: That's right.

		DR. CHADWICK: Part of the original study, they collected some higher
numbers and higher reactions than -

		DR. LICCIONE: Exactly.  They amended the protocol.  What they noticed
was, well, how do you interpret this study when you have so many more? 
And they went back to the NACDG database and they saw that the
proportion responding at the plus three level was significantly
different.  And just to show you the numbers here, somewhere here. 

		Yeah, in the NACDG database for a patch test grade of plus three, we
are talking about seven point seven percent, versus the 27 percent seen
in this study.  So the attempt was to normalize it,
basically what they used was a simple proportionate analysis.  You have
an extreme random population in here, and that in itself is part of - a
very extensive population to begin with.  Then we get some information
about distribution.

		I have never seen normalization done with dermal sensitization.  I've
seen it with other biological end points.  I'm sure you've all seen
this.  And Nethercutt didn't do it because I think no one actually went
back and obtained the data to do that.  Somewhere in there will be
what's considered the upper bound, but there is an argument for the
normalization: we are dealing with a very small population, relative to
the heterogeneity within the database.

		And I think - I gave it some thought.  One of the issues that I was
concerned with was how representative the exposures of the people that
we had were.  Because if you look at it, there was variance.  There's
exposure to cement.  There's exposure to leather.  There's exposure to
cosmetics.  There are some unknowns.  So for the real chromium sensitive
population, you'd think they'd look at a cross-section. 

		So I think that was what they were attempting to do.  They tried to
make it more meaningful to the overall general population of chromium
sensitized.

		DR. FISHER: Okay, Dr. Fitzpatrick, then Lois and then Camden and then
--

		(CD Change)

		- skin type, that is always a variable, in fact.  But we don't see any
- once again, another COT (phonetic) actually looked at some of those
issues and did not report any major differences.

		I mean, great lengths were taken to preclude anybody with any damaged
skin or anything like that.

		DR. FISHER: Suzanne, please.

		DR. FITZPATRICK: Did he determine what the most sensitive skin type
was?

		DR. LICCIONE: No.  What they did was, they took the known chromium
sensitive population, and they actually then just took what population
existed and precluded anyone that might have had anything that would
have affected it.

		But yes, everyone within this room would - somehow they're going to
absorb an allergen, and the biggest difference is your immunological
response to that is going to vary a lot.

		DR. FISHER: Lois.

		DR. LEHMAN-McKEEMAN: I have two somewhat related questions.

		You note early in your presentation that irritation is not an
immunological reaction.

		DR. LICCIONE: That's correct.

		DR. LEHMAN-McKEEMAN: And yet in making the final evaluation of these
data the irritation is actually included with the allergic reaction in
one of the scenarios.

		So the question I have, is that scientifically legitimate to do?

		DR. LICCIONE: The - oh, I'm sorry, finish your question.

		DR. LEHMAN-McKEEMAN: Well, you can go ahead.

		DR. LICCIONE: Okay.  Usually, as I mentioned right from the beginning,
irritation and ACD are sometimes very difficult to tell apart.  This
requires judgment from an experienced dermatologist, Dr. Joseph Hollerez
(phonetic).  I believe what they did here in this study was to consider
worst case scenarios.  So they went ahead and assumed that all these
irritant responses were allergic.  Obviously they are not.  There are
very clear cases of irritancy responses that were not ACD.  There were
some questionable ones, but what they did - I think they were trying to
present a worst case scenario.

		DR. LEHMAN-McKEEMAN: Well, just in looking at the tables themselves,
many of the subjects who presented with irritation in fact presented on
day one.

		DR. LICCIONE: Right.

		DR. LEHMAN-McKEEMAN: And so that kind of argues against that really
being allergy.  But at the same time I have to wonder - and this is the
second part of my question - if you have this background of what appears
to be a relatively high incidence of irritation, potentially that can
skew the immunological reaction somewhat if you are exposing the skin
perhaps a little differently.

		And the major point of my question is this, if that's the case, is
that potentially related to the product itself that's being tested?  And
if it is, is the evaluation and comparison to this historical database
actually a relevant thing to do?  Because you really have an apple and
an orange here that you are trying to compare.

		DR. LICCIONE: Okay, let me go back.

		Chromium produces both an irritant and an allergic contact dermatitis response.  You typically see this with any allergen.  So when you do these studies you are going to see irritancy.  Sometimes just irritancy alone.  There were five individuals that had only irritancy.

		There's also cases where you are only going to see ACD.  Then there are going to be cases where you get both.  Typically, if you follow some of these, you will get irritancy at a lower dose than you get the ACD.  This is typically what you are going to find.

		You will find that in the NACDG database as well.  And the population
there would have the same kind of typical thing that you would expect.

		The other thing is true as you point out irritation did occur in some
cases day one.  But in other cases it didn't appear until much later;
great variability.

		The other thing to keep in mind is that irritation was in all cases a low grade, plus one thing.  There was no evidence of any serious cumulative irritant response, which I was concerned about, that would affect absorption or interpretation.

		So I think the study is good in that sense.  Because if you do a study with cosmetics, you are going to run into the same problem.  That's why I initially talked about irritation, because it is a problem with these kinds of studies.  You have to be clear on the criteria, but it is not an immunological process.  There was only one plus two response, but that only lasted I think one day.

		So I don't think that the irritancy responses confound the interpretation.

		As I mentioned, I think what they were trying to do here is say, even if you assume all these irritancy responses are ACD - this is the worst case scenario - you come back down to what Nethercott got, the MET 10 percent.  And that's a more unrealistic kind of exposure.

		Not that irritation is not an important input.  It certainly is.   And
you know the agency does look at irritation, and it is part of the
product.

		I don't think that there is an apples and oranges difference with the database.  The individuals in that database are going to respond in the same way.  If you took those individuals they are also going to respond in the same way.

		And just some of these individuals are actually part of that database.
 They have been tested by some of these physicians too.  So you are
going to - and like any other study you can only get so many people who
are going to want to do this type of study.  I just recently got over
poison ivy, and apparently it's just like chromium, never want to do it
again.

		DR. FISHER: Dr. Krishnan.

		DR. KRISHNAN: As I was looking at the protocol I was struck by the fact that the pH of the solution was seven and the potassium dichromate solution was more acidic - or rather five, in the case of the Copper Shield solution.  I can assume why, but I would just appreciate any clarification that you may have.

		DR. LICCIONE: Why the difference?  Well, I think potassium dichromate tends to be more acidic.  But we're not concerned with the potassium dichromate for this particular study, for our purposes of regulating ACC.

		The potassium dichromate was intended for use for a soil cleanup criteria.  And so we didn't really spend much time on potassium dichromate.

		But I think part of the problem with potassium dichromate that was noticed by dermatologists is that it tends to be more acidic.  And then also the solution was causing more irritation and they had to actually go back and drop the dose.  Nethercott actually discusses some of these early pitfalls.

		DR. KRISHNAN: In trying to understand the comparison with the
historical clinical database, I was wondering whether it would be
investigator related, you know, the difference in rating could be
investigator related.

		So my question, in order to be able to understand that, is whether Dr.
 Fowler's data are also part of the database, which I presume, but I'm
just asking.  And number two, whether the plus three grading in that
database also reflects some of the chromium studies.

		DR. LICCIONE: Okay, the first question was the criteria.  As I mentioned earlier, one of the first things we checked was whether the criteria being used by Fowler were the same as in the NACDG database, and the answer is, yes, they are.

		Did they contain some of the individuals in the chromium study?  That offhand I do not know.  It's very likely that that population was part of it, because that is the known population in the database.  But to what extent I don't know.

		What was the one other question you had?  Okay.

		DR. FISHER: Micah.

		DR. LEBOWITZ: Thanks.  I'd like some clarification first of some of
your definitions, when you are differentiating irritation and allergic
contact dermatitis.

		I assume this is not an official NACDG definition or delineation
between the two.  And also of course there are delayed effects of
irritants that aren't immediate, and immediate - there are some
allergens even for contact dermatitis that also have an immediate
hypersensitivity response.

		DR. LICCIONE: Right, the excited - you're talking about the excited
skin syndrome.

		DR. LEBOWITZ: Right, and I'm assuming this differentiation you are
placing here is more for the elucidation of the committee, and not
anything official that we should be terribly concerned about as an
agency definition.

		DR. LICCIONE: Right.  That's correct.

		DR. LEBOWITZ: And in fact, I was curious, we talked earlier about
potential - not likely, but potential differences between HSRB and SAPs,
but there is - the SAP I don't know how seriously it considered the
NACDG criteria or protocols in coming up with its recommendations, but -
and I don't know how much the NACDG people were involved in the SAP, but
it seems to me that there always is a potential for scientific and
medical confusion when you have different groups coming up with
different criteria and protocols.  And I don't know how to resolve that.
 I don't know if the SAP or the agency addressed that question.

		But it's sort of important to me in terms of how these studies are
examined and defined.  Do you know anything about that?

		DR. LICCIONE: I don't know offhand if the SAP got into that - I wasn't at the agency then.  When I read the former SAP report, I didn't see whether they actually considered the NACDG.  But I do want to say that the NACDG is considered a very solid thing among dermatologists, because it's not only chromium that they deal with; they deal with many things.  So they are considered to have good diagnostic procedures; that I can say.  When it comes to diagnostic and clinical and medical matters, Dr. Fell (phonetic) himself conducts more patch tests than anyone in this country.  And I don't know if the SAP has gotten into it.

		But if you look at many of these protocols on irritation, and I know
that the SAP for example brought up some - that was a contentious issue
with the Hansen study, the study that the board also commented on, where
one of the problems was that they used a little bit different criteria.

		But I think that from the dermatological societies out there, that
this database has very good -

		DR. LEBOWITZ: I mean the reason I asked, I've been involved in the
NACDG stuff over the years, and so I was sort of peripherally - it's not
my main area - but I'm always - and I'm involved in a lot of
standardization.

		So it's possible or conceivable that the SAP was influenced very much
by the case study, in this case, the hexavalent chromium, in making its
decisions as to what to recommend.  And that may or may not be the same
if it looked at other causes of allergic contact dermatitis.

		I had some specific questions about the study as well.  It concerned
me that those who were initially considered controls, who happened to
react to the patch test, were then thrown in with the -

		DR. LICCIONE: Those two individuals.  I wondered what happened there too.  I'm not sure how that was - they must have developed sensitivity to chromium somehow, and a likely source could be cosmetics.

		DR. LEBOWITZ: But not to be excluded seemed to be a concern.  I don't
know if it would change the data.

		DR. LICCIONE: Oh, you mean the individuals that were -

		DR. LEBOWITZ: Yeah, in the original patch testing.

		DR. LICCIONE: The ones that were negative and excluded?

		DR. LEBOWITZ: The controls that were positive -

		DR. LICCIONE: Oh, right, the two that were positive -

		DR. LEBOWITZ:  - were changed.

		DR. LICCIONE:  - were actually included in the -

		DR. LEBOWITZ: Yeah, I don't think they should have been.  But that's
another point.

		Do you happen to know, I didn't read the full study or have time to
ask the individual reviewers, but the time between the patch testing for
inclusion and the ROAT exposure study, was it at least two weeks in
between that?

		DR. LICCIONE: Yes.  I think they were defining day one, the first
three days were dedicated to the patch, and then the 48 hours after the
patch, and then they started.  That's why when you see the report, it's
a little confusing.  Sometimes they say visit 11 or day 15.  

		DR. LEBOWITZ: That concerned me a bit too in terms of immune response,
because there is a tendency for an inclusion method to actually elicit a
 longer term response that might have affected the results of the ROAT
testing.

		DR. LICCIONE: Well, I do know that others generally do that.  Nethercott did that.  You check the sensitivity to see -

		DR. LEBOWITZ: Oh, yes, but then you usually wait long enough for the
immune response to calm down before you start testing with multiple
doses.

		And that's the reason I asked.

		The issue of - which you didn't bring up in your slide but it's very
important - is there was a two centimeter interval between each of the
five different doses on each of the arms.

		And I think that's fairly standard.  However, when you have very large
doses, you know, close to doses that are not large, then you may have a
bleed out effect, which I don't know if that's anything.

		DR. LICCIONE: Okay, they were careful to avoid what you had said about the hyperexcitability, the excited skin syndrome.

		DR. LEBOWITZ: Yeah, when you have two large doses together.  But one
large dose can affect -

		DR. LICCIONE: Well, there is no evidence of that.  If you look at the grading up and down, they actually didn't find evidence that that happened.

		Also, Dr. Mydvack (phonetic), who is -

		DR. LEBOWITZ: Yeah, I worked with him.

		DR. LICCIONE:  - okay, who has done mechanism work - assisted in the design of this protocol.

		DR. LEBOWITZ: Yeah, I didn't see that; thanks.

		And finally, two questions, three.  Model versus actual, we've talked about.  Normalization - I don't know if the NACDG distribution is Gaussian, such that they call it normalization.  It may also be very skewed.

		And I think what you - what the term here is really that they have
tried to make the population response in this study similar to what the
response may be in a general U.S. population.

		DR. LICCIONE: Correct, that is correct.  That was the attempt.

		DR. LEBOWITZ: Okay, so that helps me understand it.

		Now, the MET 10 percent is a good point of departure.  But they really don't determine a NOAEL here, do they?

		DR. LICCIONE: That's correct.  It is not a NOAEL.  Nethercott explains what the MET 10 percent means.  You are looking at a 10 percent response level, but given that the prevalence of chromium sensitization in the population is so low - less than one percent; there are different values that go around, but it's very low - dermatologists who work with chromium consider that to be protective of 99.9 percent of the general population with respect to chromium sensitization.
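		The arithmetic behind that 99.9 percent figure can be sketched as follows.  This is a minimal illustration, not a calculation from the study: the prevalence bound and the 10 percent response fraction are simply the assumptions stated in the discussion above.

```python
# Sketch of the coverage arithmetic behind an MET 10 percent.
# Assumptions (from the discussion, not measured values):
#   - chromium sensitization prevalence is below 1% of the population
#   - at the MET10 dose, at most 10% of sensitized people respond
prevalence = 0.01              # assumed upper bound on sensitized fraction
responding_sensitized = 0.10   # MET10: 10 percent response level

# Fraction of the general population that could still respond:
unprotected = prevalence * responding_sensitized   # 0.001

protected = 1.0 - unprotected
print(f"protected fraction of the general population: {protected:.3f}")
# prints: protected fraction of the general population: 0.999
```

		So the quoted 99.9 percent is the product of the two bounds, not an independently measured quantity.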

		DR. LEBOWITZ: And finally, thank you both for your presentation, and
your clear response to the questions.

		DR. LICCIONE: Great, thanks for the questions.  That helps us very
much. 

		DR. FISHER: John, I had a question, then Rich, and then Rich.

		Okay, my question has to do with - remember, quality assurance is one of our criteria - and goes back to the last conversation brought up by our members, in terms of when it's no longer irritation and it becomes the allergic reaction.

		Was Dr. Fowler the only person who was evaluating this transition?

		DR. LICCIONE: He was the diagnosing physician.

		DR. FISHER: So he's the investigator, the diagnosing physician.  Was
there any independent evaluation of when the difference between
irritation and allergy took place?  I mean is he the only person
evaluating everything?

		DR. LICCIONE: Yes.

		DR. FISHER: So he's running the study, hypothesized what the results
were going to be, what's involved in selecting the doses, and is also
the only person who is measuring the responses and making distinctions
between irritation and allergen and also deciding whether or not
somebody should withdraw from the study; all him?

		DR. LICCIONE: That's my understanding.  He is the principal investigator and the diagnosing physician.

		DR. SHARP: Thank you, Dr. Liccione.  A few questions.  

		I have the dubious distinction of being the primary responder here.  

		Just to follow up on what Dr. Fisher said, because that kind of
surprised me.  You are referring to Dr. Fowler as the principal
investigator.  I thought he was the clinical actor in this, but I
thought this study was run by Exponent (phonetic).

		DR. LICCIONE: The study - yeah, was managed by and run by Exponent,
but they also defined him as a principal investigator.  It's just
clearly spelled out as principal investigator.

		DR. SHARP: That's not the title.

		DR. LICCIONE: It doesn't say it on the front page, but if you look at
the back pages there are statements about the role.

		VOICE: 119.

		DR. SHARP: I assume that Exponent wrote the protocol then.

		DR. LICCIONE: Yes, that's what I assume too.  But they actually conducted the study.  They didn't do the recruiting - Fowler did do the recruiting.

		DR. SHARP: No, I guess my point is, that as a research study, it's my
impression that Dr. Fowler is an experienced clinician, but he's not -
well, he may be a well published researcher as well, I don't know.  Is
he?

		DR. LICCIONE: Yes.  Very well -

		DR. SHARP: So you consider that he designed this study, not Exponent;
they just assisted him?

		DR. LICCIONE: No, I think Exponent may have done the design of the
study from what my understanding of Exponent's role.  But he did have
some capacity as a principal investigator, which means he had some input
into the design of the study.

		DR. SHARP: Okay, thank you.

		You made a statement a little while ago, which I just wanted to double
check on.  You stated that the control group was not treated with Copper
Shield?

		DR. LICCIONE: No, they were.  It was with a copper solution.  Because the ACC also contains copper, so they weren't exposed to chromium -

		DR. SHARP: Pardon me?

		DR. LICCIONE: They were exposed to copper chloride.

		DR. SHARP: No, Copper Shield is the formulation that includes chromium
and copper.

		DR. LICCIONE: Right.

		DR. SHARP: They were exposed to it; correct?

		DR. LICCIONE: Not to Copper Shield.

		DR. SHARP: Well, you might want to double check that, because they are
listed in the table of results as having been treated with all the doses
of Copper Shield.

		DR. LICCIONE: Right, right.  There was also I think some there, like a
vehicle control, copper -

		DR. SHARP: Well, there was a zero.

		DR. LICCIONE: Zero, that's right.

		DR. SHARP: Everyone got the zero, but the controls got all the doses.

		DR. LICCIONE: Right.  I'm going to say it as the vehicle controls.

		DR. SHARP: Yes, I agree.  There was - I realize what you are saying
now.  Okay.

		The doses, or the loads, as they correctly state in the study, of chromium six during the ROAT studies are very well defined and nicely documented.  Do you know what the chromium load was on the skin during the patch test?

		DR. LICCIONE: All he said was, in the confirmatory patch test - 

		DR. SHARP: The initial patch test, yeah.

		DR. LICCIONE: The confirmatory patch test - all they said was, they didn't give much detail, they just said point two five percent potassium dichromate was actually used.  And we're assuming, because they used Finn chambers, at most 10 to 20 microliters, but they didn't come out and say that.  But I don't think that's that important an issue.  That part was just to verify sensitivity.

		DR. SHARP: Well, it's just a question, I'm trying to understand this. 
Do we know what the load on the skin was during the patch testing?  We
don't, is that the case?

		DR. LICCIONE: Not exactly.  They didn't say the exact volume.  They
just said point two five percent.

		DR. PHILPOTT: If I may interrupt, Richard, this is Sean Philpott.  We know from the Nethercott study, and reviewing the literature for that, that it's considerably higher - it's about twofold higher than the highest dose that they are using.  It's 4.4 migs per square centimeter.

		DR. LICCIONE: Not migs, micrograms.  But note that Nethercott used a little bit of a different thing.  He used the TRUE Test patches with the gel.  In this study they used the Finn chambers, so roughly maybe 10 microliter, 20 microliter capacity; they just didn't spell that out, but the purpose of that was just to confirm sensitivity.

		The most concern that we had with loading of course comes in the main
study, the ROAT study.

		DR. SHARP: But I'm asking about the patch test.  And so what I'm
gathering from Dr. Philpott's comment is that the patch test load was
higher than any of the ROATs.

		DR. LICCIONE: But not only that, it's occluded, so it's kept on.  So all those other factors - 

		DR. SHARP: Okay.  And then in reference to the NACDG database, do you
know if the same load was used for all the patch testing in that
database?

		DR. LICCIONE: I don't know.  They didn't give details.

		DR. SHARP: Did you explore that at all?

		DR. LICCIONE: No.

		DR. SHARP: Do you have the database?

		DR. LICCIONE: No.  I think it - I'm not sure how accessible it is.  We
haven't looked into that.  We wanted to see comments from the board on
the normalization.

		DR. SHARP: We don't know how many cases are in the database?

		DR. LICCIONE: Oh, yes we do, we know there is a total of - for those
years there were 495 individuals.

		Now, how exactly they patched - all we do know, one of the important things that was clear, is that they used the same criteria.

		But we don't suspect that it would make that big of a difference, because once again, as Nethercott showed, it's mass per unit area.

		DR. SHARP: Well, that's what I was asking about, did they use the same
load?  Did they use the same mass per unit area consistently across all
the NACDG members?  I know they all used the same chamber, and they used
a certain SOP, but I wasn't sure about that load which was my question.

		DR. LICCIONE: Right.  I don't know, and in fact, I'm not even sure if
it changed over time, when they changed procedure.

		DR. SHARP: You don't know that one?  Okay.

		The - I have just a couple of more questions, honest - Dr. Fitzpatrick
raised the issue of the gender discrepancy between the treatment or
sensitized and control groups.  I guess all of us are kind of surprised
that when you have the ability to select a control group, and you have a
mixture of males and females, I don't know of any investigator who would
then purposely choose all females, all of one gender, for a study of
this kind.

		Do you have any explanation for that?

		DR. LICCIONE: They don't offer any explanation.  I wonder too why the total population ultimately ended up with just 10 controls that were female.  I guess sometimes in these studies recruitment is hard.  But I don't know why they ended up that way.  At least Nethercott has looked at sex differences in dermal sensitization with chromium and doesn't find a gender difference.

		DR. SHARP: Well, I don't know, this is just an observation that I made
which I haven't quite digested yet, but that's what we're here for to
try to figure this out, the one plus responders in this study were 30
percent male and 70 percent female.  So a lot of the females were - 23
out of the 33 females were one plus, and 13 of them were two and three,
where there were more males.  I don't know quite what to make of that.

		DR. LICCIONE: That's because of the population that's available rather
than how much was actually -

		DR. SHARP: No, I'm just saying proportionally there were more females
in the lower - this is for the patch testing with the potassium
dichromate prior to the ROAT study, if you look at the percent or the
proportion of the one plus response, it's 30 percent male and 70 percent
female.  If you look at the proportion of those who responded at either
two plus or three plus, it's essentially 50-50, 50 percent male - 52
percent male, 48 percent female.
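		A quick arithmetic check of the proportions quoted above.  The counts below are hypothetical, chosen only because they reproduce the quoted percentages (roughly 30/70 at one plus, 52/48 at two plus and above); the study's actual tallies are in its tables.

```python
# Check of the gender proportions quoted in the discussion.
# The counts are hypothetical reconstructions, not the study's tallies.

def proportions(counts):
    """Each group's share of the total, in percent."""
    total = sum(counts.values())
    return {group: 100.0 * n / total for group, n in counts.items()}

one_plus = {"male": 10, "female": 23}          # hypothetical counts
two_plus_and_up = {"male": 14, "female": 13}   # hypothetical counts

print(proportions(one_plus))         # roughly 30 percent male, 70 percent female
print(proportions(two_plus_and_up))  # roughly 52 percent male, 48 percent female
```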

		And so - and then you have a control group that is all female.  So I'm
trying to figure out from a study design point of view what the
implications of that might be.

		Did your group discuss that at all?

		DR. LICCIONE: Well, we looked at Nethercott and others that address gender issues, and we don't think that there is a gender issue.

		DR. SHARP: Well, there appears to be from this data.  That's what I'm
pointing out.  Maybe we will hear more from that later.

		DR. CHADWICK: Can I follow up on that question?  This is Gary
Chadwick.  Do we know if the controls were all part of the dermatology
associates lab?

		DR. LICCIONE: Yes, they were.

		DR. CHADWICK: So all the controls were employees of Dr. Fowler?

		DR. LICCIONE: Yes.

		DR. FISHER: Employees or patients?

		DR. LICCIONE: They were 10 employees; one was a former employee, and
another was a doctor.  But most of them were employees, though.

		DR. FISHER: Dr. Sharp, were you finished, or do you have another
question?

		DR. SHARP: I do have one more question.

		The subjects or the participants in the study were blinded to the dosing levels at the particular skin areas, which is important in terms of something like, say, itching; but the evaluator - most of the evaluation was done by visual observation by Dr. Fowler - he was not blinded to -

		DR. LICCIONE: That's correct.

		DR. SHARP:  - he knew who all the controls were, of course, since they
worked for him.

		DR. LICCIONE: That's true.

		DR. SHARP: And he knew who all the patients were, because they're his
patients.  And he also knew where the high dose was, where the medium
dose was, where the zero dose was.

		I mean this is perhaps just the nature of the study, but did you
discuss that in your group?

		DR. LICCIONE: Yes, we wondered too.  I looked everywhere to see whether it in fact was a double-blinded study.  But we couldn't find any evidence that it was double-blinded.  So we assume it wasn't.

		But when you typically see studies done like this, you oftentimes
don't find studies that are double blinded in these dermatology type
studies.

		DR. SHARP: Okay.

		DR. LICCIONE: It would have been a great strength if it was included
as a double blind.  It would have been nice.

		DR. SHARP: And the one other thing that I noticed, and I wanted to get
whether you all discussed this or not, is if you look at the irritant
response data, you find that for Copper Shield, 13 out of the 60
sensitized individuals had an irritant - were classified as having an
irritant rather than an allergic response; but zero out of the 10
controls.

		DR. LICCIONE: Okay, I just wanted to clarify, the 13 out of the - that
doesn't mean they didn't have ACD at higher doses.

		DR. SHARP: They may have had -

		DR. LICCIONE: Only five individuals had truly only irritancy.

		DR. SHARP: So there were 13 people who had - who were discussed in the
section of the report that discusses the irritant response.

		DR. LICCIONE: That's correct.

		DR. SHARP: They go through and discuss each person.  Now I'm not
saying they didn't.  Some of them, yes, may have had allergic response
as well.  But I'm just saying that there were 13 of the participants out
of the sensitized group where it was determined that what was being seen
on the skin was irritation rather than allergic response.

		But out of the control group there were no irritant events whatsoever.
 Does that strike you as unusual?

		DR. LICCIONE: I don't have an obvious explanation for why the controls didn't show the irritancy response.

		DR. SHARP: Because if it really is just an irritancy response, are sensitized groups more susceptible to irritant responses?

		DR. LICCIONE: Apparently in this case that may have been the result.  When you see other studies you tend to see irritancy in the controls as well.

		DR. SHARP: Okay, thanks.

		DR. FISHER: Were you following up, Sean?

		DR. PHILPOTT: Yeah, I was going to follow up.  There was a question I was going to ask, which is that if you ignore itching in the irritancy tables - which is obviously very, very subjective - and you look solely at the other physical signs, what I find particularly interesting is that many people who had ACD also exhibited irritancy at the next lower dose, and often very late in the study, suggesting, as Dr. Lehman-McKeeman has pointed out, that it may be irritancy, it may not.

		And I think it was Suzanne who pointed out that irritancy can lead to immunologic changes that could drive you one way or the other.

		DR. LICCIONE: And I think that's why they went ahead and modeled that response including both, with that assumption.  Because when you followed it out, it was interesting; whenever an individual had both ACD and irritancy, it was always the irritancy coming at a lower dose.  And a lot of them, when you tracked it, it becomes difficult, especially at the plus one grade, to say is this truly irritancy or is it truly ACD.  I actually looked at some of the pictures, and it's sometimes difficult to tell.  And it's a problem that all dermatologists face when they do these kinds of studies.

		DR. FISHER: Dr. Sharp, and then Dr. Philpott.

		DR. SHARP: Just a few shorter clarificational questions.  I wanted to
come back to this issue of whether the controls were actually exposed to
chromium.  You really need to know that answer, actually, because that
is a very basic issue.

		DR. LICCIONE: Yes, everyone was exposed to the Copper Shield, but the
vehicle control was like a copper vehicle control.

		DR. SHARP: My reading of protocol was similar to Richard's as well.

		DR. LICCIONE: I was confused as to what he was referring to as controls.  Because he mentioned copper, and I thought he was talking about the vehicle.

		DR. SHARP: At the amount, and given the extent of the duration of
their exposure, would that level of exposure to chromium be associated
with a potential to become sensitized to chromium?

		DR. LICCIONE: Oh, whether - okay.  As I mentioned earlier, the dose usually - well, at the higher doses there is always that possibility.  The dose for sensitization is considered to be much higher than for elicitation.  But here there were some high end exposures that you are referring to, the 2,500.  Your question is, could an individual conceivably - like for example the controls.  That is a good question.  I can't answer that offhand.

		DR. SHARP: A related question to that.  Are there other examples of exposures, something like beryllium, where patch testing itself has been associated with sensitization responses?  That would indicate, with regard to the patch test that was done here, that that exposure to chromium could actually have caused sensitization.

		DR. LICCIONE: No, these were already known to be chromium sensitized.

		DR. SHARP: Again, I'm thinking specifically of the controls.

		DR. LICCIONE: Oh, of the controls.

		DR. SHARP: Could the patch testing itself at that level have actually
been a potential cause of sensitization in the controls?

		DR. LICCIONE: Nethercott mentions - refers to that.  And other dermatologists have pointed out that possible risk.  And that presumably is theoretically possible at a high dose.  And at the point two five, whether that happened or not, they just didn't follow up.

		DR. SHARP: So and then related to this, not knowing much -

		DR. FISHER: Could you bring that mike a little closer?

		DR. SHARP: Sorry.  Also fighting something coming from Houston to D.C.
 

		DR. FISHER: Oh, sorry.

		DR. SHARP: Related to this last question on this front is that not
knowing anything about this population, for people who are chromium
sensitized, are they at increased risk of developing any other T-cell
mediated diseases or anything of that sort?

		DR. LICCIONE: Okay, not necessarily.  If you are sensitized to a certain agent, does that predispose you to be sensitized to another agent, you are saying?

		DR. SHARP: I'm wondering in general if people who are chromium
sensitized, are they at increased risk of developing any other type of
immunological -

		DR. LICCIONE: Not that the literature has borne out.  Like the poison ivy incident that I had doesn't mean I'm going to be predisposed.  It means that I reacted to that specific allergen in a certain way, and my immune system is going to respond particularly to that.

		DR. FISHER: Dr. Philpott.

		DR. PHILPOTT: I'm hopefully the last question, I guess.  And actually
it's more just returning to what Dr. Lebowitz raised, which is a
question that was of concern to me, which is about the duration of time
between the patch test and the conduct of the ROAT studies.

		It was my reading of the protocol actually that there was at least apparently a two-week period, based on the time frame they were proposing - their tentative schedule had patch testing in August, with the first round of ROAT the following month, and then two subsequent rounds in later months - which would suggest that there was sufficient time between patch testing and ROAT to allow the immune response to decay.  The difference between the visit and the day numbers relates to the fact that dermatologists work Monday through Friday.  So one was Monday -

		DR. LICCIONE: Yeah, that was confusing.

		DR. PHILPOTT: But that would be day eight.

		DR. LICCIONE: That was confusing.  Well, that's good to know.

		DR. FISHER: Kannan.

		DR. KRISHNAN: Just wanted to see whether any effort was made to pool the new data with the historical data before doing a normalization.  I didn't see any effort of that.

		DR. LICCIONE: The effort on the part of the study authors?

		DR. KRISHNAN: The normalization, as they do it, has been done based just on - for example - the three plus grade occurrence in the historical data only.  But was there any effort to pool the newer data with that historical data?

		DR. LICCIONE: Older data?

		DR. KRISHNAN: And then attempt normalization.

		DR. LICCIONE: Well, the normalization, they clearly state, was for that time period, 1998 to 2002.  More recently, over time, some of those individuals would lose sensitivity or change; others would be included.

		So that was the most recent historical control data that is available.  Because historical controls can change over time.

		DR. KRISHNAN: The one from the present study wasn't pooled before
attempting a normalization.  That could change your proportion of
people, responders and so on.

		DR. LICCIONE: No, they did pool it, they were just trying to do a
proportionate analysis saying, okay.

		DR. KRISHNAN: By keeping them separate?

		DR. LICCIONE: Okay, I didn't understand what you were asking.

		DR. KRISHNAN: That's what I wanted to make sure.

		DR. LICCIONE: Okay, yes, that is correct.

		DR. KRISHNAN: In terms of blinding, it seems to me that it was someone
other than Dr. Fowler who applied the solutions essentially.

		DR. LICCIONE: Right, the conduct of the study.

		DR. KRISHNAN: But it was the same location in all the patients, so
that wasn't randomized or anything.  

		DR. LICCIONE: That was carefully done with markings to minimize any of
that kind of variability.

		DR. FISHER: Yes, I had a comment and a question.  One question is,
when was the last subject run in this study?  Because I know there is
some confusion about IRB approving something in August, 2006.

		When was the last subject run?

		DR. LICCIONE: I know - I'm not sure what you mean by run.  There were
three rounds.

		DR. FISHER: I guess my question is, was it after April, 2006?

		VOICE: I'll speak about that when I talk about the applicable
standard.  But when the last subject was run is not relevant to the
applicability of the standards.

		DR. FISHER: Okay, I see.

		VOICE: It's when the first subject was enrolled that we use as the
definition of initiation.  So that determines what the standards are.

		But I'll go over that in a little more detail in my presentation.

		DR. FISHER: Thank you.

		My only comment is thank you so much for all this effort.

		DR. LICCIONE: Oh, thank you.  These are very good questions you raise
that help us.

		DR. FISHER: But I do have a recommendation for next meeting.  And that
recommendation would be that in your presentation you cited no
weaknesses.  But you obviously saw weaknesses with respect to a single
gender control; the fact that the study wasn't blinded.

		DR. LICCIONE: We just didn't know how significant they were.

		DR. FISHER: No, no - 

		(CD Change)

		MR. CARLEY: - the longest citation that we have used in any - on any
of the studies that we brought to you.  And doing it once was long;
doing it three times would have made it utterly illegible.

		So the second and third submission are just cited by MRID on this
slide.

		The summary here is about what we have to deal with; it's not really
about the study itself, which John Liccione just summarized very well. 
The original submission, 46884001, is the primary report of the
scientific results of the ROAT study for chrome-6 as Copper Shield, as
ACC, as the wood preservative.

		This one came in in mid-July just a couple of weeks after your last
meeting, and we did a check on it for completeness, and discovered that
most of the materials required by the rule of Section 1303 were not
present, so we notified the registrants and they submitted the second
volume, 46922901, and then later in the summer questions arose about the
fact that they had described a study in which there was concurrent
application of two materials to the two arms of the same subjects, and
we had only half the results, so we went back and we asked them for the
other results, to which they appended a duplicate copy of the material
about the ethical conduct, identical in all respects except a couple of
page numbers.  So that's what we had to start with.

		And this is interesting.  This is a transitional artifact having to do
with the effective date of the rule.  The point that Dr. Fisher was
alluding to with her question earlier.

		This study was initiated before the effective date of the rule, last
April.  In fact it was initiated in mid-2005 when they enrolled the
first subject.  Therefore it was not subject to the requirement for
prior review of the protocol under 261125.  But the submission occurred
after the effective date of the rule, and the submission therefore was
subject to the requirement to document ethical conduct that's contained
in 261303.

		In terms of the scope of those requirements they're almost identical. 
1303 requires one additional thing which is the actual consent materials
that were used but without the names of any of the subjects.  But the
scope that it would have come in with the protocol is the scope that was
required at the time of the court submission by 1303.

		This is one of rather few studies that you are ever going to see that
look like this, that were initiated shortly before the rule; submitted
after; and therefore have this big slug of documentation ex post.

		But we have to consider this as a completed study, as an old study if
you will in terms of its ethical acceptance criteria.  It's subject to
the criteria at 261704 for prerule studies reviewed as completed
studies.

		I mentioned that we did a completeness check when we got the first
submission.  It didn't address most of the requirements of 261303.  We
reported the deficiencies to the registrant in mid-August, and they then
resubmitted the supplement in late August, and the additional material
concerning the results of the sodium dichromate study in early December.

		Because this is a completed study, and because it is one subject to
the 1704 criteria for acceptance, I have used the same review framework
that I used for the other completed study subject to the same criteria
that we've discussed at earlier meetings. 

		So this is the summary presentation of the manual framework with the
familiar seven topics beginning with value to society.

		This study is designed to determine the allergic response or the
threshold of allergic response of chrome-sensitized subjects after
repeated open applications of Copper Shield and potassium dichromate.

		Let me just mention that in my view the ethics of this needs to
consider both the Copper Shield and the potassium dichromate as part of
a single study.  I didn't make the distinction that John Liccione did
between the ACC data and the potassium dichromate data.

		The intent is to support more accurate assessment of exposure and risk
from chrome six when that exposure results from the use of ACC as a wood
preservative.  And in the case of the potassium dichromate from
contaminated soil, it was funded by Porous Products Research Laboratory,
which is the registrant of Copper Shield.

		I defer to John's excellent review on the questions of scientific
validity.

		Subject selection is very interesting.  All of the subjects had some
prior relationship with Dr. Fowler.  Most of them were his former
patients, and that raises some concerns; not necessarily disqualifying
concerns, but it means that special care needs to be taken to ensure
that the subjects understand that they are not in a doctor-patient
relationship; they're in an investigator-subject relationship.

		And so generally speaking that was handled pretty well.  Although all
of the subjects were former patients of Dr. Fowler, and the pool of
candidates was identified from an analysis of his patient records, all
of the recruiting was handled not by employees of Dr. Fowler's office,
but by employees of the contractor Exponent who did the work.

		And they clarified the changing role of Dr. Fowler with respect to
each subject, and I thought that was handled reasonably well.

		DR. FISHER: John, can I just break in for a minute and ask with
respect to HIPAA laws, how did he get the names of who his patients were
to the recruiters?

		MR. CARLEY: I don't think that was reported.  If it was, I don't
recall seeing it.

		DR. FISHER: It says that initially a research assistant contacted
these people to see if they wanted to participate.  Would that have been
someone in his office then?

		MR. CARLEY: The research assistant who contacted people was an
employee of Exponent.  Not an employee of Dr. Fowler.

		Okay, going back up, not done yet.  All of the control subjects, as we
were discussing a few minutes ago - the untreated controls - were
employees or near employees.  As Dr. Sharp pointed out, one was a former
employee, and another was a relative.

		And at the direction of the IRB all of those people signed a special
noncoercion statement, in addition to the informed consent that all of
the other subjects signed, that crossed their heart and said they
weren't being pressured to do this in any way.

		There was also a third component referred to in the protocol of April
2005.  That protocol was submitted to EPA for its review informally as
well as being submitted to the IRB.  And both in our comments on the
protocol and in the IRB's first round of comments on the protocol we
noticed that there was reference to a preliminary methods testing phase
using employees of Exponent, and there was no mention in that discussion
of any IRB oversight or informed consent.

		The - to the best of my knowledge there has been no response to EPA's
comments on this subject.  The response to the IRB was, oh don't worry
about that; that's not part of the protocol that we're asking you to
review.  And the matter was dropped.

		Next subject is the risk-benefit thing which I've subdivided.  Several
specific steps, good steps, were taken to minimize the risk.  The total
dose for chrome six was kept to a level that was well under the RFD. 
Only small areas of skin were treated.  Treatment was stopped at any
dose level that caused a response; but interestingly, only at that dose
level, and not necessarily at higher dose levels, something which I
don't fully understand.

		There was medical oversight throughout the process.

		The benefits that were characterized in the study: first, they
accurately stated that there were no direct benefits to the subjects. 
They anticipated as one of the benefits an improved risk assessment. 
And they asserted that the other benefit from the ACC aspect of the
study was potential wider use of ACC wood preservatives, and that this
was a societal benefit because the environmental consequences of the use
of ACC wood preservatives were more benign than the consequences of the
use of alternative wood preservatives.

		The ratio of risks to benefits was addressed in the report in this
quotation.  Given that the health risks to and response burden on the
study participants is minimized by the design of the studies, and that
volunteers are paid for their participation, the anticipated benefits to
society will outweigh the risk to study participants.

		Which brings up the question of compensation.  The - someone who
participated in all sessions and showed up for all of them would have
been paid $1,215, and I don't know exactly what to compare that to. 
This was a nonresident study.  They went in; they got their daily dose;
and then half a day later they washed up and so forth.

		But my sense is that that compensation may have been high enough to
influence subjects' decisions to participate.

		There was -

		DR. FISHER: Do you know for the employees, do they get that on top of
being paid?

		MR. CARLEY: That was my understanding.  That was my interpretation of
what I read in the study.  I think the untreated controls were asked to
get hit on the arm with five drops of water in exchange for which they
got an extra $1,200.

		DR. FISHER: John, Richard just made a point.  I think we need to be
careful that the controls were treated; they're not untreated.  They're
treated -

		MR. CARLEY: Right, the unsensitized controls.  Sorry.

		DR. FISHER: Consistent with your point that those - whatever those two
things, the Copper Shield and the potassium dichromate both need to be
looked at.

		MR. CARLEY: Yes, my mistake, thank you.

		The study was reviewed by an independent ethics review board, Shulman
(phonetic) Associates of Cincinnati.  And the record submitted suggested
that the IRB did a pretty good job.  They reviewed; they conditionally
approved; they revised; they approved; and then they monitored the
study.

		The documentation of the IRB approvals is weak.  I say that because
what happened apparently is that the stamped, approved documents from
the IRB were retyped for inclusion in the study report, and in the
supplemental materials documenting ethical conduct.

		I went back and picked my way through the correspondence with the IRB
that was also provided in the supplement, and I satisfied myself that
the retyped consent materials and information for subjects in the study
report was indeed consistent with what - substantively consistent with
what the IRB approved.

		But I was troubled that they didn't follow the more common practice of
submitting stamped approved materials.

		Compliance was asserted with many standards, including the common
rule, the Declaration of Helsinki, the Nuremberg Code, the
recommendations of the NAS committee, the Belmont report, and FDA's
regulations.

		FDA's regulations were there because the IRB which apparently does
most of its work on the drug side, sent in reaction to the submittal
some model language that referred to appropriate regulations with a
request that the investigators adapt this appropriately to this
particular study.  That language was adopted, rather than adapted,
without any change and without making an adjustment.

		The intention was never to send this to FDA.  The intention was always
that this would be a study for EPA.  But they included whole paragraphs
without fixing it.

		On informed consent, all the subjects provided signed consent.  The
process is described in considerable detail, and the process by the way
is something on which EPA had commented at some length when we saw the
proposed protocol in April of 2005.

		The process was conducted by the Exponent staff, not the employees of
Dr. Fowler, who was of course known to the subjects as their
dermatologist, and whose staff they had worked with before.

		And I note in my review that the process showed some confusion about
what constitutes consent and when it was obtained.

		And specifically they talk about individuals who, quote, verbally
agree to participate, end quote, identified in the course of the initial
telephone contact, after a very brief, unscripted discussion of, quote,
information regarding study objectives and protocol and the potential
hazards of dermal testing with chromate compounds, end quote.

		I repeat, if that conversation was scripted, we don't have the
script.  This agreement, i.e. quote, verbally agree to participate, end
quote, is subsequently characterized as, quote, verbally agreeing to
potentially participate, end quote.

		Then in their first office visit, quote, potential participants, end
quote, as they were called then, are informed in more detail, and have
an opportunity to ask more questions.  And then, quote, participants,
end quote, are given a copy of the informed consent form to consider as
they decide whether they want to participate.

		And then they sign the primary consent form and supplemental forms as
appropriate.

		I found no evidence that there were any study-related procedures other
than the administration of questionnaires that were conducted before
obtaining a proper written consent to the full IC material.

		But the description I found both confusing and troubling because of
this ambiguity about what the status of people was, when they had been
contacted but not yet informed beyond the barest minimum.

		I don't - candidates would be a better term than participants, for
example.

		Subject privacy was maintained.  Subjects were free to withdraw. 
However, if they withdrew they were entitled to payment for all of the
days they had done, but no payments were made until the end of the
study, and the reason offered was to ensure regular attendance and
completion of the study, which is clearly an attempt to bound the
freedom of the subjects to withdraw.

		In terms of prevailing standards, this was conducted in the U.S. in
2005, with a reporting phase into 2006.  Plainly the common rule and
FIFRA 12(a)(2)(P) are appropriate standards, and as I mentioned before
40 CFR 261303 requires at the time of submission documentation of the
ethical conduct of the study.

		I want to mention at this point two errors in my review of September
12th on page five.  The citation in the first paragraph of that review
should be to 261704 rather than 1705 as the standard for approval.  The
quotation is correct, but the citation is wrong.  The passage I quoted
is 1704 but I called it 1705.

		And then I also quote text from 1703, which does not reflect the
amended language that took effect on August 22nd, which adds nursing
women to the list of specially vulnerable groups, data concerning which
we can't rely on.

		Neither of those errors or the corrections has any effect on the
substance of the review.

		I noticed in general that compliance with the requirements of the
common rule was pretty good.  But there is a noteworthy potential
ethical deficiency that we don't have evidence to resolve: the
preliminary testing does not appear to have involved IRB oversight.  It
may not have involved informed consent.  In any event it was described
as something involving employees of the organization.

		And so the issues of oversight and consent would have been subtle. 
And that may be a problem, but it is one for which we don't have much or
any real evidence.

		So in summary, there are some gaps in the documentation of the ethical
conduct.  I would characterize them as minor gaps.  Certainly we have
far more information about the ethical conduct of this study than we do
about the conduct of some of the earlier studies that you have reviewed.
 And gaps themselves don't amount to clear and convincing evidence,
which you need according to the regulatory standard.

		I found no evidence that the research was fundamentally unethical;
261703 does not prohibit us from relying on it; all of the subjects were at
least 18; and those potential subjects who were pregnant, trying to get
pregnant or nursing were excluded.  And there is as I mentioned a
possible deficiency in the preliminary study but in my judgment this
didn't rise to the level of clear and convincing evidence that the
conduct was significantly deficient, relative to prevailing ethical
standards.

		Which brings us to the charge questions, which are the usual ones for
studies considered under the standard of 261704.

		Is there clear and convincing evidence that this study was
fundamentally unethical or that it was significantly deficient relative
to the ethical standards prevailing at the time?

		Thank you.

		DR. FISHER: Thank you, John.

		Are there any questions?  Dr. Fitzpatrick, then Jerry then - Michael,
did you raise your hand?  No, okay.

		DR. FITZPATRICK: Just a quick question.  Does it bother you that the
IRB stamp doesn't have an expiration date on it?

		MR. CARLEY: Didn't notice it.

		Something else that - no, no, no, I'm getting confused, there is a
different IRB.

		DR. FITZPATRICK: The stamp - well both of those studies, the stamp
only has the approval date but doesn't have the expiration date on it. 
And you are assuming they approved it for a year but you don't really
know.

		MR. CARLEY: I didn't notice it, but I did notice that they were on top
of the periodic reporting thing, and everything was done in a timely
fashion.  In the event, looking at this ex post, I don't think there was
any substantive problem resulting from that.  They plainly stayed in
touch and continued to monitor the conduct of the study.

		DR. FISHER: Jerry?

		DR. MENIKOFF: I have a general question about benefits, and in
particular, the value of research to society.  And this may be partly
due to the strange status of the study, being that it is sort of a
pre-existing study.

		But in your review of this study, and you have actually said a little
more in your discussion, you note that they are trying to find out more
details about this compound and the risks.  But in the other  studies
we're dealing with later today you more explicitly, for example in the
insect repellent studies, you go into more discussion of alternative
products, and the possible benefit to society for having this as an
alternative product.

		In this study you said less about that.  And I guess I'm just, in
terms of educating myself, was that partly due to the procedural status
of this study, or more generally in terms of EPA's evaluation of the
benefits and looking at this benefit, the consequences of the study.

		Do you more routinely, for example, look at this compound - and this
is getting partly at the comment that one of the public commentators
will make - do you look at the issues relating to what other compounds
do people use to serve the same purpose and factor that into the benefit
analysis or the value to society analysis? 

		Is that clear enough?

		MR. CARLEY: Well, I'll try to respond to the two or three separate
questions that I thought you asked, and if I don't hit the mark I'll try
again.

		What I think you were getting at with your use of the procedural thing
is the distinction between review of completed studies and review of
protocols.

		In the case of this study the question of whether it should have been
conducted is the only relevant one, that we are faced with the
existential phenomenon; it was conducted; it is before us.

		So in that sense I didn't spend a lot of time on scratching my head,
would I have recommended doing it differently if I had reviewed this
before the fact. 

		In fact I did see it before the fact, and recommended quite a few
changes.  But I didn't spend a lot of time on that.

		DR. MENIKOFF: Let me just - I guess my main point is in terms of the
value of the research to society, it seems like at least in this case
you said less in terms of, we're trying to get a more accurate judgment
of the risks that this compound causes, and not get back to the broader
issue of, there are other compounds out there, the issue has been
raised, and we'll see shortly, that other compounds are just a lot less
risky, to what extent does the existence of riskier compounds out there
that our society uses to achieve the same purpose, do you view that in
general as always a part of your analysis of value of research to
society?

		MR. CARLEY: In general I think the answer is no.  Bill, step in and
help me with this if you like, but give me a chance to screw up first.

		There is a provision in FIFRA that talks about criteria for
registration, and basically says that essentiality cannot be made a
requirement.  So we are as an organization in the habit of looking at
the case pretty much in itself.

		In the case of looking at particular studies or study proposals, there
are practical limitations on the extent of the comparison.  When we look
at the questions that we developed for protocols including some of the
questions that you all specifically recommended at your last meeting,
some of these questions will come up, the basic idea being what's the
hole in the fabric that will be filled if this study goes forward?  How
will it fit into context?

		And that's I think the generic form of the question you're asking. 
But I think probably at least up until this point we have not thought in
terms of considering alternative pesticides to this one as a directly
relevant factor in assessing a particular study.

		DR. MENIKOFF: Partly I guess it was just in terms of consistency with,
when we get for example to the insect repellents, it was specifically
raised that the public, there's a hole out there, and this product will
fill that need.  It sounds like you're basically telling me that in
general your jurisdiction, under your rules you're allowed to do, is
similar to the FDA in that once for example - but the FDA analogy is
that the FDA doesn't figure out whether this is a particularly good
drug, but rather that it's safe and effective, and then it's up to the
public to decide whether or not it wants to buy it.

		MR. CARLEY: That's a rough analogy.  It's a pretty good one.  And I
think it's extremely important, although you kind of rode over it, to
make the distinction between discussions that occur before the study has
been conducted, and the kind of discussion we are in now about this
study which is after it is done. 

		MR. JORDAN: This is Bill Jordan.  If I could add a couple of words.  I
want to underscore the last point that John made.

		It seems to me that the inquiry for studies that were initiated and
completed and were not subject to the prior protocol reviews presents a
different situation from ones where we're looking at the protocol and
trying to decide whether or not among other things it's ethical to go
forward with this study, the benefits of the information to be gained by
the study will contribute enough to justify the risks that the subjects
will encounter.

		So John's choice not to dwell particularly on the justification value
of the information to - in how that sorted out, and would you do the
study if you were looking at it afresh seems to me to make a lot of
sense.

		When we get to protocols, we do actually have that choice in front of
us as an agency, and you as a board will have an opportunity to advise
on that.  But it seems to me that the question of how to weigh the risks
and benefits, the benefits of the information against the risks to the
subjects, when you move beyond simply the value of the information is
going to get tricky.  Pesticides have all sorts of risks.  They have
effects on people who may be handling them; there may be residues in
food; they may also affect wildlife; they may also have benefits
compared to other products.

		So trying to figure out whether approving the registration of a new
product based on the results of a human study is an overall benefit for
society is a dicey subject, and it's hard to make those comparisons
particularly at the early developmental stages where people are just
beginning to start the human studies and we may not have a full
understanding of the picture of effects on wildlife, or how effective it
is compared to another product and so on and so forth.

		So I think we'll all have to feel our way forward as we look at
protocols in this area, and trying to sort through this one in
particular will be one of the harder challenges.

		DR. FISHER: Ken?

		DR. KRISHNAN: The Copper Shield in itself is presented as an
alternative to the wood preservatives currently in use, based on the
lowest copper content among the preservatives.  So I don't know if that
helps or addresses any of your concerns, Jerry.  Copper being hazardous
to some of the environmental components like the fish and other
wildlife, there is an interest in having preservatives with lower copper
content.  That is more of a justification that's -

		MR. CARLEY: Let me add a point about that.  The benefits that were -
that the IRB and that we would have considered had this come to us
before the fact have to do with the benefits of the information expected
to be gained from the research, and this comparative copper load in the
environment as a consequence of a speculative significant change in the
usage patterns, the quantitative usage patterns of different wood
preservatives, is getting pretty far downstream.

		Using the specifics of this case, I would be inclined to give very,
very little weight to that argument that a major growth in the sponsor's
sales would represent some increment of environmental improvement and a
social benefit that should be attributed to the conduct of this study.

		I think that is far fetched, whatever the superlative form of that
adjective is.

		DR. FISHER: John, I had some comments and then some questions.

		One is, I think we had mentioned, and it should be underscored, at a
previous meeting, that compensation can't be included as a benefit.

		So I would like - it's probably good for you to note that as your
review goes on.  But I want to affirm that again, because we have said
that.  And this is a previous protocol obviously, but this for future
protocols that can't be done.

		MR. CARLEY: I call your particular attention to the quotation marks at
the beginning and end of the text in the bullet under risk-benefit
ratios.

		DR. FISHER: Exactly.

		My second comment is in some sense a little bit of a follow up with
respect to also including in benefits, right, that the ACC may then be
used in other - one of the benefits was that it could be used in other
products?  If you go back to -

		MR. CARLEY: Not so much in other products, but just at higher levels. 
I don't mean higher concentrations; I mean more broadly, more widely.

		The argument is - quotation marks, or whatever paraphrase marks look
like - the argument is that this study would justify approval of this
product for uses which if they grew to sufficient prevalence would
result in a decrement in the environmental load from the use of wood
preservatives in America or something.

		DR. FISHER: I'm interested in its use in the informed consent.  And
given these are employees, to say that there is going to be greater use
of a product produced by a company in some sense has a certain amount of
meaning within an informed consent that could in some sense, whether
they're signing a noncoercion note or not, be saying that this is an
important study for the company that you need to be involved in.  So
it's just something to think about.

		MR. CARLEY: Remember that there are a few arm's lengths involved.  The
employees of Dr. Fowler were fairly far removed from the Products
Research Laboratory, and from Exponent, who was the contractor that
brokered the study.

		They wouldn't be - I don't see an issue there.

		DR. FISHER: All right, great.

		Then I just want you to - if we look at the risk they describe in the
informed consent, so - and who knows which document this is.  It looks
like it's on page 179. 

		But it says the study is being performed to determine whether you have
a response such as redness, itching or swelling at the site of
application of any of the test materials.

		Then it says, you might develop a rash on your back, and then it says,
if a topical steroid is provided, the study doctor will discuss the
risks of the medication with you.

		Now given that it would be anticipated that some people are going to
experience this, I'm wondering, then that becomes a risk of
participation in the research if in fact you are going to get these
topical steroids, no?

		VOICE: Well, if I can answer that, one of the questions that I had
was, were the individuals that did develop ACD, were they provided a
topical steroid.  It was unclear.  And use of a topical steroid was an
exclusion criterion.  So I was interpreting it as they weren't treated. 
But it was unclear.

		And I was hoping that you guys may have more insight to that.  At this
point I would assume that the topical steroid is treatment post
withdrawal from the study.

		DR. LICCIONE: That is correct.

		VOICE: That you've gotten your reaction, you've taken your
measurement, and you still have skin sloughing off, and we probably
ought to do something about that.

		DR. FISHER: Right.  But I guess my question is, if in fact risks are
associated with the treatment you are going to get, if there is an
experimental reaction that needs treatment, is it appropriate to say,
we'll let you know what the risks are of that treatment if it happens to
you, rather than informing you up front of those risks as a legitimate
potential risk of being in the study.  I guess that's my question?

		DR. CHADWICK:  Excuse me, I'd say that the answer is no, because at
that point you are moving off of research into clinical care, and
whether you use a topical steroid, or you decide this person needs to be
on IV steroids, that's a clinical issue, not a research issue.  As Sean
points out, you need to provide for what happens if, but the what's
going to happen part is clinical, not research.

		MR. CARLEY: Essentially the entire pharmacopeia available to Dr.
Fowler could potentially have been used depending on what happened. 
It's highly contingent, and I think that if you think of the
informed consent as covering the reasonably predictable effects - I
just made a mess - that probably - I agree with Dr. Chadwick.  I think
that goes too far.

		DR. FISHER: Okay, Sue.

		MS. FISH: Also I just want to remind us that we're not the IRB here,
and we're not looking at a prospective study yet to be conducted.  But
is the fact that the risks of the steroid cream, even for some of us who
think it might have been appropriate, is it clear and convincing
evidence of noncompliance with the regulations or fundamentally
unethical in retrospect.

		And I think we could probably debate for a long time whether or not
the risks should be in the next study to be done prospectively.  We'll
probably get some debate.  But the study is already done.

		DR. FISHER: Right.  And just as a reminder, we are not limited to
fundamentally unethical, which is harm and no informed consent, but also
significantly deficient.

		But I think your point is well made.

		Let me ask one more question, and that is the unforeseen risks.  And
the unforeseen risks say there may be risks from participating in this
study that are unknown including an allergic reaction.

		I'm confused.  I thought this study was studying allergic reactions.

		MR. CARLEY: I understood that to be the risk of an allergic reaction
to something other than the hexavalent chromium.  And there were other
materials in the formulation 

		VOICE: More a systemic reaction.

		DR. FISHER: Okay, Gary and then Richard. 

		DR. CHADWICK: I guess I find myself in the role of IRB apologist
today.

		I think you perhaps can read that as an allergic reaction up to and
including anaphylaxis, and maybe that's what they're trying
to say here is that it could be a really really bad allergic reaction. 
I agree with you, that's the whole point of the testing is to get an
allergic reaction.  But I think that what they are probably trying to
say is that although we don't think this is going to happen you could
really go off the charts.

		I suspect that's what - you have to read between the lines.  But I
would agree.  It's not well written.  It's not clear.  And either it
shouldn't be there, or it should be better clarified.

		But as Sue points out, we are not an IRB, and we are not prospectively
giving them -

		DR. FISHER: Right, and what we do have to decide is whether or not the
information was so significantly deficient that people who were making a
participation decision were in fact not informed.  That is part of the
charge for previously conducted studies, which is - to put it as my
question - I don't know how we're going to decide on it.

		Richard. 

		DR. SHARP: I want to come back to a question about compliance with
privacy protections.  There is an inconsistency, at least I consider
this an inconsistency in what's said with regard to this preconsent
contact that takes place.

		In the protocol it's described as being the case that every contact
was made by research staff that did not work in Dr. Fowler's clinic.

		We also have the materials, the script, that was actually used to set
up that conversation.  And the very first statement in that indicates,
quote, I'm a research nurse calling from Dr. Fowler's office.  So it's
hard for me to reconcile how those two things could conceivably be the
case.

		So since that follow up call - or excuse me, that preconsent phone
call also includes the collection of significant medical information
including things like use of immunosuppressive agents and so forth, that
seems to raise some issues, and we need to sort out who made that call.

		MR. CARLEY: The principal author is here in the room and may be able
to help.

		DR. FISHER: Gary.

		DR. CHADWICK: That actually raises a couple of questions that might
need some additional thought by the board in discussion.

		To what extent do we worry about HIPAA on this board and privacy
protections and potential violations or noncompliance with HIPAA and/or
other federal regulations. 

		So that's kind of touching on what you are talking about there. 
That's something that I think John raised.

		A question I had, and again I think this is maybe a policy issue for
the board picking up on John's comment about the IRB approvals being
weak, that they were retyped and reformatted for the report.  I can
understand why a person would do that and not assign any nefarious
intent or whatever.  But I do think originals are needed, and if they
are not supplied they need to be obtained, and it is not acceptable -
this would be my recommendation that it would not be acceptable without
the addition particularly of these documents including the IRB approval
letters and the informed consent that was actually approved and stamped
by the IRB.

		And then my last comment, and again John pointed out the fact that
this talks about compliance with FDA regulations, and in the consent
form it does talk about disclosures and so forth, again touching on the
sort of HIPAA issue.  But it does talk about disclosures, and it only
states that disclosures would be made to the FDA.  And it doesn't
mention anywhere that this would be disclosed to the EPA.  And it makes
one wonder what was the IRB thinking, and should that have been edited
out.

		MR. CARLEY: In the IRB correspondence, they said, here are examples of
the sort of thing that you need to put into this.  You should adapt them
appropriately for the circumstances of this study or words to that
effect.

		So that's I think what the IRB was thinking.  You need to ask a
slightly different form of the question: why didn't they adapt it as
appropriate to this circumstance?

		DR. CHADWICK: Right, and you would think that if they were doing
something to be submitted to the EPA, they would at least mention that.

		Now maybe that's in the HIPAA documentation which may or may not have
been provided, whatever.  This certainly doesn't seem to include
anything about the information to the subjects that this is going to be
- this data is going to be given to the EPA, and in fact in the handout
here anticipating the next talk, we have pictures, colored pictures
which are - in the consent I assume qualify as nonidentifying photos,
which it says are only going to be used for medical publications.

		So potentially you have a violation of consent by giving this to the
EPA, although it's clearly something you would expect of this type of
study.

		So I think there are some deficiencies.  But again not clear and
convincing to that level.

		DR. FISHER: Sue, yes.

		MS. FISH: In the HIPAA documents, just to clarify Gary's question, the
HIPAA documents also relate to FDA, and don't mention EPA.  But that goes
back to the original question, which is whether HIPAA is a concern of ours or
not.

		And I would think that the question is - I'd like EPA's comments on that. 
Because certainly under the ethical standards it says, did the study
fail to fully meet specific ethical standards prevalent at the time the
research was conducted?

		Well, HIPAA has entire sections on research.  And how you implement an
informed consent authorization; all those kinds of things.

		I would think that it would be hard for us - if we thought a violation
of HIPAA was apparent, it's difficult to recommend as ethical a study
that did not adhere to that regulation.

		So I mean that's - a fault if you ask me.

		MR. JORDAN: This is Bill Jordan.

		It seems to me that in terms of ethical treatment of subjects that
protecting privacy is recognized widely as a legitimate consideration. 
And HIPAA, as best I understand it - and I'll readily confess my
understanding is pretty limited, having only skimmed some forms and then
signed because I wanted the doctor to look at me - sets a relevant set of
standards.

		Therefore it seems appropriate to me to think about HIPAA in terms of
these sorts of issues.  But frankly we have not discussed this in any
great depth within the pesticide office, and I would want to confer
with Dr. Lutz and get his thoughts on it, and perhaps talk with our
colleagues at OHRP to see how they sort these things out.

		But in terms of that, it seems to me a relevant factor to look at. 
Whether it rises to the level of being significantly deficient I don't
think we're prepared to offer any views.

		DR. FISHER: Jerry?

		DR. MENIKOFF: Yes, I guess my general take, just thinking aloud a
little bit, is that probably HIPAA - we shouldn't be enforcing HIPAA.  If
we look to the preexisting studies, the studies that were conducted
before this, yes, there was general language about appropriate ethical
standards.  But then on the more prospective studies, which
theoretically I assume are under stricter rules, it's actually enforcing
the actual regulations, the subpart K, L et cetera.

		DR. FISHER: I don't think that's true.  I think the question under
previously conducted study is, did the study fail to fully meet specific
ethical standards prevalent at the time the research was conducted.

		DR. MENIKOFF: Correct.  All I'm saying is in general I would have
thought by and large we were going to apply stricter standards to the
new studies as opposed to the old studies.

		DR. FISHER: I would just add, if we believe a study violated federal
law, whether we want to consider that as a significant deficiency.

		DR. MENIKOFF: The general argument on that is, let's ask ourselves
what standards we are going to apply prospectively and the ones that are
actually under these regulations.

		I think in general most people apply the Common Rule which has its own
language and subsection that deals with there being appropriate privacy
protections.  In general OHRP's view of that is that there are separate
privacy standards built into the common rules that are separate from the
HIPAA rules, and -

		DR. FISHER: Well, let me say that OHRP - and as part of SACHRP we
wrote an entire set of recommendations about how OHRP should handle
HIPAA, which was accepted by the Secretary.  So OHRP does not - it does
include how to follow HIPAA recommendations as part of how it's going to
evaluate certain studies.

		And I'd also point out that HIPAA was in effect in April, 2003, so
that it is the law that covers this particular study.

		Suzanne?

		DR. FITZPATRICK: We don't have all the information here.  Because he
does a lot of research studies, he may have already asked all of his
patients if it's okay to be contacted to be in further studies.  So
therefore there is not any kind of violation if they have already agreed
that they would be willing to be in studies.  So I think that without
further information on what data the people are taking from them and
what their prior agreements were, you can't really say anything.

		DR. FISHER: I agree with you. And that's part of what I think Jerry
was pointing out in terms of post versus what we can require, but I also
want to point out that he cannot do it verbally.  It had to be a written
authorization, especially since he is both the investigator and the
researcher.

		And the extent to which we have those pre-participation authorizations
may or may not be something that we are going to pursue in a study that
was already conducted.  But it's always important to make a statement
about if we had the information or if it's a protocol, where we would be
going with that.

		If in fact the authorizations to provide the information in recruiting
was not signed, then that would be a violation of HIPAA.

		Any other comment?

		Okay, so I believe we have public comment.  And that's from Deborah
Procter of Exponent.

		And Deborah, could you come up. 

		Ms. Procter, you are going to have to introduce yourself.  And also
we're going to be limiting you to five minutes.

		And in introducing yourself, give your credentials as well as who you
are speaking for.

		You don't have to stand up - if you are comfortable sitting down, just
speak into the mike so we can hear you.

	PUBLIC COMMENT

		MS. PROCTER: I'd be happy to.  Thank you.

		I am Deborah Procter.  I am the principal toxicologist at Exponent,
and I am the project manager of this study.

		I am here with my colleague Jeffrey Thrudall (phonetic) who also
conducted this study with me, and Dr. Fowler.  And we are here on behalf
of Forge (phonetic) Product Research Laboratory which is the sponsor of
the study.

		My primary objective in coming here today is to make sure that the
board has an accurate understanding of the study to clarify any
misconceptions about what was done.

		We submitted to EPA 700 pages of documentation, so I'm not surprised
that there is some confusion on certain issues.

		I also wanted to briefly overview the risk-benefit ratio as we see it.

		Probably first and most importantly, the reason there was no
documentation about the preliminary Exponent employee study was because
it was not conducted.  We considered doing it, and it was
referenced in a draft protocol that we had submitted to EPA - we were
hoping to get their input on the scientific information; sort of a draft
protocol that had not gone to the IRB.  We decided to remove it because
they thought that it was unnecessary.  So it was removed before the IRB
saw the protocol.  But there were typographical errors that the IRB caught
and asked us about, and we responded that it was not part of the study.
 And I'm sorry that there was confusion on that.

		With regard to the recruitment process I'll go over it specifically. 
First over the telephone there - Dr. Fowler has a relatively large
practice.  And there are five research nurses who do nothing but
research; they don't work at all with the clinical practice.  So they
have a research operation and then they have a clinical practice.

		So all of the review of the files and the contacting of the
individuals was done by his research nurses, not by his clinical staff
members and not by Exponent.  Exponent did not go through his medical
files.  In fact Dr. Fowler was sensitive to HIPAA in saying, no, you are
not allowed to look through the files.

		These telephone calls were made with an IRB-approved telephone script,
in terms of whether or not people were interested and whether or not they
were eligible.  And then they were scheduled to visit the clinic.  Once they
were at the clinic for the first visit, the study protocol was explained
to them again, and they were given the informed consent forms to review
and sign.

		After they signed the informed consent forms, Dr. Fowler examined
their skin.  They filled out a more detailed questionnaire, and then the
patch test samples were applied.

		So there weren't any chemical exposures until informed consent was
given.

		With regard to payment, what we did is a literature search to
determine what a reasonable payment would be for this study.  We found
that there was a range of about $25 per day to $125 per day for a visit,
so we decided to take the midpoint of this range, of $75 per day.  Dr.
Fowler thought in his experience doing testing that that was a
reasonable amount of reimbursement.

		The ROAT study involved coming to the clinic five days a week for two
consecutive weeks.  So it is considerably inconvenient.  And so we
increased the payment for a particular patient in the ROAT to $90 per
day.

		Payments were made for each visit at the end of each phase.  So
payments were made after patch testing, and again after the ROAT. 
Participants were free to withdraw at any point during the study, and
paying on a daily basis we thought was just impractical, because
we would be writing more than a thousand checks with people coming and
going every single day.

		With regard to minimal risk to the participants, the exposure was
generally consistent with common clinical tests, specifically patch
testing.  Responses are transient in nature, lasting one to three weeks.
 The highest dose in the ROAT in this study was below the clinical patch
test dose.  It's also lower than the highest dose in the Nethercott
study.  And I note that the Nethercott study was said to have posed
minimal risk.

		It's also, as noted by Mr. Carley, 2,900 times lower than the EK
reference notes.  Small areas of skin were exposed, and responses were
monitored.  Applications were discontinued for allergic reactions, and
treatment was actively made available.

		To give an example of the responses that we observed, just to give you
a visual, I gave you the pictures.  This is a mild, a moderate, and a
strong allergic response that we observed.  We only had one strong
allergic response in our study to Copper Shield, and only 17 of the 60
people reacted at all.

		With regard to the value of the research to society, most important
is the information that can be used for risk assessment.  Specifically,
what we provide is dose-response data for application of Copper Shield,
a chemical that has not been tested before.  We also provide information
we believe can serve as a set of skin reference data on
repeated open application tests, which are more representative of
environmental exposures.

		Also, our results generalize to the general population: by normalizing
with the patch test data for all the allergic people in the
U.S., we think it's applicable for projection of induction for
the general population.
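
[A minimal sketch of the kind of population reweighting Ms. Procter describes. The grade categories, the function name, and all the numbers below are the editor's hypothetical illustrations, not the study's actual method or data:]

```python
def reweighted_response_rate(study_resp_by_grade, ref_grade_shares):
    """Reweight per-grade ROAT response rates so the study population
    matches a reference (e.g. NACDG-like) patch-test grade distribution.

    study_resp_by_grade: fraction of study subjects in each grade who
                         responded at a given ROAT dose (hypothetical).
    ref_grade_shares:    share of each grade among chromium-allergic
                         patients in the reference database (hypothetical).
    """
    return sum(study_resp_by_grade[g] * ref_grade_shares[g]
               for g in ref_grade_shares)

resp = {"+1": 0.0, "+2": 0.3, "+3": 0.8}    # made-up study response rates
shares = {"+1": 0.6, "+2": 0.3, "+3": 0.1}  # made-up reference grade mix
print(reweighted_response_rate(resp, shares))  # 0.17
```

[The point of the adjustment is visible here: a study population heavy in +3 reactors would overstate the response rate relative to a reference population dominated by +1 reactors.]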

		The distribution of societal benefits would extend to workers with
workplace exposure to hexavalent chromium, both in wood treatment and in
the many other occupations where there is hexavalent chromium exposure.

		Consumers of treated wood who could potentially be exposed to
hexavalent chromium.

		And let's see, it's the wrong slide, it won't go back.  Okay I'll just
have to finish up.

		And then the benefit to the greater community would be that this data
could be used to set soil standards.  Certain states, Massachusetts and
New Jersey have already used patch test data to set soil standards for
hexavalent chromium.

		And I'd be happy to clarify or address any questions.  There have been
some questions that were not quite correct, or confusing in what we
presented, and I'd be happy to try and address them again if I can.

		DR. FISHER: Thank you.  Before we go to questions, I have a question
myself.

		The next public speaker, Howard Maibach, where is he, is
that about this study?  

		(Off-mike voice)

		Oh, okay, then why don't we put questions to Ms. Procter.

		And I was wondering, Ms. Procter, I'd like to give you an opportunity,
are there specific points that you heard us questioning about that you
feel that you would want to --

		MS. PROCTER: I made a list as I sat in the audience so I could go
through it.

		First, on the scientific issues, and maybe some of this has been
covered, but I'll just make certain clarifications - no guarantee that I
got all of it, so please feel free to interrupt.  With regard to how
it's relevant to environmental exposures: you can wipe wood with wipes
and measure hexavalent chromium when it is freshly treated with these
chemicals, so directly touching it would give you exposure to the solutions.

		Why we had a disproportionate number of plus three, more sensitive
individuals: we did a preliminary patch test at the beginning of the
study, Phase One, and so did Nethercott.  We lost 30
percent of the people - people who we thought were allergic did not have
a positive reaction.  Nethercott lost almost 50 percent.

		So these -- when we go back and look at what these people's score was
historically, well the first time they were patch tested, they are
almost all plus ones.  So I think what has happened is that their
allergy has waned over time.  And they are no longer allergic or just --
not allergic enough to react to a patch test.  So I think that is why we
have more plus threes in our population.

		More of the males also were plus twos and plus threes.  And I don't
have a specific explanation for that.  Some research has shown that you
are more sensitive if your sensitizing exposure was higher.  So of
the people who were sensitized by cement, which is classic for exposure
-- you know allergic contact dermatitis from hexavalent chromium, most
were men and had stronger responses so that could be a reason why we had
a larger number of plus two and plus threes that were men.

		We normalized the ten percent MET because we realized how very
sensitive the ten percent MET is to the makeup of your study population.
 And it was in hindsight that we did this.  And it actually came from a
comment that EPA made about how our population compares with the
general population.  So we did that kind of in hindsight.  But I think
it is pretty important because, you know, one additional plus three, one
less plus one, really can create quite a bit of variability.

		And we looked for historical data on the Nethercott population to do
the same with his data but unfortunately we couldn't find the patch test
scores.

		Why were the controls different with regard to irritant responses? 
I'm unsure.  I will comment that in the potassium dichromate study, we
did have one irritant response to potassium dichromate.  That was to the
lowest dose of potassium dichromate.  But there was one response.  It's
unclear.

		The irritation that occurred, it occurred in nine individuals in our
study.  In four of the nine, all they had was a response that Dr. Fowler
deemed to be irritant.  The other five had allergic and irritant
responses.  The responses didn't occur on the first day.  The first
irritant response occurred on day four.  There was one on day five.  And
after that, the responses were in the second week of the study.

		With regard to the NACDG database, Dr. Fowler is a member of the NACDG
and has been contributing data to that database historically.  And
because he does more patch tests than any other physician in the United
States, you know, more data go into that database from him than from any
other physician.

		And it is reasonable to believe that any of the people who were patch
tested originally are already in that database although some of the
people had patch tests that were quite a bit older or newer so they
didn't fit the same time frame.  But otherwise they should have been in
the database.

		The two people who we didn't know whether they were allergic to
hexavalent chromium or not were patch tested.  And in that patch test,
there was a response.  I think there was some question as to whether or
not they really were allergic, one had a plus three response.  The other
one had a plus one response.

		In the ROAT, the plus three reacted to very low doses just like most
of the plus threes did in the study.  The plus one did not react, just
like the rest of them.  So I think that they had been sensitized
previously.  They just weren't aware of it.  It surprised us as well.

		Patch testing was conducted between 1.5 and four months before any
ROAT testing was done, so quite a lot of time had gone by.  And Dr.
Goudrall (phonetic), who is here with me in the audience, recalls that
the nurses asked before the ROAT started whether the patch test site had
healed.  And the answer was yes.

		Exponent was the primary designer of the study.  Dr. Fowler
provided input.  And then with regard to the use of only one physician,
I have been involved in both the Nethercott study and a 1999
Fowler study in which we used multiple physicians, and those studies,
though, weren't ROATs, so participants didn't have to be there every
single day.  They just came in for patch test reading.

		And what we found was great consistency in the way the different --
two different physicians read the same result.  That was a patch test
and this is a ROAT.  So it's different.  But we just didn't think it was
feasible to get another physician to come in and do this at the same
time with the potential exception of other dermatologists that work in
that practice.  But we didn't include that.

		The loading of chromium in the Phase One patch test, we used the .25
percent potassium dichromate but we didn't weigh the Finn chambers
before they were applied because it was the standard patch testing
approach.  And so we weren't really thinking about that specific aspect.

		In researching Finn chambers, they make eight millimeter, 12
millimeter, and 18 millimeter Finn chambers.  Most patch testing, the
vast majority, is done with eight millimeter Finn chambers.  Those are
smaller, obviously - about two-thirds the diameter, or about half the
surface area, of the 12.

		And in part because they test 65 to 100 chemicals all at the same
time, so they need not very much -- you know they use a little bit of
area.  And also I understand from Dr. Fowler they are a little less
expensive.

		We used the 12 millimeter because Dr. Fowler felt more confident that
he could judge the difference between an irritant and an allergic
response with a larger exposure area.  So that is why we did the 12
millimeter.

		With regard to the ethical issues identified by Mr. Carley, you know,
I guess I apologize for the confusion that was generated in those
reports.  We did submit the IRB-stamped documents ultimately.  We had
retyped them for just typographical reasons - so the page numbers would
be right, of all things - which now seems rather silly.

		There were clearly things that needed to be revised in the draft
protocol that EPA reviewed in April.  And I believe that was rectified
by the IRB review.  They made a lot of changes to the informed consent;
they rewrote it completely.  So many of the things were modified by the
IRB with regard to ethical issues.

		And with regard to the benefits to society, the potential benefit
associated with less use of copper - less introduction of copper into
the environment - was really portrayed as a potential secondary
benefit.  And, you know, more than
50 million pounds of copper is used in treated wood in the United States
annually.  So there is a tremendous amount of copper that is put into
the environment in the form of treated wood.

		Some of these wood treatment chemicals contain 75 percent copper.  So
using a chemical that has half that much copper, you know, potentially
could be beneficial.  Like I said, it is something that we put in there
for EPA consideration.

		Steroid cream was offered to all of the participants who had a
reaction but none of them wanted any cream.  So it turned out to really
not be a significant issue.  Perhaps these are people that have
dermatitis, so the small reaction that they observed was not anything
that significant.

		I know that Dr. Fowler is very sensitive to the requirements of HIPAA. 
And if there are specific questions with regard to whether or not HIPAA
was violated as part of this study, I know he would like the opportunity
to reply to them and I won't attempt to do so because I'm not a
physician.

		So feel free to ask any questions.

		DR. FISHER:  Thank you very much for such specific responses.  I think
both your presentation and your responses were --

		DR. FISHER: Okay, and then Rich. 

		DR. CHADWICK: Just sort of following on to what Sean was asking, what
is the relationship between Exponent and Dr. Fowler's practice?  Are
there any corporate entanglements?

		MS. PROCTER: None whatsoever.  They are a separate entity.  We hired
them as a subcontractor.

		DR. CHADWICK: So none of Dr. Fowler's research nurses or clinical
nurses or administrative people are part of Exponent?

		MS. PROCTER: Correct.

		DR. CHADWICK: Exponent is entirely separate as I understand then,
except I guess there are some relative relationships or not?  No?

		MS. PROCTER: There is no relationship other than the fact that we
subcontracted with them to work on this study.

		DR. CHADWICK: Okay.  So basically as I understand the role then,
Forest (phonetic) - if we come back to the FDA model in drug development
and so forth, Forest is the sponsor, Exponent is essentially a contract
research organization then, and Dr. Fowler and his practice are the
site, the investigative site and the investigative site investigator? 
Would that be -

		MS. PROCTER: I think that that is a relatively accurate
characterization.  Exponent is a consulting firm that does engineering
and scientific consulting.  So we're not a traditional entity
that would manage research.

		DR. CHADWICK: Okay.  And then you said on about the second or third
slide that you had this in house study that you were going to do but
didn't do.

		Now as I understand it, that was really a sort of proof of concept, is
it going to work or whatever.  But that was not done at all? 

		MS. PROCTER: Yeah, it was actually my idea.  I thought it would be a
good idea to see if we were administering the right amount of solution,
if it would run all over the arm, what were the practical implications
of this.

		But it turns out you can figure that out with water as well, so we
didn't need to do it.  And then we were going to try to administer
chrome, and then wipe it off with wipes, but the wipes were contaminated
with copper.

		So we just decided that it was a bad idea, and my boss encouraged me
that it was a bad idea.

		DR. FISHER: Richard.

		DR. SHARP: I'm going to just follow up on a couple of questions I
posed to the EPA group.

		The - it seems as if the load on the skin during the patch test was
higher than any of the open application loads, but we're not quite sure
what it was.

		MS. PROCTER: That's a correct interpretation.  The Finn chambers are I
think the predominant way of doing patch testing in the United States. 
An eight millimeter Finn chamber holds I believe 15 microliters of
solution.  If you do the math, that gives you a dose on a typical
Finn chamber of somewhere around 25 micrograms per square centimeter. 
And this is patch testing that is done all the time when people
come in with allergies - people who are not allergic to
chromium; common clinical practice.

		We used a 12-millimeter Finn chamber, which can hold more chromium than
an 8-millimeter Finn chamber, and based on the manufacturer's
recommendation we calculate a dose from that of around 45-50
micrograms per square centimeter.

		So the larger Finn chamber would hold more chromium, and then the mass
per area would be larger.
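
[The dose arithmetic described in the testimony can be sketched as follows. The function name is the editor's, and the sketch assumes a solution density of about 1 g/mL, a 0.25 percent potassium dichromate solution as stated, and that the full chamber volume contacts the skin area under the chamber:]

```python
import math

def cr_dose_per_cm2(diameter_mm, volume_ul, dichromate_pct=0.25):
    """Hexavalent chromium load (micrograms per cm^2) under a Finn chamber.

    Assumes ~1 g/mL solution density and full skin contact over the
    circular area defined by the chamber diameter.
    """
    area_cm2 = math.pi * (diameter_mm / 20.0) ** 2                  # mm diameter -> cm radius
    dichromate_ug = volume_ul * (dichromate_pct / 100.0) * 1000.0   # uL of solution -> ug of salt
    cr_fraction = 103.99 / 294.18                                   # Cr mass fraction of K2Cr2O7
    return dichromate_ug * cr_fraction / area_cm2

# 8 mm chamber holding ~15 microliters, as described above:
print(round(cr_dose_per_cm2(8, 15), 1))  # 26.4, close to the quoted ~25 ug/cm^2
```

[The 12-millimeter figure of 45-50 micrograms per square centimeter cannot be reproduced without the manufacturer's recommended fill volume, which the transcript does not give.]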

		DR. SHARP: Okay, so if that is the case, if the mass per area in the
patch testing for this study is greater than what is in the NACDG
database, couldn't that explain why you had more two plus and three plus
reactions?

		MS. PROCTER: Potentially, yes.  We did examine that in detail in the
study, because what we did was, we went back and looked at what the
original patch test grade was for these individuals when they were first
patch tested with 8-millimeter Finn chambers.

		The vast majority of them had no change; they had exactly the same
response as they did before.  Also keeping in mind that 26 people, or
approximately 30 percent, who were patch tested with an 8-millimeter Finn
chamber, when repatch tested with a 12-millimeter Finn chamber didn't
have a positive reaction.

		There is some variability with response over time, clearly.  Of those
who had a different response, who didn't have the same response with the
eight and with the 12, a few had a higher response with the eight, but
more had a higher response with the 12.  

		So it's the situation where the data are not perfect, but if you look
in the report there is really detailed information on that.  But we
think it's an important consideration when calculating that, because of
how sensitive the 10 percent response level is to the makeup of
your population.

		So that was one element of the study that it would have been nice to
avoid, but it was done.

		DR. SHARP: And a follow-up to that then: since the adjustment to the
empirical findings of this study was done with this national North
American database, you obviously have that database.  EPA said they did
not, but you must have it, right?

		MS. PROCTER: Actually Dr. Fowler, being the president of the North
American Contact Dermatitis Group, calculated for us the percentage of
each of the responses, and he gave us that information, and he put it
into our report.  I don't have the database, but he has access to it.

		DR. SHARP: I guess I am just raising a kind of question about
representativeness.  The report refers to it as representative of the
general U.S. population.  Isn't it in fact data collected from a
relatively small number of dermatologists who do patch testing?

		MS. PROCTER: Perhaps Dr. Maibach can explain further.  But I believe
there are 14 members, and I don't know how representative they are of
dermatologists.

		I know that for a lot of the patients who participated, Dr. Fowler
was not their primary dermatologist.  They were referred to him for
testing, so they came, some of them quite long distances, to be patch
tested by him.  And then they returned to their regular dermatologist
for followup.  And that is one reason why he has a very large database
of information.

		But I'm not sure if it's representative.  I know that in the most
recent publication there are almost 6,000 people who were patch tested
and their results incorporated into the database.  It's just that not
very many people react to hexavalent chromium.  There were fewer than
500 for that time period who had a reaction to hexavalent chromium. 
It's a relatively rare allergy in that regard.

		DR. SHARP: Have you visited the North American Contact Dermatitis
Group website recently?

		MS. PROCTER: No.

		DR. SHARP: It struck me as funny.  It's run by somebody in Denmark,
and it was last updated in 2001.  So I guess they are not really web
people.  Anyway, just an aside.

		You mentioned that you felt that this study would benefit workers.  I
wonder, given that workers have pretty different exposure experience,
what is the connection for you?

		MS. PROCTER: Well, for example, OSHA has updated the permissible
exposure limit, in fact the entire rule for hexavalent chromium.  And
they reviewed the dermatology literature, but identified that there was
a data gap in really understanding elicitation of dermatitis from
hexavalent chromium.

		And I think that what you could do for, like, a work environment is you
could measure levels of dust in mass per area, or levels of liquid
chromium in different work environments, and then you could get an idea
as to whether or not direct contact with a piece of equipment could pose
a potential allergic response in individuals.

		So that's what I was thinking the potential application would be.

		DR. SHARP: Well, I guess we would expect those exposures to be much
higher than the dose range in this study.  So I guess you'd have to
extrapolate.

		MS. PROCTER: I guess it depends on the environment.  For most work I
would assume you have to wear protective equipment.  Hexavalent chromium
is well recognized to be highly irritating, so there is a certain amount
of protective equipment that is required.  So I was thinking more of
secondary-type exposure.

		DR. FISHER: I'm going to take one last question and we're going to
move, only because we're running so late.  I don't know why ROAT is so
exciting to us. 

		Okay, Kannan. 

		DR. KRISHNAN: The last question will be in two parts.

		(Laughter)

		DR. KRISHNAN: The controls were all female.  Was it random,
intentional, or does it not matter?  You didn't comment on that.

		MS. PROCTER: No, they were all female, and it happened just because
those were the individuals who volunteered to participate from Dr.
Fowler's practice.  It would have been better to have a gender-matched
control group.

		DR. KRISHNAN: And it's still not clear to me whether the original IRB
documentation or the retyped copies were provided to EPA or not.  I know
you addressed it but it wasn't clear to me.

		MS. PROCTER: They both were.

		DR. KRISHNAN: It was provided or no?

		MS. PROCTER: Yes, they were ultimately provided.  They identified
that, as well as several other things they wanted to review, including
the minutes from the IRB meeting.  And we hurriedly put that information
together and sent it in.  It was my mistake that I didn't include the
version with the stamp on it.  I was just trying to make the page
numbers work - the page numbers on these documents have to read, say, 66
of 389, and having them in the document allows you to do that - but it
was not a good choice in hindsight.

		DR. FISHER: All of the official IRB correspondence is scanned in the
supplement document.

		Thank you very much, and now Mr. Maibach - Dr. Maibach.  Were you a
doctor, by the way?

		MS. PROCTER: No, I'm not.

		DR. FISHER: Okay.

		DR. MAIBACH: Madame Chair, colleagues, surely those of you who know me
know that I will be mindful of your gastrointestinal tracts and lunch.

		And also I think it is only fair, for freedom of information and
equality, to state that I am one of those five-day-a-week
dermatologists - not really, but in theory - and I appreciated that
remark.  And I'm also one of those web-disabled individuals responsible,
but I will see that it is corrected.

		You know my young colleagues love the web, and they felt they had to
get it on the website, so we found some young guy in Denmark who really
wanted to do it, so that's why it's there.  But it's there; we will
update it.

		My point is very simple.  This panel and the agency have a very serious
responsibility.  I looked over my notes from several months ago of the
extensive discussions of what allergic contact dermatitis is - given to
the other group that has been referred to - and it was not as clear as it
should be.

		Point one: allergy, as defined by classical immunology, is growing all
around you.  The estimate - which I cannot verify today is true, but it
probably is - is that of all the chemicals submitted to the European
group that is reregistering chemicals under REACH, one third are
allergens.  What are we going to do about that?

		Furthermore, now that we have even easier animal testing to work with,
the local lymph node assay, many chemicals that I deal with in my work,
and I have been a worker in the past, are allergens.

		Clearly we have to separate the left from the right hand part of the
brain, or the upper half and the lower half, a way of separating all
chemicals that can produce some type of an immunological type IV
response from those that can be appropriately and wisely handled to
subserve a function that one of the species on earth, Homo sapiens,
wants to do.

		The patch test is wonderful.  We've had it for 100 years.  We're still
advancing the science and the art, and it can be used for assessment of
what and how you are exposed to chemicals.

		But the open application test, which we have been using for 40 or
almost 40 years since our first publication, has in this particular
study - as an exemplar for going forward - given us a highly refined tool
or technique to determine what levels are likely to cause trouble.

		So I know all of you are tired, all of you are hungry.  But the
exercise that the sponsor of this study, and the agency staff working
with the sponsor, have given us is the largest and, for all of its
slight flaws, the most elegant step forward toward the hundreds of other
chemicals - fragrance chemicals, preservatives, you name it - because so
many of them in the animal assays are allergenic.  It gives us the
chance to really refine how you notify the consumer of the chemical what
level is probably going to be tolerated.

		That's all, have a good lunch everyone.

		DR. FISHER: Dr. Maibach, would you just name your affiliations and I
guess your tie to this database, because I am one of those people who
don't know you, and we need it for the record.

		DR. MAIBACH: Howard Maibach, University of California, San Francisco
Medical Center, San Francisco, professor of dermatology, long interested
in dermato-toxicology.

		DR. FISHER: Thank you very much.

		Yes, Dr. Lebowitz.

		DR. LEBOWITZ: I want to know what you think of the specific protocols
used by Dr. Fowler in terms of the - where the different doses were
placed, the intervals between the doses, and how they were read, et
cetera. 

		Has that become a standard protocol that's used by allergic contact
dermatologists?  Or do you think that there are any questions about that
specific protocol that would affect our judgment?

		DR. MAIBACH: Well, Dr. Lebowitz, I really appreciate your asking me
the only question I might know something about.  That's a good feeling.

		Point one: it, like so many other things that face animal and man, is
a choice, and the science behind the choice is modestly well understood;
but the rationale of getting the job done - because we do some of this
work ourselves - is also understood.

		The further apart you spread them, the less likely you are to turn on
the hundreds of chemicals that circulate in your blood to produce what
we referred to earlier as excited skin syndrome.

		But if you really do only one level at a time, the test goes on
forever.  As far as I can see, this is as good a balance as I've seen. 
The next time this is done, I suspect - two years from now or whenever
it is - that people will probably randomize.  But compared to what we had
before, the Nethercott study, we didn't have this data.

		So I'm sure the regulators who use that will be thrilled when this
becomes available.

		DR. FISHER: Lois.

		DR. LEHMAN-McKEEMAN: A question for you in regard to the
interpretation of the data in front of us.  And in particular, I'd like
to query your experience in distinguishing irritation from allergy.

		Under this particular case where data were analyzed without the
irritation component considered, what is the most appropriate and valid
way to analyze these data?

		DR. MAIBACH: There again is no simple answer.  The simple answer would
be that it won't help us clinically; for 50 years now, the people who
preceded me and the generation going forward have received research
funding to try to simplify all of this.

		We get funding - and if I wanted to start another lab today I probably
could - to try to avoid using human testing and instead do this all with
a blood test.

		And since 1963 - that's the first publication - you can occasionally
make it work.  But A, it's not quantitative; B, it's not clinically
interpretable; and C, even though we have failed for 50 years, it'll
happen.

		But in the meanwhile the Procter family here chose to err on the side
- and I don't like to use the words conservative, radical, liberal any
more - but in a way that would, if anything, make the chemical look more
allergenic than it is, by including the irritation responses.

		But in the big picture, and looking at the data, it looks like that is
not a significant issue in the interpretation today.

		DR. FISHER: Gary.

		DR. CHADWICK: Just to clarify, are you part of the NACD group?  

		DR. MAIBACH: Well, for all of its horrible failures, including not
keeping its website up in Denmark, I am the father of it, and still a
member, yes. 

		DR. CHADWICK: And as I understand it, there are 14 people?  How large
is this group?

		DR. MAIBACH: I think that is a good point.  I would like to say that
the country in which I have been raised and spent most of my time is the
most advanced country in the world dermatologically - and in fact in a
few areas it's true.  But diagnostic patch testing, and what you are
trying to do - setting rational levels, knowing what you are dealing with
- is a minor part of the American dermatological community.

		I would suspect that there are two little groups.  One, the one that I
founded, the North American Contact Derm group - 12, 13, 14 people; you
are dealing, amongst 300 million, with obviously a tiny microcosm.  

		There is another group, which was founded and is loosely related to
this; there are about 300 of the 300 million people who do enough
diagnostic patch testing that they belong to a little society that we
have.

		Now, when you compare that to Scandinavia: in Scandinavia, if I wanted
to be - say I was - a real dermatologist, I would have to have a year's
fellowship in learning how to do that.  We don't have this in the United
States.  Most of our people are just sort of self-taught.

		However - and I know what you are driving at - we recently did an
exercise that nobody in this room, unless they were the reviewers, knows
about.  We recently took advantage of the power of the computer to
identify isolated studies done in India and in Italy and ones that we
did in the United States, to try to determine the relationship of the
frequency of positive patch testing in a normal population, meaning
people not going to see a physician, to a general population.

		It turns out that there are other institutional review boards of
various types, and there are now enough studies, enough bits of data - it
doesn't help you with the sophisticated quantification in terms of dose
of a use test; it's just patch testing.  Many people felt for decades
that you really couldn't use these specialized groups of people who went
in to see a physician to give you any hint of what goes on in the
general population.  But in fact, when we tabulated all that's available
now - it's not a huge amount of data - there is not an enormous
discrepancy for many allergens.  Namely, the normal population has,
within one order of magnitude, the same level of patch test sensitivity
as the patients who see dermatologists or allergists.

		DR. CHADWICK: And then I think my last question can be answered
probably either yes or no.

		Do you have any connection with this study or with Exponent?

		DR. MAIBACH: No.  I have known of Exponent's existence for many years. 
Some five or six years ago I shared with them all of the data we had on
chromate.  But I am not an active participant.

		DR. FISHER: Thank you very much.

		Are there any other public comments?  Yes?  Okay.

		MR. FELDMAN: Madame Chairman, members of the committee, thank you.  

		My name is Jay Feldman.  I'm executive director of Beyond Pesticides,
which is a national nonprofit organization that works on review of
pesticide hazards and promotion of alternatives.

		We submitted a letter to EPA back in July that I hope you all have
seen, and then another letter on the 10th of October on this issue.

		I'd like to today address the issue of the societal benefit review
question which you heard about earlier.  In this case, in the case of
ACC, we're not satisfied with the discussion that you heard earlier, the
activities of EPA regarding societal benefits review, and we don't think
you should be either.

		This committee must answer the question, we believe: is there a
potential societal benefit to the use of hexavalent chromium and ACC? 
What is your process for evaluating this?  And where is EPA's analysis?

		In fact, based on the conversation or discussion we heard earlier, or
I heard earlier, I would say that EPA staff has an inherent conflict of
interest on this discussion and you the panel need an independent
assessment of the determination of societal benefit.

		And I don't say this lightly.  This is an agency or this is an office,
OPP, that is embroiled in controversy.  Nine thousand scientists have
challenged its process and substance on review under the Food Quality
Protection Act, including scientists associated with the American
Federation of Government Employees.  

		The Inspector General issued a report back in August that raised
serious concerns about whether EPA's substantive review was adequate in
protecting human health, as opposed to merely meeting benchmarks for
timelines under the statute.

		And earlier, a year or two ago, the IG again raised the concern that
EPA's decisions were politicized.  That's why your role as an
independent outside committee is essential to this process, and we
submit that going forward without an adequate societal benefit review -
and you did not get one earlier today - is a travesty, especially when
we're dealing with a chemical such as hexavalent chromium.

		Now our understanding is that the HSRB did not review the test
protocol prior to its execution, but is now in the process of doing that
ROAT study review, reviewing the analysis at this meeting.

		And I should mention, this issue of societal benefits is not an
existential question.  This is not a question of when the study was
done, but whether you, this committee, this board, will allow EPA to
rely on an unethical study - an unethical societal benefits review
process.

		This is not an existential question.  The question is whether the
societal benefits review process is an adequate review.

		I would like to note that from an ethics standpoint we believe that
the committee - or others - should also determine how and when there is
disproportionate risk in different population groups.

		I don't know what the situation is here, but if we're going to do a
full and fair review of benefits to whom, and who is tested, we need to
look at disproportionate risk to people of color, people of low
education, and people of low income.

		Human testing was never intended to be operationalized in a vacuum,
outside the context of societal benefit.  An outside panel such as this
must rely on the highest ethical standards, which we believe have been
violated in this instance.

		Now I say that turning to our October 10th letter, in which we cite
the Nuremberg Code, as you know, which says - quoting - the experiment
should be such as to yield fruitful results for the good of society,
unprocurable by other methods or means of study, and not random and
unnecessary in nature.

		And if EPA, as it stated earlier to you, believes that this process is
too complex and that the factors are not easily equatable, then EPA is
not the entity to do this review.  And certainly you cannot glean from a
statement like the one you were given earlier that the review is, oh,
the best we can do, and therefore you must accept it.

		Now, we are dealing with a situation where virtually all of those in
the wood preserving industry have withdrawn the registrations of
chemicals that include hexavalent chromium.  We have proven - we have
proven, even though EPA, still after all these years of criticisms from
the U.S. General Accounting Office - or U.S. Government Accountability
Office - has failed to adequately review the benefits of pesticides, and
in so doing cannot come before this committee with a benefits analysis.

		But if you look at what the industry has said to you, the industry has
said to you this chemical has no societal benefit.  The wood preserving
industry can exist profitably - which is what EPA looks at, the benefits
to the industry.  The consumer can be served adequately.  And somebody
earlier referred to the marketplace determining societal benefit, which
I do not believe is appropriate for this committee to embrace.  But be
that as it may, let's say you take that position: the marketplace has
said we don't need hexavalent chromium.

		So alternatives exist and are widely used.

		We know already that hexavalent chromium is associated with adverse
effects.  We know it's a carcinogen, for example; we know there are
other kidney and liver effects.  And certainly the data we have could
suggest, when weighing the use of a human study, that this is totally
inappropriate.

		The research may primarily benefit FTRL, but certainly does not
benefit society at large.  And again, my closing comment on this is that
the process you've gone through, and that I witnessed this morning,
hearing the discussion - I just saw the benefit review, and I have
certainly seen all the paperwork - does not constitute an adequate
societal benefit review.

		And I believe to meet the burden and the duty to protect human health
- my sister, my brother, my mother, my father, my wife, my cousins - the
people that might be engaged in these studies, to protect your family
and mine, you need to as a committee, as an independent committee,
receive an independent review, not from the EPA staff, but from an
outside reviewer.

		Thank you very much.

		DR. FISHER: Thank you.

		Are there any questions?

		Thank you very much.  Any other public comment?

		Now I have to figure out how long we should go for lunch.  Can we come
back in half an hour, at 1:30?  I apologize for going so long - and I
guess I will also suggest that when we come back, those who are the
leads - I think we've talked about this so much, I don't think we need a
summary from our primary and secondary reviewers, but to address the
positives and negatives, either one, and just get those points, because
I think we've really gone over this, for whatever reason.

		(Lunch break)

		DR. FISHER: Okay.  So we promise we are going to - I've asked anybody
who is commenting on the board, the primary and secondary reviewers, I
don't think we need a summary of the study.  We seem to have done that
pretty well.  And I think we should go over the strengths and weaknesses
and any recommendations that we might have.

		Then I think I will also suggest that we go with the science, we go
with the ethics, and then we have a discussion rather than asking each
other questions, because I think we did that also already.

		Okay, Richard?

		DR. SHARP: Okay, here are my thoughts about this.

		A clearly stated purpose of the study.  The sample size was chosen
without a power calculation; there was reference to the Nethercott study
being about the same size, so I think it was an investigator judgment
that this would be adequate.

		It did seem to be adequate for their purposes of calculating the MET 10.  

		Repeat open application testing provides a more realistic evaluation
of human exposures than does patch testing.  And I think that is an
important point that was underscored by Dr. Maibach.

		Use of mass per unit area, rather than concentration, is the
appropriate metric for a dermal exposure study, so that is a good point
of this study.

		I thought that the dose levels were thoughtfully constructed.  They
explained why they chose their doses and they provided a range.  This
was also a very well written study, by the way, whoever wrote it.

		The first concern comes up with the recruitment of controls from
employees of the group doing the work.  And the fact that they were all
female, when it would have been very easy to have purposefully chosen a
control group that was representative of the treatment group.  I
consider that to be a deficiency in the study design.

		None of the controls had any allergic response, and only one had an
irritant reaction to potassium dichromate, so maybe it wouldn't have
made any difference.

		When I did look - I mentioned this already so I won't belabor it - but
when I did look at the data in terms of one plus ranking versus two plus
and three plus ranking on the sensitized group with the patch test, it
did seem that there was a difference between men and women, and yet
apparently the conventional wisdom is that there is not.  So I think
that is a point we should raise in our discussion.

		The distinction between allergic and irritant responses I was
uncomfortable with, because 22 percent of the treatment or sensitized
group had irritant responses, and none of the controls did for Copper
Shield, and one did for potassium dichromate.

		Those numbers are small, so that could be by chance.  It wasn't clear
to me at this low level whether or not the distinction was made
effectively.

		Probably the largest issue that we should think about from a
scientific point of view is that all of the observations were done by
one individual who wasn't blinded, knew all the members of the control
group, knew all the people in the sensitized group, knew what doses were
placed where.  And it's just human nature that some judgmental issues
can come in, particularly when you are using a semi-quantitative or kind
of a scoring system for these kinds of appearances on the skin.

		So I think that that is a major weakness of the study that there was
no independent way of spot checking even that work.  That was all done
by one person, and that person was very much involved in the study and
its outcome.  So I won't belabor that.

		And then I have concerns about the use of the North American Contact
Dermatitis Group database for adjusting these data.  We noted during the
discussion earlier that the load on the skin in the patch testing done
for this particular study was perhaps about twice as high as what would
be used for the database data.  We can't say whether that caused more
three-plus reactions or not.  Clearly Exponent did a good job of trying
to deal with this question, but I think in the end there is enough
uncertainty that the adjustment is - if you could put confidence
intervals around it, they would be awfully big.  So I would probably not
advise the agency to drop the actual empirical data in favor of an
adjustment of that kind.
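
The point about wide confidence intervals can be made concrete with a quick sketch.  The numbers here are hypothetical (6 responders out of 60 sensitized subjects, i.e. an observed 10 percent response rate - not figures from the study), and the Wilson score interval is one standard way to put a confidence interval on a binomial proportion:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 6 responders out of 60 subjects at some dose -> observed 10%
low, high = wilson_ci(6, 60)
print(f"observed 10%; 95% CI roughly {low:.1%} to {high:.1%}")
```

Even with 60 subjects, an observed 10 percent rate is compatible with true rates from roughly 5 to 20 percent, which is the sense in which any adjustment layered on top of such an estimate carries substantial uncertainty.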

		DR. FISHER: Thank you, Richard.

		Lois?

		DR. LEHMAN-McKEEMAN: In general I agree with much of what Dr. Fenske
has already indicated.

		I think overall this is actually a fairly high quality study; in
particular, the open application design and the dose selection rationale
I think were indicative of reasonable judgment in study design.

		I also agree that the use of a single individual to review the arms is
potentially a weakness, but I don't see it as a major limitation to the
interpretation and use of the present data.  In some respects it does
provide some consistency.  And - I'll digress for a moment - my biggest
concern with this study was the investigators' inability to decide how
to interpret the data.  That is to say, seeing multiple METs calculated
based on whether or not you used irritation, it would seem to me that
there should have been, from their familiarity with the data, a
perspective on what was the most reasonable thing to do.

		And so my recommendation is that the data including irritation should
be used in the final analyses.  And if you do that, I'm less concerned
about the use of a single individual to read all of this, because in my
own simple mind what I sense is that that means anything from red skin
on up is actually being qualified as a response.

		So I think in that regard I tend not to feel that that is a major
weakness - again, with the recommendation that the data from those
responses, including irritation, be used.

		And I would echo, and actually reinforce, the point about the
inappropriate use of the historical database.  My own perspective in
using historical databases for things like spontaneous carcinogenicity
rates in laboratory animals is that they provide a very good qualitative
comparison with which to evaluate your present cohort.

		And so it's valid to use those data to say that the patients in this
particular study were seemingly showing more severe responses to
chromium.  But it ends there.

		It is really not appropriate to use those data in a quantitative way
to begin to normalize this data.  So it informs us as to the overall
responsiveness in a qualitative way, but it is in no way appropriate to
be used in the ultimate calculation of the MET 10.

		DR. FISHER: Thank you, Lois.

		Let's see, Kannan?

		DR. KRISHNAN: I guess I'm basically in line with what we heard from
our colleagues, and I may be repeating some of what we heard.

		In my reading it's a scientifically sufficiently sound study to help
estimate a safe level for chromium six and ACC.  The current MET 10 is
derived from the Nethercott study.  The present one uses repeat open
application testing using realistic exposure scenarios, and these
obviously enhance the scientific basis.

		And the rationale of the study I thought was clearly spelled out.  And
the total dermal dose applied through this technique is much lower than
the RfD, and the dose selection was accordingly justified as well.

		I'm not so concerned about the pooling of the irritant with the ACD
responses.  I think it's along the lines of what I learned as well.

		However, I am also concerned about the treatment of the historical
data.  If with time people's sensitivity or the response level has
changed, then lumping is not appropriate - because that is one of the
responses we heard: previously they saw more of plus one, now there is
more of plus three.  But then why compare one with the other?  There is
kind of a counter-argument to that.

		If the data do suggest a possibility of homogeneity, then maybe they
should be pooled before normalization instead of separating them out -
making the new data part of the old database, if that's justified.

		And then do the normalization, rather than separating one and
comparing one with the other.

		I'll leave it at that.

		DR. FISHER: Thank you.

		Okay, if it's okay I'm going to ask Sean to start with the ethics.

		DR. PHILPOTT: Okay.  Of course one of the key aspects of whether any
study is fundamentally ethical is whether or not it is scientifically
valid to answer the question.  So I'm going to defer to my colleagues on
that part.

		I'm going to address just a couple of points.  I believe that Mr.
Carley's ethics review was quite in depth, and I agree with the majority
of the comments that you had. 

		I just wanted to highlight a couple of key things.  I think with
respect to risk that really the study does represent minimal risk to
participants in terms of harms that may relate from exposure to the
compound.

		In particular you've got the exclusion of pregnant and nursing women,
even women who may not be aware that they are currently pregnant by
having them take an over-the-counter pregnancy test prior to enrollment.

		It also seems to me that excluding individuals who are receiving
immunosuppressive or steroid medications, or who have recent or
concurrent dermatological conditions, also helps to minimize potential
risks.

		Because you are using doses of hexavalent chromium for the ROAT that
are considerably less than the standard exposure dose used in patch
testing for skin allergies, and the allergic response induced really is,
as Dr. Liccione pointed out, similar to poison oak or poison ivy.  And
this allergic contact dermatitis is generally mild, particularly in this
case with the limitation to one-square-centimeter patches, and can be
treated pretty easily with something simple like calamine lotion or
hydrocortisone cream.

		And the use of patch testing itself, I might point out, is considered
to be one of the generally accepted definitions of minimal risk for
contact dermatitis study.  So the fact that we are using doses
considerably below that should also qualify as minimal risk.

		It would have been nice if, as with the Nethercott study, they had
used an escalated-dose approach to the testing, but I agree with Dr.
Maibach's comments that while that would have been ideal, it would
probably be pretty difficult to do and would definitely involve some
logistical nightmares.

		I do have a couple concerns however regarding equitable recruitment of
subjects.  It appears to me as though the selection of the control group
was based primarily on a sample of convenience, and that should never be
used as a consideration when designing a study of this type.  It would
have been more appropriate since they had access to this data to contact
other patients of Dr. Fowler's who are known not to be sensitive to
chromium and invite them to participate.

		HIPAA concerns aside, I'd like to point out that informed consent
isn't a document.  It's not a period in time.  It's a process, and I
have some problems with the notion that there was this pre-participation
phone interview which collected some potentially stigmatizing medical
information such as immunosuppression, yet there doesn't seem to be any
documentation of verbal consent for the collection of that information.

		Ideally I would have liked to have seen at least documentation that
the people contacted by phone verbally consented to provide that
information prior to coming in for the initial screening by Dr. Fowler
and his colleagues.

		Another question that can be raised is whether or not the amount of
monetary compensation, which is up to $1,215 total, reaches the level of
undue influence.

		We could probably go around this table and everyone could come up with
a different answer as to what dollar amount would be undue influence.
Given that you are asking individuals to travel to a dermatology clinic
13 times over a several-week period, I don't think there is any evidence
to suggest that this level of monetary compensation constitutes an undue
influence, but some of my colleagues may disagree with me on that.

		Regarding the idea of a noncoercion addendum for the control subjects
who were employees of Dr. Fowler, every time I see these they make me
laugh, because if the person can be coerced into entering the study,
they can be coerced into signing the piece of paper that says they
weren't coerced.

		So it would have been nice to have excluded the employees and once
again to have had a much more equitable selection of the control group. 
However, I don't believe that this particular deficiency makes the study
fundamentally unethical, nor do I believe that there is clear and
convincing evidence that any of the deficiencies I've talked about could
have either resulted in serious harm or seriously impaired the informed
consent process.

		DR. FISHER: Thank you, Sean.

		Richard.

		DR. SHARP: Well, thanks, I don't have to add very much to that, so
I'll be very brief.

		My comments were twofold, that if you wanted to think about what were
the major places where the study might be faulted in some ways, or where
is it vulnerable to ethical criticism, I think that there are two
places.

		The first has to do with whether or not the researchers were
collecting some types of research information prior to having obtained
informed consent.

		There are still some ambiguities about whether or not that telephone
script was sufficient to qualify as a type of pre-consent or the first
step in the consenting process that Dr. Philpott is alluding to.  So
even though they may not have had that official signature on the form,
in some ways there was an implied consent or a presumption of consent
there.

		The second question that came up for me was whether or not the study
might be vulnerable to critique on the grounds that it didn't have an
acceptable risk-to-benefit ratio for those control subjects.  And here I
think that different IRBs that might be asked to review a study like
this might actually reach different conclusions.

		So my own assessment of this is that there is room for considerable
disagreement, reasonable disagreements amongst IRBs that might be asked
to review studies like this.  I think that the IRB that looked at this
protocol clearly made decisions that were within the scope of what we
would call generally reasonable practices for IRBs.  So I don't think
that any of the ethical considerations reach the level that we would say
this was significantly deficient with regard to its treatment of ethical
issues.

		DR. FISHER: Thank you, Richard.

		Jerry?

		DR. MENIKOFF: I generally agree with the comments of my prior
colleagues.  I'll be very brief.

		Just in terms of informed consent, these are issues we've raised in
terms of other studies.  First, when it described the risks, I thought
the consent form was sort of wishy-washy: you might, this might happen
to you.  In fact the study was designed in a way that they were hoping
that at least some of these people would have significant skin
reactions, and they should make that clear.

		And the second point - again we've made this earlier - the intent of
the study was probably to allow more of this stuff to be used, and the
subjects had a right to know that, as opposed to thinking that a primary
purpose was in fact to make the standard stricter.  That was highly
unlikely to be the case.

		But by and large I think this study was relatively ethical, and
certainly doesn't meet the two standards for being declared unethical
that we are applying here with respect to retrospective studies.

		DR. FISHER: Okay.  So what I'm going to do is, I'll summarize what I
heard as the science, and then we'll discuss that if there are any
disagreements or amendments or corrections to how I said it.

		In terms of the strength of the study, it was of fairly high quality. 
The purpose was clearly stated.  There was no power analysis - the
investigators used their judgment - but the sample size did seem
adequate.

		The repeat open application is a better testing measure than the
patch.  They used an appropriate metric, and the dose level appeared to
be thoughtfully constructed and had a good range.

		With respect to the concerns, the recruitment of controls as
employees, and all female, was a concern, especially since the one-plus
and two-plus rankings seemed to indicate a gender difference even though
the literature suggests there is no gender difference.

		The fact that all observations were by one individual who was not
blinded - who knew the hypotheses, knew the doses, knew who was a
control and who was not - does not constitute quality assurance, and
could be problematic.

		Combined with this, the distinction between irritant and allergic
response is questionable, especially since the controls did not have the
irritant response.  It is not clear at the low level if this is a
legitimate distinction, and a further concern was the investigators'
inability to interpret the data.

		So putting together the issue of a single experimenter observer as
well as the problems in distinguishing between irritant and allergic
response, then a more conservative and perhaps recommended way of
looking at the data is to combine the two so that as soon as there is
redness or something like that, that would be considered a sign of a
response or a potential allergic response.

		Another issue of concern was the use of the Northern database in the
normalization and how it was used.  The agency is advised not to drop
the actual empirical data in favor of this database.

		In addition there may have been an inappropriate use of the historical
database.  There was some argument that it is valid to use it
qualitatively to compare whether or not individuals are similar in terms
of cohorts, but not to use it quantitatively to normalize the data.

		And then there was also the suggestion that maybe, if in fact
normalization was desired, the new data be incorporated into the older
database before normalization would occur.

		So that's what I think was said about the science.  Anybody have a
comment?

		Yes.

		VOICE: Just to reiterate that there was a difference in the patch test
method used in this study as compared to what was typically used in the
database, the 12 millimeter versus 8 millimeter, and it's arguable that
that didn't make any difference.  It's also arguable that it did.

		So I think it just raises the uncertainty associated with using that
database in a quantitative manner as an adjustment factor.

		DR. FISHER: Michael?

		DR. LEBOWITZ: My interpretation of combining the new data with the old
data was that if the old data were to be used at all, what one could do
is add the new data to it, make it Gaussian - which is modeling - and
then calculate the MET 10 percent from that.

		But that's just an option, and I'm not sure based on the other two
comments that you want to recommend it be used at all.

		DR. FISHER: I know there were suggestions for how to pool it, but it
does sound like, because of the patch dosing or whatever it's called,
that maybe that would not be legitimate.

		Yes.

		VOICE: Dr. Krishnan will have the possibility of refusing these
databases, but I just think I lean much more strongly toward Lois'
suggestion that it's just simply not appropriate.

		You've got these data.  You did a study.  This is what it told you. 
And if I do a study in my lab with a bunch of rats, and everything is
higher than it used to be, I don't go back to all my other lab studies
and mix - I'm talking about very objective data, which this is not.  I
think it's totally inappropriate.

		If the old data - I think we'd have to regard it as suspect even
though it's in a databank anyway.  If it's so reliable why did we need
to do this study in the first place?

		VOICE: May I?  Let me suggest that the wording should be that for
multiple reasons the old patch test data is not appropriate to use to
normalize the new data.

		VOICE: For the purpose of calculating a MET.

		VOICE: For the purpose of calculating a MET.

		Would that satisfy?

		DR. FISHER: Everybody?  

		VOICE:   All of us are saying the same thing in different terms,
trying to look at the alternatives and so on.

		DR. FISHER: Okay.  So from what I gather, then, this is our
recommendation.  If we were to answer the question, the agency has
concluded that the study contains information sufficient for assessing
human risks resulting from potential dermal exposure to wood treated
with ACC, containing hexavalent chromium.  And our response is that it
could be useable, but we're recommending that the irritant and allergic
responses be combined as first evidence of a reaction, because there is
no way to know when one began and the other ended; and that the data be
taken as is and not be subjected to normalization based on the old
database, because for various reasons it's inappropriate.

		Okay.  All right.

		DR. KRISHNAN: Again, we haven't really seen the old database in the
context of this report.  For the value.

		DR. FISHER: From an ethics perspective it seems that the study is
scientifically valid and offers scientific benefit.  The risks appear to
be minimal to participants in terms of harm from the compound exposure.

		The exclusion factors were good and protective.

		The dose of hexavalent chromium was less than usual doses, and
allergic contact dermatitis can be usually treated.

		An escalating dose might have been ideal but difficult to do.  And
while people might disagree there is no evidence that the compensation
was an undue influence.

		Some of the ethical concerns were about subject recruitment.  This was
a convenience sample of employees, and it might have been better to have
had other patients of Dr. Fowler's as more appropriate controls.

		With respect to pre-consent, we are unable to determine whether or not
the telephone script was adequate to appropriately inform individuals
about the type of information that they were providing over the phone. 
The informed consent could have been more straightforward with respect
to risks.

		And to risks given that risks were anticipated - not risks, but
reactions were certainly anticipated.

		I don't think we place much value in the noncoercion addendum as
evidence of lack of coercion for reasons that Sean spoke about.

		And there is some concern about the lack of an independent
risk-benefit assessment for the control subjects and for the other
subjects, which I think is something that is becoming more popular in
terms of component analysis.  For protocols that are going to be coming
out, we might recommend that the risks and benefits to both groups be
evaluated separately, and not just combined.

		So based on that it appears that the ethical recommendation would be
that there is no evidence that the study was either fundamentally
unethical, nor that there were significant deficiencies that could have
resulted in harm to the participant or deficient informed consent.

		Any discussion?  Okay, that's our recommendation.

		VOICE: May I just make one slight change to the recommendation?  I
don't think it's that the control might have been chosen differently; I
think it should have been chosen differently.

		It's not clear that given that we have the study before us it makes a
demonstrable difference in the outcome.  But if it were to be designed
now I don't think it would be acceptable.

		DR. FISHER: Right.

		VOICE: In other words would you want to say that there should be more
controls?  And better gender or demographic mix or something?

		DR. FISHER: Well, I think we've made it as a criticism.  Whether it
should be better, it's a study that was already done.

		I think the criticism is there, I think the ethical issue part of it
is really the issue of coercion, because they were employees, and we're
not saying they were coerced.  But there certainly was the potential
there that is not remedied simply by asking them also to sign a
noncoercion form.

		I don't think we take a vote.  What do we do?  Are we done?

		Okay, so any other comments?  We have a consensus.  Yeah.  Okay, very
good.

		We're on to insect repellent, and I don't think I'm going to give my
little spiel again, so we're not redundant either, if that's okay. 

		But clearly if there is an important question, something you think you
want clarification right away, then just let me know.

		All right. 

	SCIENCE AND ETHICS OF IR3535 INSECT REPELLENT

	PRODUCT EFFICACY PROPOSALS

		MR. CARLEY: Thank you very much, Dr. Fisher.

		My colleague, Dr. Clara Fuentes, and I will present these two
protocols.

		These - I will kind of wrap up what's happened since the June meeting
where you saw them before.  The subtitle of that is how Scott Carroll
(phonetic), John Carley and Clara Fuentes spent their summer vacation. 

		And then we'll do the - present the science and ethics reviews of each
of the two protocols and for the first time in our presentations to you,
this time we have done an integrated review, where we both used the same
framework and resolved any differences, so that we didn't have to both
characterize the design of this study and things like that.

		In that way the review framework no longer simply defers on the
science questions, as you have seen in the ones that I've done in the
past.

		Ah, Dr. Clara Fuentes has officially arrived along with her name tag.

		Bottom line: the protocols that were submitted by Dr. Scott Carroll,
of Carroll-Loye Biological Research, describe two studies of repellent
efficacy of three new formulations of IR3535, which is an already
registered active ingredient with demonstrated repellent properties.

		These are the protocols that you saw in the June meeting, but they
have been very substantially revised since that meeting.  EPA now thinks
they meet the appropriate scientific and ethical standards, and we seek
your advice.

		What's happened since June?  Well, immediately following the board
meeting in late June, Dr. Carroll substantially expanded and revised the
protocols based on what he heard at the meeting, and resubmitted them to
us and to the IRB informally a couple of weeks later.

		Then while he was working on that, we took what we'd gotten from the
presentation of criteria, and we substantially revised the framework
that we were going to use for assessing protocols.  So we asked a lot
more questions within the same basic headings that we had adapted from
the Emanuel (phonetic) framework in the first place.

		And what we've gotten to with this expanded framework is quite distant
from the Emanuel original, because we've done a lot of adapting and then
further adapting.

		So I think it is best thought of now as EPA's current iteration of the
framework for assessing protocols.

		We applied that revised framework to the resubmitted protocols and
turned around the preliminary reviews that we got back to Dr. Carroll in
August. 

		We used as I say the new framework for these reviews.  We will discuss
with you tomorrow the framework itself in the context of the draft PR
notice.  So we should defer that discussion, if you've got some issues
with the framework, until we're looking at it tomorrow.

		After we turned our preliminary reviews of the revised things back to
Dr. Carroll, he made some more changes and then formally resubmitted the
protocols to the IRB and to the California Department of Pesticide
Regulation.

		As you will recall from the discussion in June, there is a California
state regulation that requires that any pesticide exposure study
conducted within the state of California has to be approved in advance
by the California Department of Pesticide Regulation after IRB approval.

		The IRB then reconsidered the protocols that were submitted, and
responded to Dr. Carroll's request for all of the documentation of the
IRB business by sending all of the records of the meetings at which the
protocols were considered.  But their procedures, which are among the
requirements of 1125 for documenting your work, they submitted directly
to us under a claim of confidentiality.

		And we will discuss that, the general phenomenon of claims of
confidentiality, in more detail later.  I don't want to digress to
address it now.

		Then Dr. Carroll pulled together all the information required by
26.1125 except those IRB procedures subject to the confidentiality
claim; resubmitted the packages to us; we updated our preliminary
reviews, considered the last stuff, and tried to get all of this in by
the deadline for submitting stuff for your consideration on the CD.

		There was about a week there in early September that was so busy we
may have overlooked some details.  We don't really think of you as
proofreaders, but you may have caught some things we didn't.

		So that's how we got to where we are now.  And we're now going to talk
first about the study labeled EMD-003, which is a laboratory test of
tick repellency, and here's what we sent you, and here is what our
review is based on.

		There is a transmittal document.  It happens to be addressed to me
personally, but it was the one by which Scott Carroll sent the bundle
in.  Then the protocol.  Then a few corrections to typographic mistakes
a day or two later.  Then we have in this package training materials for
subjects covering tick handling, and how-to-spray-your-arm kind of
stuff, so they could figure out what a typical consumer does.

		We have the IRB review documents of this revised protocol.  We have
the IRB's correspondence record.  In every case both protocols were
submitted, both responses returned at the same time, so there is only
one correspondence record that applies to both protocols.

		And then we have our science and ethics review of the study.  And
Clara is now going to continue with a discussion of her science
assessment.

		DR. FUENTES: Good afternoon.

		In the revised protocol, this test is a preliminary study to determine
the amount of test material that will be applied for the efficacy
testing in both studies, the tick and the mosquito studies, later on.

		The safety - I want to point out that the safety and composition of
the test material is known, and the composition is confirmed by
analytical methods prior to and following the efficacy studies.

		The active ingredient is of low toxicity.  The inert ingredients are
in the 4A and 4B EPA list, which are considered relatively safe
ingredients.  Other ingredients are cosmetics.

		The protocol also includes MSDS documentation.

		The products are produced under good manufacturing practices, and the
ingredients are not reactive.

		The materials are kept in Dr. Carroll's lab with a chain of custody
documented.

		The repeated applications during the dosimetry study are of short
duration, and the test material has been tested on animals for toxicity.
 This data will be submitted to EPA for registration later on.

		The sample size of 12 subjects, with replication of three reps
(phonetic) per treatment, is justified, and I will be talking about that
later on in the presentation.

		Each subject is his own or her own control.  The variables measured in
this study would be the forearm skin surface; self dosing behavior of
the subjects; and the weight of the test material that is applied.

		The dosimetry data will be analyzed using the Friedman (phonetic)
two-way analysis of variance test.  The dose will be calculated on the
basis of treated skin surface area.

		The objective of this laboratory study is to measure the efficacy of
three formulations proposed for registration against Ixodes scapularis,
ticks, which are a vector of Lyme Disease.

		The end point will be the average time to repellency breakdown,
expressed as the average time of complete protection from application of
the formulation to the first confirmed crossing, with the purpose to
fulfill EPA's registration requirements for new products that claim
repellency against human health tick pests.

		The study now includes an initial training and steps to determine the
average amount of material applied, and that's what I just said about
the dosimetry study.  And this calculation of dose tries to estimate the
typical amount that will be applied by consumers.

		The sample size in this study is 10 replications per treatment, and as
I said before the sample size is justified; I will be talking about it
later.

		The protocol also discusses the rationale for controls.  Positive
controls are eliminated from the study design, and the negative controls
are replicated to 10, and they are used mainly for prescreening of the
tick (phonetic) questing behavior.

		The negative control is an untreated arm of the same treated subject. 
It's treated in the same way as the treated arm, except that the
negative or untreated arm does not have the formulation present.

		And a negative control having the formulation matrix without the
active ingredient is not used because, as the protocol explains, it's
beyond the scope of the study; so basically the negative control is just
an untreated arm.

		The negative controls in this study are not used for statistical
comparison of treatment means, because that is not the objective of the
test.  All treatments are randomly and blindly assigned to subjects.

		As for the statistical analysis - for estimation of dose rate in the
dosimetry study and for determination of the complete protection time in
this study - the data is analyzed using descriptive statistics: the mean
within a 95 percent confidence interval.  It will be reported also with
the associated standard deviations as a measure of uncertainty.

		So the principal changes in the revised protocols are: the
incorporation of the general description of recruitment and informed
consent, which was formerly in a supplemental protocol, CL 001, and is
now part of the revised protocol.

		The revised protocol also adds a new preliminary phase to establish a
typical consumer dose in the dosimetry study; expands the discussion of
risk, risk minimization, and benefits; expands the discussion of sample
size and the statistical concerns; changes the primary end point to time
to first confirmed crossing; eliminates positive controls; and
incorporates training materials for subjects in the dosimetry phase and
in handling ticks.

		So in response to the comments made by the board originally, they
revised the protocol specifically as to the following points.

		The inadequacy of controls: for that, the positive controls have been
eliminated, and the untreated arm is treated identically the same way as
the treated arm on the same subject.  This type of design reduces
among-subject variability.  And the use of the matrix formulation
without the active ingredient as a negative control is not adopted, but
the protocol also explains why.

		The negative arms are used for preselection of the active ticks prior
to their exposure to the repellent, and the untreated arms are not used
for statistical comparison of treatment means.

		The results are summarized, and expressed, as a mean protection time
within 95 percent confidence interval with the associated standard
deviations as measures of uncertainty.

		The rationale for sample size, which is explained and justified in the
revised protocols, is that the sample size reflects a compromise between
financial and ethical concerns.

		The sample size is hard to predetermine without knowing the
distribution of the outcome values.  EPA guidelines recommend six
replications based on adequate performance of most products.  Six
replications have been widely regarded as sufficient to show a
statistically significant difference between means, at a P value of 5
percent significance level.

		Ten replications do likely improve accuracy in estimation of the
population mean, and each additional subject beyond 10 increases
precision of the mean progressively.

		The upgraded protocol provides justification for this and also cites
literature supporting this argument.

		In compliance with the scientific standards, this protocol is likely
to yield scientifically reliable information, because it satisfies the
following scientific criteria from the framework recommended by the
board.

		The study would produce important information that cannot be obtained
except by the intended research, and also because the formulations being
tested have not been tested before.

		The study has a clear scientific objective, and explicit hypothesis,
and the study design, subject selection, sample size, dosing, quality
assurance and quality control should produce adequate data to test the
hypothesis.

		And this study is conducted under good laboratory practices, and it
would be audited by an independent quality assurance unit.

		And that concludes my presentation up to this point.

		MR. CARLEY: Continuing with the ethics assessment of the same
protocol, which is the lab study of tick repellency. 

		The proposed research would test the efficacy of three new
formulations of the previously registered active ingredient, IR3535. 
Efficacy testing is, as Clara pointed out, an EPA requirement for
registration of these products, and the requirement applies to the
product as formulated and sold.

		The changes in these formulations from previously registered
formulations are to make the products cosmetically more appealing to
users, and therefore, to make users more likely to use the product.  So
the efficacy question is not about the active ingredient.  And therefore
there is no reason to test the efficacy of the matrix exclusive of the
active ingredient.

		The question being investigated is whether the product as formulated
works to repel ticks.

		It's important to understand the efficacy of these new formulations,
because consumers rely on the fact of EPA registration to inform them
that it's likely to be effective.

		Subjects were recruited among communities of friends, neighbors and
scientists near the laboratory, and that gave us all some concern when
we looked at these protocols before.  This is an area that has been
heavily beefed up.  There are explicit exclusion factors applying to
children; pregnant or nursing women; those in poor health or physical
condition; those unable to speak English; and, particularly important,
students or employees of the principal investigator.

		Anyone who is dependent on him either academically or for employment
is excluded as a candidate to be a subject.

		Bottom line assessment from our side is that none of the subjects who
met these criteria would be from vulnerable populations.

		Now the sample is, because of these factors, not perfectly
representative of the broader population that is ultimately likely to
use tick repellents.  There is not much that can be done about that,
because the broader population that will use tick repellents is likely
to include children, pregnant and nursing women, and people who are
compromised in some other respect.

		So as far as we can tell a more perfectly representative sample would
be neither practical nor ethical to use for this sort of research.

		The risks were characterized qualitatively - remember this is a lab
study with ticks - as possible irritation, headaches, dizziness or
temporary stomach distress from exposure to the test materials.  That is
risk category one; it is the materials themselves.

		And the other characterization was possible exposure to
arthropod-borne disease.  There were three general categories of
exposure characterized - or of risk characterized in the mosquito study
which I - the third one being exposure to arthropod bites.

		And I was interested that in this presentation the qualitative risk of
disease was identified but not the risk of bites.  And that seemed to me
like maybe that was backward.  These are lab-reared populations of ticks
that are disease free, but then it doesn't make much difference either
way because the probability of these risks is characterized quite
accurately as extremely small.

		Skipping over the first bullet, what we know about the materials is
that they are very unlikely to cause effects.  But going to this
question of the risks of bites or disease, there is - we're using
disease-free ticks, and the design of the research is such that the
ticks are going to be gone long before they have time to embed
themselves.

		So the probability of either of those two categories of risk is
negligible.

		The risk-benefit ratio, there is no direct benefit to subjects, and
there is - I agree with the characterization in the protocol that there
is a low increment of risks to subjects.  And that it is offset by the
potential societal benefits of adding an appealing and effective
repellent to the means available to consumers to protect themselves from
tick bites.

		The ethics review: the IRB, this is the same IRB, Independent
Investigational Review Board, Incorporated, of Plantation, Florida; it
is the same one that had previously looked at the earlier generation of
these protocols that you saw in June.

		They reviewed and approved the protocol and the informed consent
materials.  They are independent of the sponsor, EMD, and the
investigators, Carroll-Loye, and the IIRB is registered with OHRP.

		I will mention in passing that their registration lapsed in June, and
according to the OHRP website there is nothing to indicate that it's
been renewed.  I haven't been able to verify exactly what the status of
that is. 

		The protocol includes a very extensive, and I found it entirely
satisfactory, description of the recruiting and consent processes.  As
Clara mentioned, this is based on the material that was formerly in the
supplemental protocol.  It's now been incorporated into the specific
protocol, and considerably expanded and clarified.

		All of the IC materials which used to be separate have also now been
incorporated into the protocol as approved by the IRB.

		The methods proposed for managing information about the prospective
and enrolled subjects will ensure their privacy is not compromised, and
that includes information about the results of the screening pregnancy
tests.

		Subjects will be free to withdraw at any time, and will be reminded of
this at several points in the course of their initial discussions and
recruitment, and then when they come in for the dosimetry study, and
again when they come in for the actual efficacy study.

		And medical care for research related injury will be provided at no
cost to the subjects.

		The applicable ethical standards for this protocol, it's a proposal
for third party research involving intentional exposure of human
subjects to a pesticide with the intention of submitting the resulting
data to EPA under the pesticide laws.

		Therefore the primary ethical standards applying to this research are
40 CFR 26 subparts K and L.

		K of course includes the characterization of everything that needs to
come in with the protocol.  It also is the part that extends the
provisions of the Common Rule to third party data.

		L is the prohibition against intentional dosing research with children
or pregnant or nursing mothers.

		A point by point evaluation of how this protocol addresses each of
the requirements in subpart K, section 1125, and L, section 1203, and
the additional criteria that were recommended by this board in their
June meeting appears as attachment one to the EPA review of this
protocol.

		In summary we concluded that all of the requirements of 26.111
concerning the basis for the IRB judgment that it was acceptable; 1116
addressing the requirements for the elements of informed consent; and
1117 concerning documentation of informed consent; all those
requirements are satisfied.

		We concluded that the submission to us was complete in the sense that
all of the requirements of 26.1125 were met.

		You have seen everything that we got on this with the exception of the
one item, the procedural package that was claimed to be confidential.

		At the June meeting the ethics subcommittee recommended specific
attention to NAS recommendations 5.1 and 5.2.  We reviewed this protocol
against the specific elements in those items, and concluded that they
were satisfied as well.

		Charge questions, which I think we will skip for now, because we are
going to keep going and run through the other protocol.  The charge
questions are the same except for the substitution of the names of the
protocols.

		I do want to say at this point, be sure to understand that subpart M
does not apply to the review of protocols.  It concerns only the
requirements for documenting conduct of completed work that is submitted
to us in the form of a report, such as the red study that we were
talking about this morning.

		K and L are the only subparts that apply to protocols.

		And now we'll move on to the mosquito study.  This is what our review
reflects.  

		It's a similar set of data, differing only in the penultimate bullet,
the California DPR approval of EMD-004.  This was in hand at the
time that Dr. Carroll sent us this package in early December, and the
CDPR approval for the other study was not yet in hand.  It's come in
since; I think he sent it to the docket.  In any event, while it's a
requirement of California that that approval be in hand before the work
begins, it's not specifically a requirement of our rule or something
that you have to have to have a complete data set.

		And now Dr. Fuentes again.

		DR. FUENTES: This is a field study for testing the repellency of three
formulations against wild populations of mosquitoes.  

		The test sites are the California Central Valley and the Florida Keys.

		The end point is the average duration of complete protection time,
from time of application of the test materials, to the first confirmed
bite.  The first confirmed bite now has been modified.  The concept of
biting has been changed to landing with intent to bite in order to
reduce risk from being bitten in the field.

		The study includes preliminary training in the lab for test subjects
to learn how to handle mosquitoes prior to exposure in the outdoors. 
Subjects will be randomly and blindly assigned to the treatments.  There
will be three treatments replicated 10 times.

		The subjects will be arrayed in pairs in order to facilitate
observations.

		Negative controls are not replicated beyond two.  These are not used
for a statistical comparison of treatment means; this is also a way to
reduce exposure in the field.

		These two untreated subjects are experienced in field biology and
entomology, and they will quantify the biting pressure in the field. 
There are no negative controls using the problem matrix, as John already
explained.  And the exposure to mosquitoes is limited to one minute
every 15 minutes.

		The threshold for biting pressure is one landing per minute as
recommended by the EPA guidelines.

		The test sites, as I said, are the California Central Valley and the
Florida Keys.  The expected wild populations of mosquitoes found in
these areas are the species Aedes vexans, Ochlerotatus melanimon,
Ochlerotatus taeniorhynchus, and Culex pipiens - whatever, I know who
they are, but I can't pronounce them.

		The variables that will be measured are the biting pressure, the
number of landings with intent to bite per minute, and the first
confirmed bite, and the time to first confirmed bite, which is also the
protection time or complete protection time.

		The test results will be analyzed using descriptive statistics.  The
untreated controls are not used for comparison with treatment means,
just for measuring the biting pressure in the field.  And the
measurement of uncertainty is a 95 percent confidence interval of the
mean, with its associated standard deviations.

		The principal changes in the protocols, the revised protocol now, are
also the incorporation of the general description of equipment and
informed consent, which formerly was in supplemental protocol CLCS01.

		The revised protocol adds a new preliminary phase to establish a
typical consumer dose.  This is the dosimetry study explained before.

		It expands the discussion of risk, risk minimization, and benefit.

		It expands discussion of sample size and the statistical concerns like
the power of the test.

		It changes the frequency and duration of exposure to reduce risk to
untreated controls; eliminates positive controls from the design; and
incorporates the training materials for subjects in the dosimetry phase,
and also in aspirating landing mosquitoes.  Before going into the field,
the subjects will be trained in the lab using mosquitoes that are free
from pathogens, for them to learn how to handle them and how to aspirate
them before the subjects go outside for the field test.

		So in response to the comments made by the board before, the revised
protocol addresses the following points: 

		The positive controls, in terms of inadequacy of controls: the
positive controls now have been eliminated, and with them the relative
protection time measure of repellency has been eliminated.

		Two untreated subjects are used to monitor the biting pressure
following the EPA guidelines' recommendations.

		The dose applied is predetermined through the dosimetry study.

		The results are presented as an average period of complete protection
from time of application of the test material to the first confirmed
landing with intent to bite, which indicates the breakdown of
repellency.

		The mean will be reported with a 95 percent confidence interval and
its associated standard deviation for measurement of uncertainty.

		The rationale for sample size in this study is the same as for the
dosimetry study and EMD-003.  The sample size represents a compromise between
financial and ethical concerns.  It's hard to determine without knowing
the distribution of the outcome values.

		EPA guidelines recommend six replicates based on the adequate
performance of most products.  And six replicates have been widely
regarded as sufficient to show statistical significance at a P value of
less than five percent.  Ten replicates gives improved accuracy in the
estimation of the mean, and each additional subject beyond 10 increases
the precision of the mean progressively.

		And as I said before the protocol includes citations from literature
supporting this argument.

		In compliance with scientific standards, this protocol is likely to
yield scientifically reliable information because it satisfies the
following scientific criteria, and these are taken from the framework
recommended by the board.

		It would produce important information that cannot be obtained except
by research with humans.

		It has a clear scientific objective and explicit hypothesis.  The
study design, subject selection, sample size, dosing, quality control,
quality assurance, should produce adequate data to test the hypothesis.

		And again I want to add that this study is also conducted under good
laboratory practices and standards.

		MR. CARLEY: And we're rounding the final turn.  This is going to go
fast.

		And at this level of detail there are very few differences between
the presentations for 003 and 004.

		There are no differences in this area: we require testing.  People
rely on our regulatory reflection of the results of that testing.  And
this is a well designed test to serve those purposes.

		Again the exclusion and inclusion factors are good, and they address
the concerns about people who might have been dependent on the
investigator.

		But one of the results of this is that we don't have a perfectly
representative sample, but it would be impractical or unethical, or
both, to include a fully representative sample.  I think this is a
good enough one, a typical enough one, to use for the purposes of the
study.

		On the qualitative risks, there were three listed for this study,
including the one in the middle, exposure to biting arthropods, which
was not listed for the lab tick study.

		Again, the really significant point is that the probability of any of
these risks is extremely small.  Appropriate steps have been taken to
minimize both exposures and accidents.

		One of the things that Clara mentioned was that the untreated subjects
in the field used to verify biting pressure would have two attendants. 
What those two attendants are going to be doing is making sure there
aren't any landings that nobody notices so that those bugs get aspirated
before they bite.  So there's been some really good work done here in
reaction and response to your discussion in June to make appropriate
changes in the design to minimize the risks to all the players including
those who are not actually testing the material, but are just doing -
verifying biting pressure.

		There has been a new expansion with this training on aspirating
landing mosquitoes in the lab, so that they can expect to be good at it.

		Something that I forgot from time to time and had to remind myself of
when I was reviewing this is that once you get out there in the field,
the exposures are so short that you are not talking about having to
aspirate a flock of mosquitoes really fast.

		You are also not assuming that there is some efficacy to the
repellent.  Most of the time you are going to have no landings with or
without the intent to bite.  And it's only as the repellency breaks down
that there are going to be any landings, and by putting the subjects in
the field in teams to watch each other with only just the treated
forearm exposed, there is a very low probability that people are going
to get bit.  And if they don't get bit they are not going to get West
Nile or whatever other dreadful disease these mosquitoes might be
carrying.

		Same calculus here, there is no direct benefit to the subjects, but
there is a very low risk to them, and a potential significant benefit to
society if we have another effective attractive repellent in the armory.

		These comments are unchanged from the ones I made for the previous
study.  These are also unchanged from the ones on the previous study,
and so are these.

		So we're getting down to the standard, same thing.  It's a proposal
for third party research, intentional exposure, pesticide, intent to
submit under the pesticide laws, so it's subject to subparts K and L but
not M, and we ran down every one of those requirements in the appendices
to the review and concluded that all of the applicable requirements were
satisfactorily addressed, including all of the elements of the two NAS
recommendations, which now brings us to the charge questions, and you
can in your mind's eye read the same thing only changing all occurrences
of the character - no, not all occurrences - all occurrences of the
string, 004, to 003.  Then you'd have all four charge questions.

		Does the proposed research appear likely to generate scientifically
reliable data useful for assessing efficacy of the test substances?  And
does the proposed research appear to meet the applicable ethical
requirements of 40 CFR 26 subparts K and L?

		Oh, there is - please, back up, that one.  I had forgotten about this
one.

		This is something that we want your feedback on, how much of it comes
in this context, and how much of it comes when we talk about the PR
notice tomorrow, it doesn't really matter.

		But what we were trying to do is figure out how to meet both our needs
and yours by using this structure to pull bits together, and we want
some feedback from you on how well it worked.

		DR. FISHER: Thank you, John.

		Just so you don't leave, I think we may be getting to the PR guidance
today and combine those two questions.

		MR. CARLEY: I wouldn't dream of leaving before you all do.

		DR. FISHER: I know.

		Any questions?  Yes, Jan.  Jan and Lois. 

		DR. CHAMBERS: Do you want to do 003 first?  Because my question is on
004.

		DR. FISHER: I don't think it matters.

		DR. CHAMBERS: Okay, I had two questions about the protocol.  One, this
is the mosquito protocol, and the study sites will be California and
Florida.  And the recruiting is really only described as I can tell for
California.  And I was just kind of curious as to how are they going to
recruit in Florida?  Or are they going to transport the California
people to Florida to test them?

		MR. CARLEY: I could speculate, but I'd rather not.  Is Dr. Carroll
going to comment?

		DR. FISHER: Carroll's here, so we'll leave that for his presentation.

		MR. CARLEY: If you would save that for him.

		DR. CHAMBERS: The other question I have, and there may have just been
a typo, there was a little confusion on correcting, on page 37 where you
describe complete protection time as the first confirmed landing with
intent to bite, followed by another one within 30 minutes, and the
numbers just don't seem to make any sense in the example. 

		And I'm kind of confused as to how this is going if those numbers are
accurate.

		So for example, a landing with intent to bite at 20 minutes followed
by another at 35 minutes is not confirmed.

		MR. CARLEY: That is corrected by one of the errata at one of the items
in -

		DR. CHAMBERS: The errata on this one was for ticks.

		MR. CARLEY: There was a correction sheet for each of the two studies. 
And I spotted that same thing after we got the protocol.

		DR. FISHER: Can somebody just point it out so we have the answer?

		MR. CARLEY: Dr. Carroll, can you step forward?

		DR. FISHER: I don't need him to step forward.  I just want the
document that corrects this.

		VOICE: It was in the bundle that we sent to you.

		DR. FISHER: These are the corrections that came with -

		VOICE: Yes, the errata sheet for EMD-004 which is dated 14 September
speaks to the -

		DR. FISHER: The errata on the -

		VOICE: Let me read you the text as it should read on page 37.

		Complete protection time, CPT, is measured as the length of time from
initial application to the first confirmed crossing.

		DR. FISHER: Yeah, that's the thing.  It refers to ticks.

		MR. CARLEY: Dr. Carroll sent the correction labeled EMD-004, and he
corrected the tick study.

		DR. FISHER: But the errata on the tick study is also for ticks.  So I
think I know what it means, but it's inaccurate.

		MR. CARLEY: It needs to be fixed.

		DR. FISHER: Okay, let me ask, do you have something written, Dr.
Carroll?  Do you have your addendum that's written and the one we need? 
No, I know that, and we don't seem to know where it is.  

		Those are the errata on 003 and 004.

		DR. CARROLL: Well, there is a 6 October 2006 letter that says summary
revisions, mosquito repellant protocol EMD-004.  In the form of a letter
to Dr. Fisher.

		VOICE: What we have here, to the best of your knowledge, there is no
correct correction in the files.  There still needs to be a correction
of the original blunder, but it hasn't been fixed yet.

		DR. FISHER: Well, is it going to be possible to have that information?

		VOICE: This is really not important.  It's really procedural.  It's
already been defined for us.  This is only an example that happens to
have one wrong number in it.  So it's trivial.  Let's insist that it be
corrected.

		DR. FISHER: Jan, it is not going to be essential for your delivery? 
Oh, okay, thank you.

		VOICE: I just want to make certain that I understand the dosimetry
link.  So this is going to include 12 subjects.  Are they different than
the subjects who will then be used in the efficacy leg of the study? 

		DR. FUENTES:  Protocol said that it could be the same.

		VOICE: Okay.  And is it the intention that those same 12 would
actually serve for the dosimetry for protocols 003 and 004?

		DR. FUENTES: Well, the studies on dosimetry will apply to both.

		VOICE: Right, so you're effectively doing it once and it will provide
the dosimetry for both studies?

		DR. FUENTES: Yes.

		VOICE: That wasn't clear. 

		There is, and this is perhaps a bit trivial, but in looking at the
dosimetry, I had a couple of questions particularly as it related to the
tick study, because the tick study is intended to be a laboratory study
understanding that the use is in the environment. 

		But the way the dosimetry reads is, it says that it will be conducted
out of doors with a temperature greater than 57 degrees, and no more
than a 7-mile-an-hour wind, and so on and so forth.

		So from your vantage point is that adequate for interpretation of the
tick dosimetry given that that's a laboratory study?

		MR. CARLEY: Remember that the actual dosing in both studies is not
going to be done by spritzing from a can or pump spray.  It's going to
be done by pipetting a measured quantity and then smearing it around.

		The dosimetry study is designed to approximate the typical consumer
dose.  And the presumption is that a typical consumer applying repellant
whether against ticks or insects, is going to be outdoors with
temperatures above whatever, and not do it in a gale.

		So in that sense yes, we think that testing the - doing the dosimetry
phase which is entirely independent of the efficacy phase, doing the
dosimetry phase under those conditions is appropriate for its purpose
which is to approximate the typical consumer dose in terms of mass per
unit area.

		DR. FISHER: David, and then who is it - David, and then Kannan and
then Richard.

		DR. BELLINGER: I just had a question of clarification about 003.  The
phase two, where you and protocol say that the treatments will be
applied blindly.

		I wasn't clear on what blinding meant since everybody is getting the
same dose applied in the same way, what is blinded?

		DR. FUENTES: It explains in the protocol that nobody knows what is
being applied to them.  

		DR. BELLINGER: Which of the three?

		DR. FUENTES: Yeah, which of the three formulations are applied.  There
is a coding of those, but it's not - the subjects don't know.  They
won't let them know.

		DR. SHARP:  You said they are all being applied.  They are not being
applied by the pump or the aerosol in phase two, but by someone with
latex gloves smearing on the skin?

		DR. FUENTES: Yes.

		DR. SHARP:  What does it matter if they're blind?

		DR. FUENTES: No, blind to what treatment they are receiving.  You
know?

		MR. CARLEY: Your specific question: what does it matter?  Not much.

		Dr. Sharp asked if some of the subjects are getting none.  The only
subjects who are untreated in the field study are the two that are used
to verify biting pressure, and they know who they are.

		In the tick study the circumstances are entirely different, because
everyone is testing the questing behavior of each tick on their
untreated arm before they put the tick on their treated arm for the
trial.

		DR. FISHER: Sue, did you want to follow up on this?

		MS. FISH: It is my understanding that the dosimetry studies are
informing the laboratory study as to the dose, and therefore, is it
possible in fact, or probable that the aerosol spray for example may
have a more consistent or a higher dose let's say than a pump spray, or
vice versa, a lower dose, so therefore the blinding is really around
dose levels and possibly some degree of formulation.

		Is that correct?  Or am I wrong here?

		MR. CARLEY: You are on the right track.  There are qualitative
differences in the three formulations that contain different
nonpesticidal ingredients and in different proportions.  So that's one
of the things that people don't know.

		And the other thing is that there may be different results from the
dosimetry phase in the actual dose.  The effective quantity of IR3535
is whatever it is in a typical consumer dose when they spritz it with
an aerosol, when they use a pump spray, or when they smear on a lotion.

		So both of those factors could vary from individual to individual in
the test, and they would not have any idea which of the three recipes
was being applied or whether it was in effect stronger or weaker
solution of repellent than the guy next to them.

		DR. FISHER: Thank you.  Kannan?

		DR. KRISHNAN: A clarification and a question.  Clarification relates
to the statement that I saw more than once that EPA historically
required a minimum of six subjects.  Was that the case?  

		MR. CARLEY: The slide said replicates.  The reason it doesn't say
subjects is that historically if you recall from the discussion of
guidelines at the last meeting, some of these studies are run where each
limb is considered an independent replicate.  So I could go out in the
field and be four replicates all at once, depending on how each of my
extremities was treated.

		(Laughter)

		DR. KRISHNAN: My question relates to -

		(CD Change)

		DR. CARROLL:  - which is to suggest that I am open to any specific
questions you may have.

		I don't have a PowerPoint presentation, which to me feels like a
relief.  Hopefully to you too, but you may have to retune your nervous
systems over the next several seconds to deal with that.

		As John suggested this has been a rich summer for all of us, and I
very much appreciate the input on that from you in June, and the input
from EPA staff.

		In a sense to me this is an ongoing process.  It's been a melding of
cultures.  For many years I have been trying in a sense, and I use the
word loosely, sneak somewhat better science into this deal that
parallels the science that I practice more in my basic research
programs, and I've had some success with that, but this has allowed me
to formalize that more thoroughly, and I appreciate that very much.

		In addition the ethical oversight has been, compared to what I've
experienced, with this group very minor.  Some of you may recall there
was an adjunct protocol, called the generic protocol, TL-001. 
That was the protocol used in subsequent years that was approved several
times by the UCSF IRB, their in-house IRB, and as I've come to
appreciate your discipline more thoroughly, when I began thinking about
it intensively for the first time this year, I realized that for
whatever reason that oversight was not up to contemporary standards. 

		But that was the only feedback I've ever had, and I certainly know
that UCSF Med School is a famous place.  So to me it was sort of well,
this is easy, piece of cake.

		So I appreciated their willingness to rely on my goodwill or good
nature to the extent that I possess it to treat my subjects well, and I
tried to conduct my research responsibly in the context of my home
community.

		And the additional oversight has allowed me to make the process much
more transparent and be more thoughtful about it myself, and I do
appreciate that very much.

		Why don't I just go on to a couple of questions that came up at the
end. 

		One was the monitoring of pathogens in insects.  That is something
that is continuing to develop.  Those data are becoming more and more
available throughout the nation with the concern about West Nile.  The
only things we're doing with respect to that is consulting, mainly with
our university and state vector biologists, to find better ways to
collect those data.

		Our very best approach is to conduct tests against mosquitoes
currently in the spring and early summer, because then the titers of the
viremic populations are very low compared to what can happen later in
the summer.  Specifically we'll take approaches like that.

		In addition to sentinel chicken flocks being used, which
have been used for many years, there is now much more trapping and a
much denser grid throughout the state of California, and other states as
well.

		So we have access to more and more up-to-the-week data, and my notion is
to incorporate those things further into the protocol once I'm sure of
their reliability.

		I thought Dr. Krishnan's comment about the dosing relation to the
NOEL was very cogent.  It was something that is an obvious
thing to include.  Our approach is simply - what is included is based on
discussions we had this year and in previous years with EPA
toxicologists.

		A function of the efficacy data isn't just to show whether the product
works or not.  We anticipate that it will work.  We know from
discussions with EPA toxicologists that we are likely to be well below
the limit unless reapplication is necessary every 15 minutes or so.

		So a very important adjunct, or some would say principal function of
such a test - and I understand this even better now having thought about
it more this summer - is to - when calculating the labeling information,
based upon the duration of the reported efficacy, what we are doing is
really defining the margin of safety as well.  We know how often a
person is likely to have to reapply and therefore that tells us more
about what an actual dose is likely to be.

		DR. FISHER: I think there was a question about Florida and California
recruiting you might want to answer.

		DR. CARROLL: That's also a good point, because that's something that's
not addressed as specifically.  We do have subjects move to remote field
locales with us, but also sometimes we use professional vector
biologists.  And many times when we tested in Florida we have had help
specifically from the Florida Keys mosquito control division from their
staff.  And we use the same consenting procedures, and use the same
consent forms.  And even though we are given permission by supervisory
personnel there to have access to their personnel - this is typically in
the winter when there aren't many mosquitoes in California, and they are
not very busy in Florida, but there are still mosquitoes in the Keys. 
So they have staff that don't have a lot to do.

		But I'm not sure how I should write this more explicitly in the
protocol.  I imagine I might be able to.  But we make it clear to the
supervisory personnel that we want to present ourselves to their staff
providing - presenting a situation that is entirely optional, it's not
part of their work experience.  It's not part of their job
responsibility.

		DR. FISHER: So just to clarify you are recruiting in Florida with
people who are professionals or experienced in that area; you are not
transporting individuals from California to Florida, or you are?

		DR. CARROLL: The seasonality of these tests is such that we do
conduct some of them in Florida.  We would have both types of subjects
present.

		DR. FISHER: Okay.

		VOICE: I thought you did a very good job, and I want to commend you
for replying to all of the concerns that the HSRB listed last time.

		There was one concern that you didn't respond to.  There was a
suggestion that another investigator, outside of the study, screened the
control investigators.  And you didn't in your - that the control
investigators would be there too, the investigators that are part of the
study, and that they should be screened by someone who is not in the
study, do you remember that?  They would be screened a little bit
differently maybe than the participants?

		DR. CARROLL: These are control investigators?

		VOICE: The investigators who are going to serve as controls might be -
make sure they are not screened by the study director, he's serving as a
control, he's screening himself, or someone - to make sure that there is
an independent person who didn't really have a relationship with the
persons being screened to make sure they actually make a decision.

		DR. FISHER: So at this point there are no controls, but there are two
people out in the field who are going to measure - the two people who
are employees are going to be measuring biting pressure or something?

		DR. CARROLL: Measuring ambient bite.

		DR. FISHER: Right.  So I guess the question is, how is the screening
of these two individuals, so they're appropriate, they're not
vulnerable, they're not at risk, going to be handled?

		DR. CARROLL: I've left that objective, because I did not devise a way
where I felt personally as comfortable in ensuring safety if I was
relying on the judgments of another person to do it.  There just aren't
many professionals who have this kind of experience.  I do have people
who worked with me for many years who are mosquito aficionados and know
how, at a professional level, to comport themselves very safely.

		And I also have preexisting subjective knowledge that they seem to be
rather typical, not far from the average in terms of attracting - and
scientifically this is the very weakest point of this general study
design, where we, to minimize risk, and maximize safety, we are using
such a low intensity of data collection to measure ambient biting rate
that my preference is to maintain - as a scientist maintain more control
over the quality of data based on my experience with particular
assistants.

		DR. FISHER: I guess the issue is, maybe you've handled this, or maybe
this is something you could do, but one thing we suggested in the report
was that those two investigators who serve as subjects are treated just
as any other subject; therefore, their inclusion-exclusion criteria and
procedures are submitted to the IRB, and reviewed.  So that was one form
of quality control.

		I guess I don't remember if you were explicit in terms of - in other
words, what I think you should do, if you haven't done it, is describe the
inclusion-exclusion criteria that you would use for anyone who would be
subject to what these two individuals will be, and demonstrate that
these two people meet that inclusion-exclusion criteria.

		I assume what you're saying is, they don't have some horrible reaction
to mosquitoes, and whatever else your inclusion-exclusion criteria would
be, but it's important I think what Suzanne was raising, how important
we felt it was to apply the same ethics and inclusion-exclusion criteria
to people that are employees or experts in the study as not.

		DR. CARROLL: I agree with that.

		VOICE: And also to add to what you just said, to document that the
control subjects met the inclusion-exclusion criteria, just as you
do with the test subjects.

		DR. CARROLL: I agree with that, and frankly it hadn't occurred to me
to treat them any different.

		VOICE: Let me ask a very narrow clarifying question here.  The two
untreated subjects in the mosquito field study who are used to verify
biting pressure, are they or are they not employees of Carroll-Loye?

		DR. CARROLL: They're not employees.  They are people who have worked
with me many times, that have participated in many studies, two or three
a year.

		VOICE: But they don't hit the exclusion criterion of people who are
dependent on you for employment or academically; is that correct?

		DR. CARROLL: No, they have other careers.  They just like doing this
kind of work on weekends.  A few weekends a year.  They are not in any
way legally or practically employees.

		VOICE: And in terms of the information in the protocols about
recruitment and informing and consenting, they are part of the same pool
as everybody else, right?  They just happen to be people that you have
known for a while, but they are from the university community, subject to
the same exclusion factors, they're getting the same briefing?

		DR. CARROLL: Yeah, they are not treated in any way different at all.

		VOICE: Okay, that was our understanding, but I wanted to make sure it
was clear.

		DR. FISHER: Yes, Sean. 

		DR. PHILPOTT: So one question that I've been mulling over in my head
for the past couple of minutes gets back to what Janice originally
talked about when she was in Florida.

		And you said in response to that that there are times when you bring
your students with you as subjects.  But that raises a number of issues
as well in terms of voluntariness of withdrawal, particularly if you
have flown them to Florida with you.

		Why are they coming with you to Florida?  Just solely for
participation in this study?  And how is that reflected in the consent
documents and in their compensation?

		DR. CARROLL: You've asked - their compensation is at the same hourly
rate.  They don't pay any costs for transportation.  It does not say in
the current documents that if they required transportation that would be
regarded as unusually expensive to successfully complete their
withdrawal, that it would be covered.  In fact it's just implicit at
this point.  It is not specifically addressed.

		DR. FISHER: So I think the recommendation would be, if it's not
already there, that for those who are being transported the informed
consent quite clearly include information that says you can withdraw
without any penalty, which means we're taking you back and you still
get the same hotel room.

		DR. CARROLL: Well, we're flying you back.

		DR. FISHER: Right, we're flying you back and you still have the same
hotel room.  There are times when the fact that there will be no penalty
for withdrawal needs to be more clearly explained, depending upon the
context the subject is in.

		And I think what Sean is implying is that you might want to enhance or
we might recommend that you enhance the informed consent for those
subjects so they don't think their travel back is dependent on their
participating.

		DR. CARROLL: Dr. Fisher, another clarifying question about this.  Our
understanding is that withdrawal from the field study phase amounts
fundamentally to declining to pull your sleeve up the next time.  But
whether you are a 30-minute drive away from home in a bus in California
or a half-day flight across the country in Florida, you still have to
get home.

		I don't think Florida makes a very significant difference here in the
whole question of withdrawal.  You could make the protocol regardless of
what the place is more explicit that on the day of field testing here is
how you will get from the collection point to the test site and back
again, or something like that.  And this would not be affected.  We will
get you home if you withdraw.

		But I repeat, it doesn't seem to me that the freedom to withdraw is
affected very much by whether you are on foot in the boonies in the
central valley or on foot in the Florida Keys.  You still need to get
home, and you still have the same circumstance to withdraw from if you
choose to do so in the middle of the field study.

		VOICE: I agree with Mr. Carroll.  And I think the only real difference
is that subjects who go to Florida to participate in the study have
themselves an ethical obligation to complete it.

		DR. FISHER: I'm going to ignore that comment and go right on to
something else.  No, it's the same issue, it's the Florida issue.

		I actually didn't think about this before, but in searching the
document I didn't see that the Florida part of the study is explained in
the consent form.  That's separate, and there might be a transportation
issue and a time issue.

		How is it that you might discuss or describe to the California
participants their opportunity to travel to the other study site?  Or
did you intend to use mostly Florida people?   Because it's not in the
consent form at all.

		DR. CARROLL: In the past we've used mainly Florida people, and one or
two experienced people could come along.  I think Dr. Philpott's question was
excellent, and yours as it extends it.  The reason it gave me pause
initially is because we've never really had anyone who didn't want to go
to Florida.  Usually that's not the opportunity.

		But we're mainly dealing with people who want to participate.  We get
calls throughout the year.  Is there another test coming up soon?  You
think you might go to Florida again?  We often - you know it's fun to go
to the Florida Keys.  It's more economical and more practical to do the
work in California.

		So I have not formalized that as much as I would have had I had more
experience.  With the current study, since we will be relying on more
subjects - although we've been proposing and exploring ways to be most
efficient, to minimize the number of subjects - we would need to
transport several more people to Florida.

		And again, we are principally dealing with attitudes of people who
just really would like to go along, because you can go to the Florida
Keys in the winter.

		But it would entail describing the logistics of the situation very
clearly.  And I do think there would be concern that a person who might
wish to withdraw because they are uncomfortable with what they are
experiencing in the test might, I would say, unintentionally feel
coerced by an obligation, since they've had the luxury of being flown
across the country to Florida for free.  And I wouldn't want them to
feel like they are in that position.  So I think that merits additional
clarification.

		In contrast to what Dr. Lebowitz has experienced, these are not tests
where you just end up seeing how few bites you can manage to get.  We
have very, very few mosquito approaches.  With the modern generation,
these alternatives are much better repellents.  The subjects are just
waiting for hours to get their first approach.

		So even in the Florida Keys, where it seems like it might be a
mosquito jungle, it's not likely that a person who is in love with their
tropical vacation will suddenly feel they really need to get out of
there immediately but not feel they can ask.

		I don't think that will happen, but I would like to make it explicit
to subjects that they do have the ability to withdraw at any time, even
under the Florida vacation scenario.

		So I think more words should be added; I will add more words for any
test that is going to go to Florida.

		VOICE: That issue, by the way, came up in looking at how one protects
deployed forces in theaters; it was part of an Academy of Sciences
committee I served on.  And the armed forces had a phenomenal amount of
data, as did the contractors to the U.S. government, that reflected -
even within FIFRA - a different scenario than the one you described,
which is why I realized I had blocked on what I had read on pages five
and six; it conflicts with different levels and types of knowledge.

		And so it wasn't - although my own experience has been terrible, and
I'll never go back to Florida, that didn't have anything to do with it. 
That must have been a subjective kind of rationale.

		But in fact, if you say from your experience that they have to wait
hours to be approached by a mosquito, you probably have knowledge about
those specific locales that the armed forces never studied.

		DR. CARROLL: That the armed forces never studied?

		VOICE: Or didn't report on because they might have to kill us.

		DR. CARROLL: Well, I'm now consulting for the Armed Forces Pest
Management Board.  That's a management board on mosquito repellents. 
And in fact I'm doing my best to transfer the kinds of ethical insights
we've had to the practices that I use there.

		DR. FISHER: Lois?

		DR. LEHMAN-McKEEMAN: I'll preface my question by saying that I know
very little about the biology or the husbandry of ticks.  But in reading
the protocol, just a question of clarification.

		The tick protocol indicates that the ticks used in the study, which
are laboratory ticks, are reared on quarantined rodents.

		DR. CARROLL: Yes.

		DR. LEHMAN-McKEEMAN: Now I didn't know precisely what that meant.  But
as a person who is in fact highly allergic to rodents I was wondering
whether one of your criteria for exclusion in that study might need to
be individuals who are in fact sensitive to rodents?

		DR. CARROLL: From a biological standpoint I cannot imagine why that
would be necessary.  After the ticks take a blood meal from a hamster
that is kept quarantined - in the sense of being protected from disease
contamination - they ingest the blood meal and then really re-form
their body, very substantially, away from the rodents, because they
undergo a tremendous metamorphosis from one stage to the next.

		So they're barely the same arthropod, let alone having the true taint
of a rodent still on or near them.

		It's a very interesting point, in principle, to bring out.  But as a
biologist I can't imagine a real-world concern from that.  But I will
let it bounce around and think about it.

		DR. FISHER: Jerry.

		DR. MENIKOFF: I have a question about the medical monitoring that you
have in place.  It sounds like these anaphylactic (phonetic) type
reactions that might occur here are extremely unlikely.  Is that
accurate?

		DR. CARROLL: Yeah, I've never seen one myself, nor do I know anyone
personally in the field who has seen one or knows anyone who has had
one.  

		DR. MENIKOFF: But at the same time you've also made the decision that
you are going to notify the hospitals about the fact that you are
conducting a study and give them some information in advance.  I'm
curious as to why you made that decision?

		DR. CARROLL: Just A, because of the potential inherent inefficiencies
of the emergency rooms.  And the possibility that giving them
foreknowledge might smooth things should we need to utilize their
services.

		And because we can't control everything in this kind of interaction. 
I realize that is true of life itself, but we've got individual subject
variations, individual mosquito variation, the interaction of subjects
and mosquitoes, the interactions of these repellents, all mixed in
together.

		So I think it's a variety of low risks being mixed together, but we
can't control all the epiphenomena. 

		DR. MENIKOFF: In light of that would it be overly burdensome to
actually identify a physician who could serve as an emergency contact
and who you could brief in advance so that there could be some person
who is much more familiar with the study itself?  I mean is that
something that is possible?

		DR. CARROLL: That would be very possible, and very straightforward. 
Such people already exist in my community.

		DR. FISHER: Any other questions?

		Okay, thank you very much, Dr. Carroll.  And also thank you for all
the efforts you've put in, and certainly the consideration and respect
for our comments.

		We were impressed.

		So I guess we should do the - should we do the studies separately from
our perspective, or together?  Separately?

		Okay, okay, so Lois would you like to begin to speak to 003?

		DR. LEHMAN-McKEEMAN: Certainly.  I will be very, very brief.

		I would also echo the sentiment regarding the fact that these revised
protocols clearly reflect what was careful and conscientious
consideration of all the discussions several months ago.

		So to the question, it is my conclusion that this particular study
will generate scientifically reliable data.

		The study now has a clear rationale, and again, relative to some of
the earlier questions, these are new formulations for which this kind of
an evaluation is required.

		And again, in light of the questions and discussions we had, these are
- this is a far more complete protocol with emphasis again on limiting
the weaknesses that were described previously, and I'll just highlight
several of them, including the fact that the formulations themselves are
now going to be characterized including their stability.

		Each subject serves as his own control, and the sample size increased
from six to 10.  And with these data there is now a description of the
analysis yielding the complete protection time as well as potentially a
relative protection value.

		So it seems to me that the study as it is designed and now presented
to us will generate the kinds of data that will be useful for evaluating
the efficacy of IR3535 against ticks.

		There are two comments that I want to make of minor substance with
respect to potential improvement to these studies.  One has to do with -
and I think this was mentioned in the EPA review, but not highlighted in
your comments.  There is a suggestion in the protocol that because the
aerosol and the pump have the same concentration of ingredient, albeit
different excipients, they might potentially be tested in the same
subjects.

		My suggestion would be that that should not be the case; that the
lotion should be tested separate from the pump, which should be tested
separate from the aerosol, which would mean then the utilization of 30
subjects and not potentially 20.

		And the other comment I had - and in part for me it was a little
difficult to follow exactly - but I think one of the strengths of the
study is going to be the fact that there will be some better dosimetry
data here, so it's not just a function of whether this thing works, yes
or no.  But because of the dosimetry leg there will be some indication
of the amount of product that has been used, which can go to assessing
retrospectively that we have not caused any safety issues.

		What wasn't clear to me was that that was going to be done in a way
that was going to generate really accurate data.

		As I understood the gauze bracelets, the lotion made sense to me.  You
basically just weigh the bottle.  But the aerosol and the pump appear
to be done by measuring the weights of these gauze bracelets being put
on the arm.  And if that's being done outside, then I immediately have
the question of, well, these are somewhat volatile with respect to their
formulations.  So what is done to basically minimize the change in
weight that is going to take place?  Are they being weighed immediately?
 Are they being put away somewhere?  That kind of information just
wasn't there for me.  And I think the spirit is there to do it
correctly, and perhaps you have a plan for that information.  But I
think that that would be the one place where it could be improved with
respect to detail, just to make certain that it was in fact going to be
an accurate and valid assessment.

		But those are really my only two comments with respect to any
negatives or weaknesses.  Otherwise I think this is going to represent
quality data.

		DR. FISHER: Thank you.  Richard?

		Okay, and David?

		DR. BELLINGER: I concur with Lois.  I was very impressed with the
improvements in the quality of the protocol.

		My only comment is sort of a more general one, first of all, I liked
the addition of the dosimetry piece.  I think that will be very helpful.

		But in some ways I think it got things a little backwards.  I would be
more interested in knowing what the effective dose is and then
instructing people to apply that amount, rather than finding out what
amount people applied and then finding out if that works.

		I mean it's a different question.  But perhaps it can be answered in
another study.  But otherwise I think it's a very strong study as it's
written presently.

		DR. FISHER: Okay.  So I think we'll do the science comments.

		DR. KRISHNAN:  Just for the record I wanted to repeat the comment on
the dose.

		DR. FISHER: Okay, the dosimetry or the dose.

		DR. KRISHNAN: I think, so that the studies on dosimetry and efficacy
are done without compromising safety, it would be useful to indicate the
known NOEL (no-observed-effect level) or safe level.  In this case there
is 300 milligrams per kilogram identified as an animal NOEL, for
example.  So they can ensure that the dosimeters do not contain more
than the approximate safe levels.  One could always take the animal NOEL
and divide by 100 or so as a margin of safety, something like that.

		So I will be interested to see the lowest level that could be used in
the dosimeters for doing these studies.  That's basically my concern. 
Don't compromise anything - but I know the risk associated with this is
small and negligible.
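(The margin-of-safety arithmetic described here - taking an animal NOEL and dividing by an uncertainty factor of roughly 100 - can be sketched as follows.  This is an illustrative calculation only: the 300 mg/kg figure is the value mentioned in the discussion, the factor of 100 is the conventional 10x interspecies times 10x intraspecies adjustment, and the function name is hypothetical.)

```python
def approximate_safe_level(animal_noel_mg_per_kg: float,
                           uncertainty_factor: float = 100.0) -> float:
    """Derive an approximate safe exposure level from an animal NOEL.

    Divides the no-observed-effect level by a conventional uncertainty
    factor (commonly 10x for interspecies differences times 10x for
    intraspecies variability, i.e. 100 overall).
    """
    return animal_noel_mg_per_kg / uncertainty_factor

# Using the 300 mg/kg animal NOEL cited in the discussion:
print(approximate_safe_level(300.0))  # 3.0 mg/kg
```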

		DR. FISHER: I think we'll go to the ethics, then we'll come back to a
summary of both.

		Dr. Philpott?

		DR. PHILPOTT: So I'm going to focus, of course, on just ticks right
now, and the charge question for the ethics relates specifically to
whether this study as submitted now complies with the requirements of
subparts K and L.

		And I'm going to start with subpart L because that specifically
addresses the question of exclusion of children, pregnant women, and now
with the revision, nursing women, and it's very clear from our exclusion
criteria that the requirements of subpart L are met.

		Subpart K relates to submission of documentation from the IRB to
demonstrate appropriate scientific and ethical review.  The issue of
the confidentiality of procedures aside, we've received the minutes,
and I see no evidence from them that we are not in compliance with
subpart K at this point either.

		Now, it was very hard for me, however - the HSRB is not supposed to
be an IRB, but it's hard for us to take off our hats.  I'd like to
start by commending Dr. Carroll for this revision.  I think he took all
of our comments at the last meeting to heart, and he was actually very
gracious about it.  And I wish people who submitted stuff to my IRB
would be as kind.

		I just have a couple of minor suggestions.  I believe that he has
addressed all of the concerns that we raised at the previous meeting
with respect to coercion and voluntary informed consent, and he
particularly came up with a novel approach for dealing with the
confidentiality issue as it relates to women and the over-the-counter
test for pregnancy, to meet compliance with subpart L.

		I did notice two minor things that should be considered.  One is that
on the informed consent document for AMD-003, under the benefits, it
specifically lists compensation for your participation as a benefit, and
that should not be there.  I was surprised that the IRB let that go
through.

		I also noted under the section for consent and signature, the last
page of the informed consent document, it authorizes the release and
disclosure of medical information.  I don't quite see why the release
and disclosure of medical information for this particular study is
required.

		DR. FISHER: Let me just ask - this would probably be beforehand?  I
assume this is required for safety information?

		(Off mike response)

		VOICE: Just following up on that point, maybe some of the eligibility
information they're collecting might fit under something in terms of
that.

		Bottom line, I like what everybody else said; both Dr. Carroll and EPA
did a great job insofar as the earlier comments are concerned.  I agree
with Dr. Philpott's conclusion that basically this meets the applicable
statutory standards.

		One very minor comment I'll make: in general, on risks, I think it's a
good thing to give as much available information as there is.  And in
particular, the current consent language is relatively vague about the
knowledge we already have about the possible side effects of the base or
the active ingredient.  It says you may experience blah blah.

		To the extent that in fact we know there is a very low likelihood of
experiencing that, I think it's beneficial to actually say so instead of
just using vague words such as may.  Give greater detail; in fact it
would be more encouraging to subjects if you tell them one person in 100
has a skin reaction to this, or something to that effect.

		Other than that, generally very good.

		DR. FISHER: Richard.

		DR. SHARP: Just two points.  The first is the concern that I had
initially put out there: this may be a well-designed study, but we also
need to raise the bigger question of whether it's going to answer an
important question that we don't yet have an answer to.  The discussion
seems to clearly indicate that in fact there is reason to conduct this
study from the point of view of assessing the efficacy of the product.

		So I want to make it very clear that I'm withdrawing that, and make it
clear that I'm convinced on that point.

		Second has to do with the suggestion that we've already touched upon. 
I think it would strengthen this protocol from the point of view of
ethics to identify a physician who is clearly designated as responsible
for assisting in the very unlikely event that there is a serious adverse
event; somebody who is accountable in some way, and familiar with the
protocol.

		So again, if that isn't particularly burdensome, I think that it is a
strong addition here.

		DR. FISHER: Okay, good.  So I'm going to go back over the science.

		So in terms of the strengths, it's careful and conscientious.  These
are new formulations for which the data are required.  It's a more
complete protocol. 

		This study will generate scientifically reliable data.  There is a
clear rationale.  The formulation is now better characterized, including
stability.  The increase in N is good, with subjects as their own
control.  And analysis of the data is now provided, which will also
provide information on complete and relative protection.

		It could be improved.  The lotion should be tested on separate
subjects, which means you may need 30 subjects, not 20 subjects.

		Dosimetry data is valuable in and of itself.  There was a question
about using the gauze bracelets outside - what the volatility means for
measuring the weight.  So it might be important to give additional
information on that.

		Also it would be useful to indicate the NOEL and LOEL information that
is currently available, as one is talking about dosimeters, to ensure
that they are actually at a safe level.

		And I think the suggestion for future research is whether or not to
measure what the effective dose is, and then have that inform the
application.

		Any other points, or did I get something  just totally confused?

		VOICE: The only thing you confused was the two that should not be
combined are the aerosol and the pump.

		DR. FISHER: Great.  

		Oh.  Got it.  Okay.

		So any other comments?  Now to the ethics: the study met the
requirements of subpart L in terms of exclusion of pregnant women and
children, and went beyond with the nursing women.

		Subpart K - it appears as if it met those criteria based on the
materials that we received.  There appears to be sufficient benefit
based on finding out the efficacy of the product formulation. 

		In terms of improving the study, benefits needs to be taken out of the
informed consent - I mean, I'm sorry, compensation needs to be taken out
of the informed consent as a benefit.

		The release of medical information statement needs to be deleted.

		Risk information in the informed consent might be made more specific
since there's information that's there, rather than may be, that there
is a low probability, there is a likelihood some people get this,
whatever is appropriate.

		And in addition to the hospital that it would be helpful to identify a
specific physician in the area in case of the unusual event that there
is an adverse reaction that there is a physician who is aware.

		Any other comments on the ethics?

		VOICE: Just a point of correction.  L now requires that nursing women
be excluded, so it didn't go beyond.

		DR. FISHER: Thank you.  Okay.

		VOICE: Let me just add for clarification. We put out a proposal, I've
forgotten exactly when, it was sometime just before your last meeting
that we were going to do this refinement of the rule, to add nursing
women to the prohibitions.

		And we said this would become effective if there were no comments on
it - if we received no comments on it.  So it magically became effective
on August 22nd.

		So now the - all of the various provisions, the prohibition sections
of the rule, the ones that prohibit EPA testing, the ones that prohibit
third party testing, and the ones that prohibit EPA consideration, all
three of those passages in the rule now refer to children defined as
under 18; pregnant women; or nursing women.  In all three places.

		DR. FISHER: Okay, yes?

		VOICE: Can I ask a question?  What was the statement on medical
information that you wanted removed from that section, where it says
because of the need to release information?

		VOICE: So on the AMD-003 consent form last page, which is I think page
seven - I don't know the page in the document, right above the signature
line where it says consent, it says, release of medical information. 
What's interesting is that is not in AMD-004.

		DR. FISHER: But anyhow, there is no medical information that is
needed.  It was just kind of plugged in there as something that was on
more clinical ones, so he's not collecting any medical information.

		(Off-mike comments)

		DR. FISHER: Do we have a consensus?  Any other comments on 003?  

		Okay, we'll move to 004, and that's Jan - I believe you have the brief.

		DR. CHAMBERS: I'd like to add my comments to some of the others that
have been said, and commend Dr. Carroll for his careful revisions and
being very responsive to comments we made a few months ago.

		What I did in my critique last time was go through the scientific
questions that we as a board have, and I'm going to run through those
same things again quickly.

		This time a scientific question was stated, that is, to test the
efficacy of this IR3535 for repelling mosquitoes.  The existing data
were not adequate to answer the question for these new formulations.

		Because existing data were not adequate to answer the question of
efficacy, new studies involving human studies are necessary.

		The potential benefits of the study were clear, that an effective
repellent would be available; it would have either greater efficacy
and/or fewer drawbacks than what was currently approved.

		It's likely the benefits would be realized, that is, the efficacy of
the repellent, because there is a long positive history of this compound
from its European uses as a repellent.

		The risks have been more extensively described as have the strategies
to minimize risk.  And the most likely relevant risk would be disease
transmitted by the mosquitoes, and that's been dealt with to minimize
the risk.

		He also indicates that the inert ingredients in the formulation lack
toxicity at exposure levels anticipated, and that was one of the
questions we had earlier.

		With respect to study design criteria, the purpose of the study was
clearly defined.  There were specific objectives and hypotheses, the
hypothesis being that IR3535 is an effective repellent.

		The studies as described can test this hypothesis.

		Sample size is now a definite 10 individuals - it was a little fuzzier
before - with two negative controls and no positive controls.

		The number of subjects would be repeated in two locations.  There is a
plan allocating individuals to treatments.  And the findings from the
study can probably be generalized beyond the study sample.

		With respect to participation criteria, there is more extensive
justification for the selection of the target population.

		The participants were representative of the population of concern,
although, as was pointed out earlier, that wouldn't include everybody;
there's really no way around that.

		The inclusion-exclusion criteria are appropriate.  Sample is not a
vulnerable group.  With respect to measurement criteria, the
measurements were expected to be accurate and reliable.  The
measurements were appropriate to the question being asked, and quality
assurances were in place.

		Statistical analysis criteria, data should be able to be analyzed
statistically, and my spin on it is the statistical method is
appropriate.

		Laboratory and field conditions.  No laboratory experiments here. 
Field conditions were representative of the intended use, and the
protocol now includes a stop rule plan and medical management plan.

		So in conclusion the revised protocol contains considerably greater
detail than the original and it answers all the scientific questions
that were posed by the HSRB in its initial review.

		The PI has been extremely responsive to the original review comments,
and the revised protocol should generate scientifically valid results of
efficacy in repelling mosquitoes.

		And I'd just like to add one editorial comment.  I rode up here on the
plane yesterday with somebody from Mississippi who is studying the
incidence of West Nile virus in our state, and we've just had the worst
year that we've ever had.  I think we've had 147 cases with six deaths
this year.  And so I will certainly be enthusiastic about the
development of a product that would be more effective, and more likely
to be used by people, because we really need it.

		DR. FISHER: Thank you.

		Steve?

		(CD Change)

		STEVE: Following Janice's lead, I went through the exact same list of
points and I came up with almost identical language. And I too was very
impressed by the improved quality of this protocol, and the great
thought that went into it. 

		DR. FISHER: Suzanne?

		DR. FITZPATRICK: I'll third that.  Dr. Carroll did a good job.

		The only two things that I would have liked to have seen would be a
copy of the California Department of Pesticide Regulation experimental
subject bill of rights.  It's probably more of an ethical thing.

		And also one of my pet peeves - this is an ethical thing - the
expiration date should be on the consent form.  Even though they are
following continuing review, the consent form is out of date without
that.

		DR. FISHER: Thank you.

		Any other comments on the science?

		Okay, let's move to the ethics.  I can't read the - 

		DR. PHILPOTT:  It's me again.  I'm busy today.

	ETHICS

		So once again, great job.  Thank you for being so responsive.  And I
can - I can conclude that once again with the addition of the minutes
from the IRB meeting, and the exclusion criteria now expanded to include
nursing women, that the AMD-004 does appear to meet the applicable
requirements of both subparts K and L.

		Just a couple of comments in addition to what Suzanne has already
brought up regarding ethics comments that I agree with.

		I'm a little concerned, as you might have been aware from my questions
of Dr. Carroll, regarding the idea of recruiting subjects in California
and then flying them out to Florida.  That could raise questions of
undue influence, both for recruitment purposes as well as for
voluntariness of withdrawal.  And I echo Sue Fish's comments earlier,
her questions about the two negative controls that serve to determine
biting pressure: there needs to be a lot more explicit discussion of
how they're recruited, and also of the consent procedures involved in
bringing those individuals into the study.

		DR. FISHER: Thank you, Sean. 

		Jerry - or somebody.  Jerry.

		DR. MENIKOFF: Yes, I agree with Dr. Philpott's comments.  One thing
I'll just note for the record.  The risk section is actually very well
done here.  In addition to noting you may get West Nile virus, and even
though they correctly note - I'm assuming it's correct - that your
likelihood of getting it is so low, they go on to point out what might
happen to you from West Nile virus, including pointing out
disorientation and possible coma and paralysis, which is just a good
example of just letting a person know that this stuff could happen to
you.  So I just wanted to note that.

		But I think it meets the appropriate statutory standards.

		DR. FISHER: Thank you.

		Richard?

		DR. SHARP: Apart from the comment that we made earlier about medical
monitoring, I have no other additions.

		DR. FISHER: Great.  Okay.  So I'm going to read what I think we said. 


		For the science, I didn't get everything Jan said, because she was
reading so quickly.  Fortunately she'll be writing this.  But based on
our criteria, the question was clearly stated; the existing data are not
adequate to answer the question.  The benefits are clear in terms of
finding out something about the efficacy of the formulation.  The
benefits have great potential to be realized.  Risks and strategies to
minimize the risk are much better described and designed.  The purpose
is clearly defined.  There are specific hypotheses and they can be
tested.  The sample size is better.  There are no positive controls. 
There are of course two different locations.  There are plans for
treatment.

		The generalizability is as good as one is going to get in these
situations.

		There is a better justification of subject selection.

		Measurements are appropriate, adequate and reliable.  The data can be
analyzed statistically, and the field conditions are representative of
the intended use, and therefore, there is the conclusion that data
generated will be helpful in producing scientifically reliable data,
useful for assessing the efficacy of the test substance for repelling
mosquitoes.

		Any other comments?  Yes.

		VOICE: There is one observation that I wanted to make, which I think
we need to clarify with Dr. Carroll before you proceed, and that is whether this
protocol calls for testing these materials in both California and
Florida, or whether it is set up to describe a test which could be
conducted in either California or Florida.

		I'm not certain of the answer to that, but from the way Dr. Chambers
summarized it, my sense is that she thinks it's both-and.  I read it and
I thought I read either-or.

		So let's get that cleared up before we get agreement. 

		(Extended off-mike comment)

		VOICE: Habitat not meaning state, meaning something on a much smaller
scale.

		DR. FISHER: Okay.  With respect to the ethics, the study meets the
criteria of subparts K and L.  The risks as shown were very informative,
and so was the informed consent.  It would be good to have a copy of the
California bill of rights given to the subject; is that what you want? 
Okay, so given to subjects.

		There should be expiration dates on the consent form, and if Florida
is used, then as we discussed, the logistics need to be explained so
that the freedom to withdraw is clear in the consent, and I think as
John Carley is pointing out, this may be needed both for somebody - it
may be needed even if they are not going to Florida, if they are taken
into the wild somewhere or something like that.

		Also, the two untreated subjects, it must be clear that they are
treated with the same inclusion-exclusion criteria as others in terms
of vulnerability and risk and that the IRB has reviewed them as well.

		And the medical monitoring should be similar whether you're in
California or Florida; there should be some arrangement with a local
physician in case of an adverse event.

		Any other comments?  So we have consensus on 004? 

		Congratulations.  This is our first revised approved protocol - well,
not approved, but recommended-for-approval protocol.  Recommended or
whatever, positive.  So I think that's great.

		Okay, what I'd like to - it'd be great to - I think we should take a
break and then, even if it's a little over 5:00, just finish up going
over the guidance for people submitting protocols.

		Oh, I'm sorry.  But I think we'll combine that, right?  Because the
review format - is the review format different from the -

		VOICE: The framework that is attached to the draft PR notice is the
same framework that was used for the assessment of these protocols.

		DR. FISHER: Right, but I thought we'd handle it altogether.

		VOICE: It's a bundle.

		DR. FISHER: Right. 

		So 15 minutes?  Ten minutes?  What do you want?  Ten.  

		(CD Change)

		VOICE: I think there are some points that you do need.  I will
characterize my presentation in one word: brief. 

		So give me a break.

		First a correction.  The date is not October 19th; I didn't think we'd
get this far today.  But you continue to amaze me.

		What is a PR notice?  It stands for pesticide regulation notice.  They
are designated by the year in which they are released, a hyphen and then
a sequential number.  

		It's a longstanding series.  Informal nonregulatory guidance
documents, they are advisory in nature.  OPP uses them to communicate
with the regulated community and other stakeholders.

		The audience intended for this particular one is anyone who either
plans to conduct research involving intentional exposure of human
subjects intended to be submitted to us under the pesticide laws, or
anyone who submits the results of completed research to us whether or
not it involves intentional exposure for consideration under the
pesticide laws.

		Our main concerns - well, the main points in the notice are the
schedule of meetings, and that of course would be something that needed
to be renewed periodically to tell people what their target dates are. 
And then how and when to submit protocols; that's the principal focus;
how and when to submit the documentation of ethical conduct for
completed research.

		Our main concerns are that submitters be able to understand the
applicable requirements, and then be able to meet them on the first
pass.  The experiences that we've had over the last several months with
Scott Carroll's protocols, with the AETF protocols, with the
requirements for multiple submissions on the ROAT study and so on, we
want to get past that transitional stage.  We want to put something out
that tells people what to do clearly enough so they can get it right or
nearly right first time out of the box.

		And then our basic questions, which kind of telegraph the charge: if
submitters comply with the guidance in this notice, will we have
everything that we need, and will you have everything that you need? 
Will we both be able to find it easily?  Is this an appropriate way to
organize, index and summarize this stuff?

		In implementing 1125, this is the submission of protocols, the
protocol submitter has to write the protocol and pull together all the
supporting materials as required by 1125.  That's a much larger and more
complicated job than it used to be for these people simply to knock out
a protocol.

		They have to get IRB approval.  They have to get all that additional
documentation from the IRB.  They may have to obtain approval from state
authorities; it varies by state.  They have to compile an index.  The
index that we expect them to use is attachment eight to the PR notice.  

		It's the same document that we used as our completeness check for 1125
submissions.

		Then they have to send the whole bundle to us far enough in advance of
the HSRB meeting that we can review it, make the necessary decisions,
get it queued up for your consideration, and make it available to you
and to the docket.

		Then when we get it we have to assess the - double-check the
completeness of the package and notify the submitter of any
deficiencies, and then we complete the framework, attachment B, for the
protocol.

		Now we're not going to do the framework until we're satisfied that we
have the complete package.  But once we've crossed the completeness
hurdle then we'll do the framework, then the assigned ethics and science
reviewers will do an integrated narrative review based on a single joint
execution of the framework.

		If it's acceptable we'll send the review out to the submitter and
schedule the protocol for your review.  If it's significantly deficient
we'll notify the submitter for correction, but it will fall out of
the queue for HSRB review.

		The framework as I mentioned this morning is heavily adapted from the
original work that Emanuel did.  Why are we doing it?  We want to make
sure that all the critical questions are addressed.  The framework as we
have proposed it here contains some points that were not within the
lists that you all provided us after the June meeting.  I'll point out
which ones those are.  These are things we still think are important,
and there may be others yet that either you or we think of that we want
to add to the framework.

		We want to organize information from the protocol that is relevant to
our acceptance judgment.  Our intention is that the framework, or the
unshaded portions of the framework, are going to be filled up with
quotations from the protocol, from the submitted package; not
judgments.

		We don't want to answer questions yes or no.  We want to say, here's
what it says in the protocol about that, and then put in parentheses
that this is on page 46 or whatever, so that there is a link between the
big pile of stuff.  And you saw that  the Carroll protocols as amended
end up being about an inch thick.  Well, it's hard to master all that
information.

		So the framework is kind of the intermediate point, it's the set of
pointers to the real deal.  And then the final purpose is to provide the
supporting documentation for the narrative review, which summarizes the
judgments and conclusions, and makes a decision about the
acceptability in terms of the applicable standards.

		The items two through eight on this list are the same ones
you're accustomed to seeing on the other adaptations of the Emanuel
framework.  There is an inelegance.  I probably should have called the
identification block A, and then I could have left the rest of them
numbered one through seven instead of changing the numbers.

		But it's the same topics you've seen before. 

		The parenthetical numbers here are the numbers in the list of science
criteria from your - it was from the draft final report from the June
meeting.  There don't happen to be any on this slide from the ethics
committee, but I'll explain those when we get to them.

		So for example, Item C there in the framework, are all the
prerequisite studies in hand, was not on the list of science
criteria.  You talked about it, but it didn't make its way into that
list.  We wanted it in here so that we could be sure that it had been
addressed.

		And the last point on this slide, how would the data be used by EPA,
was also not on your list, but it's addressing that question, how would
this fit into the weight of evidence.  What sort of hole in our
understanding would it plug?

		Next one.  This is the first of multiple ones on study design.  Let's
see, B, is there adequate statistical power to definitively test the
hypothesis?  This question was raised explicitly in the NAS
recommendation 5-1, but it wasn't on the list of science criteria. 
Obviously it belongs there.

		The "how will subjects be exposed" is just the description.  So many
subjects exposed this way, that way; that needs to be in there
somewhere.

		The rationale for the choice of test material and formulation is as
important as the rationale for the choice of those levels and the
staging.  So we wanted to be sure that one got covered.

		Next one, let's see, we added in K; are there adequate and appropriate
controls.  All of the others of these, obviously you can see that we
shuffled the order quite a bit.  But the questions are there.  We
thought that this sequence was consistent with our experience of
extracting tidbits from studies, and would be a little easier to fill
out than the sequence that you all suggested.

		Then when we go to subject selection, next slide please, the one area,
1111A3: the ethics subcommittee used very different types of citations
in your last draft report, so they weren't numbered one, two, three,
four, five.  And these are references that were in the material that
Skip Nelson presented at the June meeting.

		Subject selection in general is covered under 1111A3.  The very
important question, how and from what population will subjects be
recruited, was not listed, but we want to know that, so we plugged that in.

		On risks and benefits, we separated, as you saw in the assessments
that I presented of the Carroll protocols, we separated the presentation
of qualitative risks from the probability of the occurrence of each of
those risks.  And I think that is a very helpful way to break down what
we know and don't know about risks.

		And the question about whether post-exposure monitoring would
continue long enough to be able to see potential adverse effects is one
again that was part of your discussion but it didn't turn up in the list
of criteria so we added that in.

		Continuing on the next slide, none of these were specifically listed
by the ethics subcommittee, but for the bottom line, are the risks
reasonable, we broke this down into five predecessor questions as a good
way to work our way up to being able to address the question, are the
risks reasonable?

		Bear in mind, however, that in the framework, what we would put in
there in this section, 5K, would be what the protocol says about the
reasonableness of the risks.  It's in our narrative review that we say,
and here's our judgments about that.

		Independent ethics review is implicit in the references that were made
to 1111, which are the IRB's criteria for decision.  It was explicit in
the cited NAS recommendations.  Who reviewed it?  Are they independent? 
Are they registered?  And so on.

		Then on informed consent, the D and E were added.  Obviously relevant
to protocols that we've looked at last time and this time, and they
needed to be made explicit.

		On the next slide, we added all of these.  These are particularly
relevant, I think, to agricultural worker exposure studies where there
are much more likely to be language differences, and if you will power
differences, between - I don't mean statistical power either - power
discrepancies between the investigators and the subjects, that have to
be compensated for, and we would expect to see them addressed in
protocols.

		Then on respect for subjects, this was mentioned by the ethics
subcommittee, these are the same questions we have been asking.

		That's what we've got in the framework for the protocols.  All but
three of the numbered science criteria from your previous report are
covered.  The three that are omitted, one was an oversight; it's the one
about Q/A measures.  The next version of this will plug that in.  It was
just a mistake.

		The other two that were omitted were the two that specifically
questioned the concordance between lab and field conditions, and those
only apply in a relatively narrow range of studies, so we decided to
drop them out of the generic framework.

		In cases such as some of the efficacy, repellent efficacy situations
where that is important, obviously, it becomes part of the science
review.  But in terms of being a general characteristic for this overall
framework, it didn't seem appropriate to include it in all cases, so we
skipped those.

		1303 is a little bit of a different situation.  You saw what happened when
the size of the ROAT study more than doubled when we went back to them
and asked them to comply with 1303.  This is a temporary phenomenon. 
Most studies like that are going to have to come in for protocol review,
and most of those documentation requirements will be addressed in
responding to 1125.  There won't be much left.

		Next slide, please.  1303 requires submission of documentation not
previously submitted to EPA.  So if something has been through the
protocol cycle, there won't be a lot more left to do here.  But the
study submitter, in a case like this, which spanned the implementation
date of the rule, or in a case where it's like an observational study
where it doesn't require prior protocol review, when they finally submit
it, they'll have to put all this stuff together.  They will collate it
and index it, again, indexing it according to the checklist that we
provided in appendix C that says these are the requirements of 1303, and
send it in.

		Then we check the submission for completeness.  When it's complete, we
review the science.  Here we're talking about separate science and
ethics reviews.

		Then if the decision is that we think we want to rely on this study in
our science assessment, then and only then will we do the ethics review.
 If it's not going to pass scientific muster, we don't need to do an
ethics review.

		We send the reviews that we conduct back to the submitter, and if it's
required, we schedule it for review by you.  I won't remind you, unless
you want me to, of the exceptions, which cases don't have to come to
your attention.

		The ones that do are intentional dosing studies, conducted since the
new rule, or conducted prior to the new rule but which measure toxic
endpoints.  Those are the ones that are required to come to you before
we rely on them in our actions.

		There are the charge questions, but before I stop Bill wants to say
something.

		DR. BRIMIJOIN:  Actually, two very short points.  The document that we
prepared and asked the board to comment on is a draft document.  Our
plan is to revise it in light of the comments that we received from you.

		One of the things I am confident we will do is add additional material
relating to submission of protocols for which parts of the submission
are claimed to be confidential business information.  So this PR notice
will likely have additional paragraphs of text dealing with how
submitters should handle CBI claims.  It may be nothing more than to
refer to existing documentation.  But we wanted to alert you that we
will be making that revision, and that tomorrow's discussion will be
helpful to us in thinking about what we want to put into it.

		Once we have revised it, our plan will be to make it available to the
public generally so that they can comment on it, and that comment may
lead us to make further revisions.

		And finally our experience as we continue to deal with protocols and
completed studies and so forth may lead us to make further revisions.

		MR. CARLEY: Let me add one other point.  I was told earlier today by
somebody who was here in the audience that we should expect to receive
within the next few days a couple of new study submissions, subject to
1303.  They were initiated before the rule, submitted now, and I had
sent this guy this draft a few weeks ago.  He said they responded to it.
 So we'll have some other tests of how it works with real life stuff,
which may also influence some changes that we make.

		DR. FISHER: Bill, I just wanted to underscore again and get a
clarification.  The board may not be in a position to give you advice
tomorrow about how we view CBI.  We're going to be talking about it for
the first time.  I think it's something we're going to be specific and
very careful about because of issues that we've discussed before.

		So I'm a little taken aback that this will go out to the public.  You
are assuming that you will have sufficient information from us tomorrow
to move ahead with a portion related to CBI.  I don't necessarily think
the board has yet committed to be able to make recommendations to you
about that.

		DR. BRIMIJOIN: I understand, Dr. Fisher.  And I'm always the optimist, first of
all, about how useful the board's advice will be.  And even the
discussion itself I think will be informative for us.

		And I expect that at the very least we can include in this PR notice a
description of how we currently handle CBI claims, and documentation and
formatting and that sort of stuff, which will be very useful to remind a
group of people, Dr. Carroll for example, who has not been accustomed to
dealing with EPA protocols.  This is a new audience, and we will be
looking at it, and who would probably benefit from that at the
very least. 

		And if we go beyond that and elaborate on something, that's good news.
 But it is - both John and I have said, these notices are not binding,
nor are they necessarily the final word ever spoken.  We expect to
revise and improve upon the guidance.  So if the board comes back and
gives us advice sometimes, three, six, nine months from now on CBI
issues, we can revise the guidance to reflect that.

		MR. CARLEY: Let me just add one more thing about that.  We feel
considerable urgency to get something out there that has some kind of
official status that says, dear world, here is how to put the package
together.

		If we don't get that out there, we have no control over the form that
the package assumes, and believe me, bringing order out of these piles
of things is not quick or easy.

		DR. FISHER: It reminds me of some of these bills in Congress, that all
of a sudden all these other things are amended to it.

		And so I guess I'm very much for what we saw here, and what you've asked
us to provide you comments on.  The fact that there has to be a tag on
about CBI on this document, I don't see us making any kind of
specific recommendation on CBI tied to whether this document can or
cannot go out.

		So I'm just a little concerned about the fact that you can make them
two documents, or you could make it an appendix, I don't really care. 
But I don't think CBI should hold up this, number one; number two, I
wouldn't want anything to go out - and that's up to you - but anything
that would imply - what you have done here is really, I think,
such a tribute both to you as well as to the board in terms of taking
how you've observed us and the criteria, taking what's best from it,
taking what we've stated as our criteria, and along with your
responsibilities, integrating them into a document that can really
provide very useful information to registrants, not only in terms of how
you are going to respond, but what the board is expecting.

		I'm concerned that if attached to this there's something to do with
CBI, it's going to misinform the public that what you are putting in
there is something that the board may have some expectations about. 
Clearly I also understand you want to get in perhaps the information you
are going to be presenting to us tomorrow, which is detailing what CBI -
how CBI might be defined, what the EPA process is, if you are going to
try to evaluate something as EPA.  Those are all facts.

		But to assume that there is a specific type of process that will be
amenable to the way that the HSRB sees their responsibility, I just hope
that that - you're very careful in not communicating that if in fact
there is not a decision that's reached about that.

		MR. CARLEY: The distinction is clear, and clearly understood by us,
between saying something about how the board is going to handle CBI,
which we have no intention of doing in this PR notice, and telling
potential submitters if you want to register a claim of confidentiality
for any of the material within the scope of this submission, here's how
to label it.  That's the sort of thing that we're talking about.

		We can't tell people that you can't send us anything until we've
resolved how the HSRB is going to handle it.  We also don't want to not
tell them how  to identify exactly what it is that they're claiming to
be confidential.  And I hope that is helpful.  The scope of this notice,
the purpose of this notice, is to tell people how to label it, how to
package it, when they send it to us, and how to tell if they've got it
all, so that we can all get a better package the first time out.

		It's not to say what we're going to do with it; it's not to say what
you are going to do with it.

		DR. FISHER: I think we're clear except for the way this is written. 
The way that this is written is how not only they're presenting it to
you, but what would be helpful in presenting it to us.  The HSRB is
constantly mentioned in this document, which I think is wonderful.

		All I'm saying is that if the CBI is within this same document then it
may misinform registrants that there has already been a decision about
how that information is going to be presented to the HSRB.

		So I just want to make that distinction that there is not this
misimpression that what was integrated in terms of our criteria in terms
of this document, that anything - that we've expressed any kind of point
of view, or there has been some resolution about how CBI is going to be
handled with respect to the board.

		And that's a discussion we will have tomorrow, but I don't want us to
be forced into - or for you to have an unrealistic expectation that we
might come up with a specific way of handling it.

		Sue and then Mike.  CBI?  Are we finished with that?

		(Off-mike comment)

		Yes.

		DR. LEBOWITZ: If CBI is not part of 26.1125 or 26.1303, then if we deal
with just the part that has to do with those parts, then I think that's
what they want to send out quickly, then the CBI thing is a separate
issue.

		And I think as you said, very clearly, we have to consider that
separately, because we don't want it to interfere or hold up the other
discussions.

		But as long as we're talking predominantly about what should be
submitted as far as 26.1125 and 1303 then I think we can clearly be able
to discuss that, and make some recommendations as to what might go out
for public comment.

		VOICE: I mean we're going to - the point of this of course is to
discuss these issues, and I think we need to talk tomorrow about the CBI
issues and all that.

		But what I'm hearing Dr. Fisher say, which I initially would kind of
agree with is that it would be - it seems that this is a very coherent
document that could go out with its own 2006 dash number, and subsequent
CBI document could go out.  I don't think we need to decide that today,
because we haven't heard enough about - but that's the flag you're
raising.

		DR. FISHER: Maybe you want to put it out simultaneously, that's fine. 
But it shouldn't be under the same - I don't think it should be under
the same umbrella.  It implies that the same kind of collaboration and
discussion on the part of the board is also part of that CBI assessment.

		VOICE: Would it be possible to put it in a PR notice that says, if you
want to make CBI claims about any of the material that you are
submitting, Dr. Lebowitz, under the requirement of say 1125, here's how
to do it, but if you do it, until the question of how the HSRB will deal
with CBI is resolved, we won't be able to schedule this for review.

		DR. FISHER: I think that may happen, but that's for the discussion
tomorrow.

		All right.  So - yes, Susan.  

		Let me just - on how we should do this, I'm trying to figure out. 
Should we just go page by page, and if anybody has a comment on the
pages?  Is that how we should do it?  Or what do you think is the best
way to do it?

		What did you say?

		VOICE: I would suggest we look at appendix B, and realize that A and C
are relevant.  They have to respond to A and C specifically.

		Appendix B actually covers the format, which is what we're asked to
comment on, and lays it out appropriately to cover the points that were
raised in the presentation.

		MR. CARLEY:  To avoid having to refer to page numbers in the package,
we could also go back to the slides in the beginning with number 11,
which, it's not a perfect transcription, there are some slight changes
in words for brevity, but these are the points that are in the
framework.

		DR. FISHER: I think, Michael, there's actually two charges.  But we
can start with the one that you were just talking about, which is the
appendix, and what we have been familiar with, with respect to how they
organize the information.

		And then I believe what we were going to talk about tomorrow, but I
think we're going to talk about today, is just the rest of the material
which I had a few comments on.

		VOICE: I was just suggesting that because it in fact does cover a lot
of the stuff that we created, and is easier to deal with in a way.  And
it represents a format.

		DR. FISHER: Jerry, do you want to take the lead on that?

		DR. MENIKOFF: Sure.  Actually I only have one semi major comment.

		I was noting if you go back to our ethics, HSR ethics rules, or
suggestions, from whichever meeting we addressed this, one of the
categories we had was on risk minimization.  And it struck me that much
of what we've actually been talking about today, and even in prior
meetings, has related to risk minimization.  And that currently doesn't
show up as a separate numbered heading under your framework.

		MR. CARLEY: That's 5(c), what steps are proposed to minimize risk.

		DR. MENIKOFF: It's a subset.  In other words it seems to me we're
spending huge amounts of time on risk minimization, and it is one of the
major headings under the regulations; it's one of the four major
headings under our ethics criteria.  And in fact look for example at
2(c), (d) and (e) for example, it struck me, the whole issue of do you
need to be exposing human subjects in the first place, it seems to me a
lot of the subheadings, like those three, 5(c) and a few others, could
all be put together under a major subheading, a numbered heading of risk
minimization that would go right before risk-benefit analysis.  It just
- at least from the ethics viewpoint, and from the regulations, this is
an important enough concept, and we seem to be spending so much time on
it, that it would not be getting enough emphasis unless we actually have
a heading which says, risk minimization, and that's a good place to put
together all the sub questions that relate to risk minimization.

		And in fact you can expand on some of these and say you know - because
the current question is a little vague, and you really want to push them
on that, you know, what things have you thought about in terms of moving
to reduce the risk, or something. 

		Again, I haven't worked out all the subheadings.  But it just strikes
me, that's an important enough issue that we keep coming back to again
and again that it should be highlighted even in terms of the submitters
so they think about it, so they're going to know ahead of time that
we're going to be worrying a lot about this.

		That was the major thing on that.  Other than that, I think you did a
great job on this.  Why don't I end on that?

		DR. FISHER: Michael?

		DR. LEBOWITZ: I was I think a secondary reviewer.  An earlier part
that we are combining with this, if you want to talk about the framework
first, it's okay with me.

		VOICE: I just have maybe a question, Mr. Carley, or a suggestion.  And
that is, under framework five, with benefits, there are three questions
about subject remuneration.  And we talked a number of times in here
about remuneration not being a benefit.  I don't think it's a risk.  

		But I wonder if it fits better under respect for subjects, number
eight.  Because we talked there, one of the questions is about
voluntariness and withdrawal from the protocol.

		It seems that the remuneration questions might fit better there, and
help emphasize to the submitters that subject remuneration is not to be
considered a benefit.

		VOICE: I had the same thought, only I thought it should go under four,
which is subject selection, how you pick subjects, and how you get them
to enroll.  So I think I agree 100 percent, I think those in the next
slide need to go to this slide.  So with the remuneration - slide 16.  

		DR. FISHER: Oh, how will they be recruited, and who is vulnerable, and
what kind of remuneration, because that really speaks to the
vulnerability.

		I guess I don't really care, one way or the other.

		VOICE: Well, the only - if we included it even under risks and
benefits, it still may be - I think it's important to include that they
are not - but isn't that still kind of leading them to think about it as
a benefit, if it's under risk benefit?

		VOICE: Right, that's what Sue and I are saying, we need to get it out
of there.  We need to get it off the benefit list, and on to either
subject selection - number four is subject selection, number eight is
respect.

		VOICE: I would defer to Gary.  I don't think - I can see arguments for
either four or eight.  The big thing is to get it out of five.

		VOICE: Those of us who actually do human subjects research are
familiar with the requirements of our local IRBs, and they are usually
not structured anywhere remotely like this.

		What they normally do is, they consist of many many more discrete
sections with very, very specific questions that the IRB wants you to
answer.  So it isn't questions about, does your protocol have a stopping
rule or a medical management plan, but how are you going to monitor
patient safety, something very specific that they want you to have a
specific answer to.

		So I think then instead of using a manual framework, what's done here,
this is a reasonable starting point, don't get me wrong.  But the manual
framework was developed as a way to summarize all the different
guidelines that have been developed over the last 50 years.  It's a
philosophical document.  It's meant to capture the big normative
commitments and research.  It is not meant to serve as a guide for
preparing a protocol.

		I mean if you wanted to develop such a comparable thing here, I think
the starting point would really be to start with the IRBs, and see what
different institutions are using with regard to these questions.

		So for me, when I thought about it in those terms, I think there are
lots of things that aren't on this checklist right now that probably do
need to be on there.

		One that comes to mind has to do with who is going to be obtaining
informed consent?  How is that person being trained to answer questions
about the study?

		Again, all sorts of things like that really come to the surface for me
when you think about it in those different terms.

		VOICE: I may be getting confused, but I think there are two kinds of
documents that are going out here.  One is advice to submitters about
how to prepare protocols, what has to be there.  This is for EPA's use,
right?  This is what John and Fuentes (phonetic) and all are going to go
through, to summarize their conclusions, it's their initial review that
we will be guided by, or let's say, will stimulate our discussion here,
and it's quite a different thing.

		And of course if the submitters had this in front of them, then they
might structure their protocol a certain way.  But they are two entirely
independent things, and I don't think one is going to impact the other.

		VOICE: I don't think they are independent.  Under procedures,
regarding submission and review of the proposals for new research, point
two says, organization of submission.  This is directed to those who
will be submitting, and it says that a section of the protocol should
address all the topics identified in appendix B.

		So I think it is meant as an attempt to guide the submitters, so
additional light on it would be appropriate.

		DR. FISHER: I think Steve's point, if I understand it, is a little
different.  I think that where - one of the things we're doing is where
- first of all this follows what our guidance was and our criteria that
we established.

		I think Steve was making an argument that what an IRB requires we're
going to be reading from the IRB report, and what they submitted to the
IRB.  So your question - and this is really what we and EPA have found
are the questions that keep coming up even though we have the IRB
information.

		So this really is in addition to help them - to help registrants
clarify the kinds of questions that we have been asking.  So in that
sense I think we need to keep separate the kinds of questions that are
IRB, and those that have been keeping coming up in terms of our being
able to assess the benefits.  These are the questions that aren't always
asked, or at least explicit, in the IRB materials that we have and we
keep asking that.

		But by the way, we all agree with you about the amendment.  Sue and
then -

		VOICE: Just one small point, too. Does everyone know that Emanuel
himself thinks that this framework is flawed?  He actually has a follow
up paper in which he describes additional principles that he thinks are
important supplements to -

		DR. FISHER: But I just would say on John's behalf, he's really - he
keeps saying he's still doing it from the Emanuel model, but this has
really moved in the direction that we have discussed with him.

		VOICE: I thought I made it pretty clear this morning that we have gone
a long way from referring to this as the Emanuel framework.  And I never
used the word, Emanuel, in any of my presentation materials, let the
record show.

		DR. FISHER: Right, right, yes.  And so I think for those of us who
have seen the transformation we are incredibly impressed, as we always
are with John.

		VOICE: Let me point something out.  This is a very good slide to
illustrate it.  Most of the questions that were in the scientific
criteria list from your report of the last meeting, like these, are
framed as yes-no questions.  And that's all very well when you are doing
the sort of assessment that Dr. Chambers just ran down when she was
doing her summary on EMD-004. 

		But for purposes of - for the purpose that I described earlier where
we are trying to use this as a way of assembling and collecting the
information relative to our assessment, these questions worded in this
way are not very helpful.  And certainly as a template for how we want
the investigators to write their protocol, this is a lousy idea.

		We don't want them to say, 8(a), yes; 8(b), yes.  That would not be
useful information.  

		What we want is to frame this, how will information be managed, or how
in concrete detail will information be managed to ensure subject
privacy?

		There might be a subordinate question that says, if you are going to
use an over-the-counter pregnancy test, how are you going to handle the
results, and sharing them with the subjects, or calling names in the
room saying, the following six people are pregnant; please leave.

		How are you going to do this?  So there is a lot of room for
refinement.  My observation with respect to Dr. Sharp's comment is that,
yeah, these are more philosophical than some of the questions asked on
IRB applications, but at the same time these broader questions are not
often asked on IRB applications, and what my experience has been that
when I frame this broader question, then I rummage through what I've
got, and I may point to different places where different parts of the
answer are located.  But there is a requirement to do some assembly
above the level of just aggregation that is typical of an IRB
application.

		DR. FISHER: I'm a little unclear about where we're going right now. 
It seems to me we were given this.  And then it seems as if you're
saying that the way these questions are phrased may not be helpful, that
you'd rather phrase them in the how-to.  So I'm just trying to get from
you, John, what you want from us?  Was this supposed to be a final copy?

		MR. CARLEY: First off, it's not final.  It's a discussion copy.

		DR. FISHER: Okay.  Because I agree with what you just said.

		MR. CARLEY: I'm kind of joining in taking pot shots at the draft here.
 Let the record show that Rich Fenske wrote these questions.

		(Laughter)

		VOICE: Let the record show that framework seven, all the questions are
not yes-no.  You just chose eight to make me look bad.

		DR. FISHER: Okay, okay.  What I want to do is have Sue, Mike, Steve,
and Sue talk as they haven't had any - even though we're
still on the same topic.  And I want to come around and try to focus. 
So if this is about focus, let's talk about focus.

		So who's next?  Mike?  Sue, you don't want it?  Okay.

		VOICE: Under the framework, the instructions specifically say, respond
with a quotation or paraphrase from the submitted material along with a
specific page reference, which is what Mr. Carley said.  So it's not
just a yes-no.

		Number two, it's already been stated and I want to emphasize, the
information that is submitted by the applicant includes all the IRB
material from whatever IRBs they had to go to, singly or multiply, and
they have to respond not only to these questions but also to the parts
of appendix A and C from the law that specifically apply.

		So overall, I think this is a great effort.  I think we
may quibble with some of the categories and the order of the categories,
but I don't want to get into that.

		One of the areas that we never covered in our criteria that I would
like to see covered, the chair mentioned an area today, was, it'd be
nice if we know what in fact EPA - what questions EPA has and whether
it's a potential problem.  In the same way, every time you see a
proposal, a good proposal, in most of the reviews I do, you see a
section that talks about limitations.  It talks about potential problems
and how they resolve it.  And I would like to actually see that in the
submission as well as in EPA's review, the DER or WOE, whichever you
prefer, because it's critical.

		We know that you have these reservations.  We want to know as
discussed by some of the applicants in public commentary, we want to
know what they think their limitations are and how they intend to deal
with it or have dealt with it.  In the same way we had a lovely
discussion of how the insect repellant protocols were approved by
interchange, and the new thoughts about possible issues and problems. 
So I think that has to be a part of it as well.

		The other part, you in the charge specifically ask about specific
kinds of studies, and the fact that for instance one of our criteria, or
part of our framework, had to do with field versus lab.  Well, I think
there needs to be some guidance about specific kinds of studies
somewhere in here, maybe ancillary to it.  So they've gone through the
whole thing.  And then, where there are specific aspects of the study -
like doing both field and lab - they should talk about those aspects of
the study.

		I don't know that we know all of those.  You may have a better feel
for the kinds of submissions you've had, and what different guidance for
various types of issues.

		I think that is sort of ancillary to the primary thing.  First you
want to have general guidance for all research.  And then potentially as
this will be a living document potentially, at some point you will want
to have specific guidance for different kinds of research, but you don't
want to hold up the process or the notion that they have to answer these
- give information on these general questions primarily.

		So I in general think that this would be a phenomenal help to you guys
as well as us, and help the applicants tremendously as well.

		DR. FISHER: Let me - we're going to have Steve, then Sue Fitzpatrick,
Gary, then Ken.  But I just want to list what I've heard so far.  And I
also think it's an extraordinary document.  I found it really easy to
read for the Carroll study.  And I really - so I think these
modifications are just an improvement.

		So but I thought it was really good.

		So so far I've heard the risk-benefit analysis should be separated
from the risk, is that what you said basically?  Risk minimization.

		Remuneration shouldn't be within the benefit.  I think it was
suggested it be in section four.

		John Carley had a suggestion that the questions be rephrased for a
how-will rather than something that is just simply a yes-no.

		Limitations of the study - and once again, that's not a negative,
because we are all used to writing limitations of studies, which is your
greatest opportunity to say I know, but this is why it has to be this
way.  So it's always a positive which we always, many of us do all the
time in our studies.

		And then guidance on specific types of studies.  And I guess guidance
here is really what guidance has already been expected by EPA.  For
example, once the insect repellants are out, or the guidelines, things
like that.  So I agree that would be helpful.

		So Steve, so let's keep listing things in the way we're going to help
them. 

		VOICE: Maybe I should just subside, because I'm still somewhat
confused.  It seems to me that we're trying to get at sort of two things
almost at the same time.  One is the guidance to proponents of what
questions need to be addressed.  And I think we've got a great list of
them, but I do agree that they should be rewritten.

		Right now they're written in a way as if we were checking off answers.
 John was going through the protocol and sort of summarizing answers, at
least many of them but not all.

		So I think this should be looked at again in terms of telling the
proponents exactly what we want them to provide.  And certainly there
should be no yes-or-no questions; they should all be in the form of, tell
us what you are doing on this point.

		If this - and what I'm unclear about is the extent to which this
framework for submission, organizing submissions, or at least guidance
for organizing submissions, also translates into how you are proposing
to present your scientific and ethical reviews to the board.  And they
may be similar, they might even be almost - they might even be
identical.  But I suspect they are - they could be different.

		So I think here is where I am.  I may be a little confused.  I think
perhaps an entirely separate although related framework, if you don't
intend to strictly follow this now, answering these questions or
assessing how the proponent has addressed these points, that would be
one good way to carry us through your review of the protocols.

		But if you don't intend to do that, I think you'd be free to do it in
another way.  But in any event your framework will be different from -
the precise wording and terminology that you use will obviously be
different from the registrants.

		MR. CARLEY: Shall I try to clear up some of that?

		I think that at some sort of level of an ideal we would like to be
able to use the same structure as a guide to submitters and as a tool in
our review and as a way of presenting things to you, and then in turn,
you could use it as a tool in your review.

		I would submit that at least in the course of our review, assuming
somebody sent us something that was an attempt to respond to this draft,
as we expect to see in the next few days, it will be interesting to see
how it worked.  I would think in preparing our reviews we would at the
very least write new topic summaries, in the shaded blocks at the top of
each section.  And then we would make our independent judgment about the
applicable standards, and whether they were satisfied, and what
discrepancies or deficiencies we found, and how much weight we gave to
them, and those sorts of things that you've seen in our previous
reviews.

		And then similarly, at the next stage, where we've done our review,
and we were going to present it to you, you would make different
summaries, different perhaps saying that some of the things that we
noted weren't important, or adding other things to the list or whatever.
 The need for some sort of convenient annotated directory to a
big pile of stuff like this is going to apply to every step along this
review process.

		And the more complex the study the more important it is to have a
handy tool.  And if the same tool will work for all the players, then
nobody has to convert it as we move from one stage to the next in the
process.

		The labor component, both from Jack Carroll's perspective and from
ours, in getting from where we were at the June meeting to where we were
with these in mid-September is huge.  And what we are trying to do here
is just make that whole thing not have to be done over and over again as
the framework shifts.

		DR. FISHER: This is a great start, what you did.

		Suzanne.

		DR. FITZPATRICK: So I'm a little confused.  You are proposing that the
sponsor or investigator fills out this form?  Or this is the form that
you are going to fill out once you get the materials?

		MR. CARLEY: We would like them to take the first crack at it, and then
we are going to do something similar, but with independent conclusions,
and perhaps different conclusions.

		DR. FITZPATRICK: I mean I think you're right in having them do it,
because I'm a little concerned that you've got very short timeframes
once you get this review.  And my suggestion is, why aren't you having
them submit this electronically?  Then you can just add your comments to
what they have already filled in.  If they fill out the form in
reference to that part that's in the protocols, then you can just add
your comments to it after that, and save a lot of time on your part.

		Because basically, and I don't know if you have 75 calendar days or
working days.

		MR. CARLEY: No, we were talking about calendar days.

		DR. FITZPATRICK:   Calendar days, so you have 25 days from when it
gets to the board, only 50 calendar days to do one of these things. 
That's pretty short.

		MR. CARLEY: Well, we have noted that that's not a very long time. 
What we've been getting, and what we would hope to get routinely, and we
need to add something to the PR notice about this as well, is the
specification of the form of submission.

		If we get PDFs generated from word processing files for example, we
can select and edit and diddle and plug in perfectly well.

		If we get scanned PDFs from handwritten IRB documents, as we did in a
lot of this stuff, that's a different matter.  But those are basically
read only documents anyway.  All of the formatted stuff that we got from
Carroll we got in the form of PDFs generated from word processing.  So
it was perfectly straightforward to select and copy or diddle, you know,
edit in whatever way we needed to, yet we still had a read-only PDF to
preserve the original submission.

		And that's probably the way we'll handle that as something simpler
than agreeing on another kind of protocol for submission, which could
take years.

		DR. FITZPATRICK: But your notice only asks for it in triplicate,
you're not asking for an electronic submission.

		MR. CARLEY: Yes.

		DR. FISHER: All right, and let's try to keep this to the
recommendations.  I think we're getting too much into the how-to.  So I
think your recommendation was really good.  The details of how to we can
leave it up to John unless there is a question of clarification that he
has.

		I think it's Gary and then Ken and then Lois.

		DR. CHADWICK:   To follow up on Suzie's suggestion, if you make this a
form that is filled out by PIs, will that require OMB clearance?

		MR. CARLEY: Yes, it would.

		DR. CHADWICK: Whereas if you do it yourself it wouldn't?

		MR. CARLEY: That's correct.

		DR. CHADWICK: Just checking.  Things have changed since I left the
government.

		A couple of concrete suggestions.  On slide 18, items D and E, I
suggest to you are not really informed consent, but might more readily
fit into what you call respect for subjects on eight or even four.  But
I would suggest you move those, because although they are tangential to
informed consent, they really don't fit in with the others you have
there.

		And in line with Jerry's comment on slide 20, which is the respect for
subjects slide, I think C is actually risk minimization, and that should
go there.  And that might serve as the basis for building a separate
slide for risk minimization.

		The last question, I've got a major question, probably tackle this
tomorrow at about 11:25, but the last question I have concretewise on
this form is on slide 17 C, is the IRB registered with OHRP.  Any IRB
can register with OHRP.  What is it that you are looking for there?  I
mean there's registration.  There's assurance with OHRP, and there is
also accreditation.

		Is there some level of quality or qualification that you are going
after?

		MR. CARLEY: That's the general idea.  We've worded that question
various ways in various generations of the framework.  And we don't want
to answer questions that require us to do extensive or time-consuming
research.  What we want to do is something that we can do a quick look
up on an OHRP website.  This is kind of the minimum question.  We look
to see if OHRP has ever heard of these folks, and then if they say that
they have also got an assurance of this scope, or they worked with these
institutions, I would pull that other information.

		DR. CHADWICK: I would suggest to you that registration with OHRP is no
indication of quality or anything other than maybe name recognition.

		MR. CARLEY: If you can suggest something else that would be a more
appropriate way of trying to determine whether this is a legitimate
outfit, a specific screening question, we would welcome it.

		DR. CHADWICK: I would suggest leaving it in I suppose in the meantime,
but you might expand it to, are you registered, do you have an
assurance, are you accredited.

		DR. FISHER: Okay, Kannan?

		DR. KRISHNAN: Question and a couple of comments.  As others said the
framework nicely conveys the expectations of our science and ethics
analysis.

		In terms of the study design, the slides 12 and 13, if I understand
correctly you are going to have a question on QA/QC?

		MR. CARLEY: Yes?

		DR. KRISHNAN: Because that one is missing right now.

		MR. CARLEY: Yes, that's where it would go.

		DR. KRISHNAN: That should go.  And then in the same category as let's
say slide twelve when we talk about the dose, I would put in one or two
additional questions, the first one being, how will the dose be
measured?  Because oftentimes as we look at some of the dermal studies
and so on, that is always a question as to what was the amount delivered
and so forth.  

		So just giving the concentration of the - or the exposure level as we
have in C - I mean G, 3G or slide twelve, where it says what is the
basis of the choice of the dose/exposure level?  I think that is not
alone sufficient, because often we are asking the question of how is the
dose being measured or quantified.  They have to have some idea of
what was actually provided to the individuals.

		The other question still related to the dose, either can stay here or
in risk minimization, it would be, how does the dose compare to the
known NOELs or safe levels.  I think that is a critical one. 

		If that comes about in the discussion of choice of dose that would be
nice, but it doesn't always come out as we look at some of the protocols
we have seen.  We have to raise a question clearly, so I wouldn't mind
asking the question separately.  How does that compare with the known
NOELs or safe levels.  I think that's important, so that they get an idea
of the margin of safety.

		One other question that I have for you is in section five, the first
question.  It says qualitative risks, maybe slide 15.  I was more
looking for something related to irreversible serious effects.  So I
don't know if it's a qualitative risk, as used here, is routine
terminology that's used in ethics evaluation, or would it be better to
ask a question of the nature of the risk, I mean something in
parenthesis that relates to irreversible and serious, or that's not what
you had in mind in this question?

		MR. CARLEY: What I was trying to do with the wording here was simply
separate the nature of the risk from the probability of its occurrence. 
And then with that in mind, the question of whether the risk was
something reversible, or something of a serious consequence, would be an
attribute of the nature of the risk; it would belong, in my expectation,
in 5A.

		We got substances here that have some remote probability to increase
serious risks, and I've been trying to make sure that those things don't
get mingled so it's clear.

		DR. FISHER: I think Lois has a comment, and then I want to ask a
general question and move towards a conclusion.

		But Lois?

		DR. LEHMAN-McKEEMAN: I have a couple of specific and then one general.

		In section three part - of appendix B, basically study design, the
last G, it asks what is the basis for the choice of dose level exposure.

		I think the thing that's missing from that particular section is that
this board has gone on record and advised against single dose level
studies.  So I think we should actively include after G something to the
tune of, if the study involves a single dose level what is the rationale
and justification for only using one dose level.

		That was the element that I thought might be missing.

		And I'll say that I'm kind of on the same page as Steve is in that -
and maybe one of the things that was done here is that in the text
you've provided, and I don't have the page number, but it's under
procedures regarding submission and review of proposals for new
research.  Point two describes the organization of the submission.  And
bullet one says, the protocol should address all of the topics
identified in appendix B.  That's very clear.

		What might be useful there is to add another active statement
somewhere in this document to indicate that EPA's review will in fact be
based on the completeness and comprehensiveness of the response to all
of the issues in Appendix B.

		That makes it very clear what you guys are going to do with the data. 
Where I come out in being a little confused with ultimately how we used
this is, our priorities, we're not going to see these studies unless you
have decided that they do meet a minimum criteria of quality for us to
see, as I understand it.  If you decide that they're deficient we won't
see them.

		So we can use this as a starting point, but in reality I think we're
going to be more focused on, were there limitations that we see.  So
it's useful for us, but ultimately it's not going to be the definitive
criteria that we use.  Because we're not going to see this.  We're only
going to see this when it's passed your muster.     

		DR. FISHER:  One of Richard's concerns was whether or not having that
kind of a protocol will be - will it unintentionally make the
registrants think that this is all they have to do.  In other words if
they answer every single question.  And I think it's a little similar -
I think Lois took it beyond that, and then Lois - the way I would frame
Lois' point is, will they think if they've answered all those questions
the HSRB is going to approve it.

		Now, we already know that that's not the - that's never necessarily
the case.  Because every study we see is something that you've brought
to us, that you have thought about in a positive way, and the  HSRB does
or does not agree with that in some way.

		So I guess the caution is, how do we frame this in such a way that
we're not losing information, number one, which I think is Richard's
concern, and number two, that we're not misinforming those who are
submitting that in fact if they answer those questions and you submit it
to the HSRB that that means there's going to be a positive approval.

		John and then Mike.

		VOICE: Quick response on G - the registrants are accustomed to the
idea that we don't necessarily accept everything they send us.  And I
don't think any of them suspect that the HSRB approves all the protocols
that they submit.  So I don't think there is a real big risk there of
misinforming.

		In terms of the loss of information, we're going to have the complete
documents.  This is still a sort of annotated index of where to find the
stuff - but it will never substitute for the whole thing, and both we
and you will be reviewing the whole thing.

		DR. FISHER: Lois, did you - okay, Mike and then -

		DR. LEBOWITZ: Yes, I was going to answer your question, too.  It's
just like when we do a grant proposal, and then we do an IRB form.  And
this is not the IRB form, but this is the annotation that in this case
is for EPA, maybe us.  But certainly EPA needs it to do the science and
ethics reviews - so we need to have them try to address these questions
to both EPA and possibly us, and have EPA put it in a framework for us
so that we have a fairly good working document to go from, because not
all of us are going to worry about - like I did find - basically the DOE
on hexavalent chromium - and so that helps us as well as helps them.  

		DR. FISHER: Kannan. 

		DR. KRISHNAN: I presume you will include the title of this appendix
called framework for protocol assessment in the list of documentation
required, like in appendix A or appendix B for the new protocols, or the
review of human research, in the list of the documentation I presume you
will be adding this one, framework for -

		MR. CARLEY: I don't think we'd expected to, because appendices A and
C, there's nothing in there that isn't a direct quotation from the rule,
saying this is what the rule requires.  It's not saying here is the
table of contents for your entire submission.

		DR. FISHER: Let me just ask impressions.  I have around 12 comments
that were made - even though there was a lot of discussion, I think it's
around 12.  

		It's not clear to me what is our - is this something we need to
discuss more tomorrow morning?  Is there a direction we want to have EPA
- are we saying EPA should go into?  I'm not - I didn't see this as that
critical of the document, and I may be getting that wrong. 

		Is this just fine-tuning - the general sense being that what was
presented here was good and acceptable, taking into account these issues
we raised to try to improve it? 

		So that's where we are.  We're not saying, John, go back, rewrite the
whole thing.  We're saying, John, you did a great job, and since we
can't let anything go, this is what we think will improve it.

		So just then to summarize - yes, briefly.

		VOICE: I wouldn't quite go that far - not the document's a bad
document or anything like that.  But I have - concerns as well,
reservations about the fact that once this is made public, this may be
seen as the essential item that we'd go and start with applications. 
And since you haven't really built up a lot of experience in doing these
types of things so far, if we put that out there now, it's going to seem
a little early.

		VOICE: Gary.

		VOICE: And, again, I think that's - that argued against Susie's
suggestion - I think if we put this out, say this is what EPA is going
to use the index and the - to make it very clear that this is not
something necessarily that the investigator should fill in, whatever,
but it does give them the information that says this is what EPA is
going to look for in and amongst your materials.  And I think that
- I think from that standpoint I'm fairly comfortable with EPA - you
know, they have been doing it.  They have been using various versions of
this, and I'm comfortable with them continuing.

		DR. FISHER: And I think where we're moving - so basically one of the
solutions is, at this point in time at least, to have the PI fill it in
so that they know what they have to write - that they need to provide
all of that information - at least for this time around, and we would
see how it works.

		And perhaps a clear statement about the fact that this is not all
that's required but this is helpful in terms of identifying some salient
information that will be helpful in moving the protocol along.  So I
think that might get at your concern.

		So basically our points are: a separate section on risk minimization,
making sure the remuneration is in section four, shifting some
of those questions to how rather than a yes/no, stating limitations of
the study, providing guidance when it exists, or a website that they can
go to.  And I think you might want to consider whether it's important to
look - to refer them if they wish to go to the HSRB sites because we
certainly have different criteria that we talked about, too, which might
be helpful but shouldn't be part of your document.

		We're now thinking there might be some risk having it electronically
filled out by the PI, but at least at this time we'll see how it works.

		Slide eighteen items G and E, respect for subjects.  I wrote that
down, but I think Gary said something.  Need to get - move to the
respect for subjects, what are you looking for with respect to OHRP
registration - that registration really is not a big deal, you might
want to ask whether or not they have an assurance by OHRP and whether or
not they're accredited and by whom are they accredited.

		Study design, I think - I mean Kannan said a lot of things that - but
I think that has something to do with concentration and dose that needs
to be in there.  Also I think he also pointed out that when information
on NOEL and LOEL are available and would help with the - making a better
justification for whatever dose is selected, that should be included. 
Also that in the study design, the basis of the choice of the dose level
should be in there.

		I think Lois said something to the extent that if you're only going to
do a single dose level study, beware that he grades that better than -
whatever that is.  But basically they should, you know, need to look at
- we had a lot of criteria and comments about that, so I think - just
put that in better.  And that we just want to make sure that this will
be a test in some sense, and hopefully it works really well and is very
educative, and maybe we'll have protocols that we might recommend or -
sufficient because people don't have to go back to the drawing board,
that they understand where EPA and the board are coming from.  So is
that where we are?

		VOICE: If the registrants are necessarily turning in a complete
protocol and all the IRB materials and all the traditional materials, I
don't understand what the harm is - an informed approach from the
protocol.

		VOICE: Well, it's going to - a lot of time and frustration.

		VOICE: I think that was the whole point of the exercise, to save time
for the PI, save time for EPA, and they were - and if we don't do that,
we get nowhere.

		VOICE: These are the basic questions we keep asking.

		VOICE: Okay, so - yes, Susan.

		VOICE: Well, so if you don't require it they just keep guessing -

		VOICE: If we don't require them to provide this information, we can
inform them that these are the concerns that are going to be addressed
in EPA's review, and we can advise them to be sure that they've
addressed them all and to tell us where to find their -

		VOICE: That's exactly what I think we should do.

		DR. FISHER: Okay, yes, Suzanne?

		DR. FITZPATRICK: Are we going to make any comments in the PR notice
tomorrow or today?

		DR. FISHER: Well, I'm asking you.  I think we should end today, but we
can make - you mean the beginning part?

		DR. FITZPATRICK: Yes, the beginning part.

		DR. FISHER: Why don't we save time, it won't be a long time tomorrow,
unless there is one - does anybody -

		DR. FITZPATRICK: I only have a couple here.

		DR. FISHER: Because we also have to ask for public comment on this
tomorrow.  So we'll reserve maybe a half hour or an hour at the most,
and then we are going to do CBI.

		Then, I was talking to Paula, I don't think it's necessary that we do
the conflict of interest because it was an administrative meeting.

		VOICE: And you feel it's not needed at this time.

		DR. FISHER: I don't think so, unless anybody does, because we were
talking about the information he sent out to you.  It seemed pretty
clear.  And so -- remember he sent a form out to be filled out.  So I'm
not sure at this point --

		VOICE:  Right, this is for the board members' recollection.  Several
weeks ago, I sent out a note to everyone to update your financial
disclosure form.  I know all of you have done that and it addresses the
reason why it is being done.  So that really was part of the driver of
having this administrative meeting.  I think that issue helped address
Dr. Fisher's questions.

		Just to add on -- can I make a few more remarks?

		DR. FISHER:  Sure.

		VOICE:  Okay, Dr. Fisher mentioned we are going to have a public
comment period tomorrow morning.  We'll begin on the PR notice.  It was
in the Register this afternoon but we want to again provide the public
an opportunity to make any remarks.

		The CBI discussion will be earlier so we are going to probably end
earlier tomorrow.  So I think it is hard for us to gauge what exact time
but you might want to be thinking about possibly moving up your flights
if an earlier flight is available by a few hours in that respect.  So
plan accordingly for that.

		DR. FISHER:  Yes.  And we could guess about it tonight, I guess, in
terms of what time.  But in case anybody really wants to change their
flight, I think, you know, one would be very realistic that we would be
done because we are only going to do really the CBI.

		PARTICIPANT:  (Inaudible.)

		DR. FISHER:  Well, that's -- we can do that, sure.  Yes, why don't we
do that.  If we go over, we'll be a little hungry but -- okay.

		The other thing is take your name badges with you.  They don't have
enough of these so they are going to let us in with name badges.  There
is no shuttle tomorrow.  So we will all walk across with our baggage. 
And be aware -- oh, also, it could take a long time getting through, you
know, because they have to check everything.

		Hotel check out is at noon.  So we might want to do that -- maybe take
a break at ten or something or something like that.  Or else we'll just
check out when we -- I'll check out in the morning but I know some
people sometimes want to go back.  But let's -- and allow plenty of time
to get through.

		Now the next thing is for those who are going to dinner, which is a
lot of us, do you want to meet in the hotel here?  Or in the lobby at
the other hotel?  Or are we even -- it is six o'clock.  We have a half
an hour.  It's right across.  We're eating at the Hyatt at six-thirty.

		Meet at the Hyatt?  Yes.  Okay, very good.  Thank you all.  We're
going to meet -- oh, I don't know.  Somebody make a decision.

		(Whereupon, the above-entitled meeting was concluded.)

	NEAL R. GROSS

	COURT REPORTERS AND TRANSCRIBERS

	1323 RHODE ISLAND AVE., N.W.

(202) 234-4433	WASHINGTON, D.C.  20005-3701	www.nealrgross.com
