U.S. ENVIRONMENTAL PROTECTION AGENCY

+ + + + +

HUMAN STUDIES REVIEW BOARD (HSRB)

+ + + + +

PUBLIC MEETING

+ + + + +

THURSDAY,

OCTOBER 25, 2007

+ + + + +

		The meeting convened at 8:00 a.m. in the Conference Center at One
Potomac Yard, 2777 South Crystal Drive, Arlington, Virginia, Celia B.
Fisher, PhD, Chair, presiding.


HSRB MEMBERS PRESENT:

CELIA B. FISHER, PhD, Chair

STEPHEN BRIMIJOIN, PhD, Vice Chair

GARY L. CHADWICK, PharmD, MPH, CIP, Member

JANICE CHAMBERS, PhD, DABT, Member

SUSAN S. FISH, PharmD, MPH, Member

SUZANNE C. FITZPATRICK, PhD, DABT, Member

DALLAS E. JOHNSON, PhD, Member

KYUNGMANN KIM, PhD, CCRP, Member

KANNAN KRISHNAN, PhD, Member

MICHAEL D. LEBOWITZ, PhD, Member		

LOIS D. LEHMAN-MCKEEMAN, PhD, Member		

JERRY A. MENIKOFF, MD, Member

REBECCA PARKIN, PhD, MPH, Member

SEAN M. PHILPOTT, PhD, Member

RICHARD B. SHARP, PhD, Member	

HSRB STAFF:

PAUL I. LEWIS, PhD, Designated Federal Officer

HSRB CONSULTANTS:

RAJ GUPTA, PhD, Director, Research Plans and Programs, Walter Reed Army
Medical Center Medical Research and Materiel Command

STEVE SCHOFIELD, PhD, Department of National Defense, Canadian Forces
Health Services Group, HQ Ottawa, Force Health Protection, Communicable
Disease Control Program

DANIEL STRICKMAN, PhD, USDA, ARS, National Program Leader, Program 04:
Veterinary, Medical, and Urban Entomology

EPA STAFF:

JOHN CARLEY, Office of Pesticide Programs

CLARA FUENTES, PhD, Office of Pesticide Programs

WILLIAM JORDAN, Office of Pesticide Programs

KEVIN SWEENEY, Office of Pesticide Programs

PUBLIC COMMENT:

SCOTT CARROLL, PhD, Director, Carroll-Loye Biological Research

KEITH KENNEDY, PhD, Senior Consultant, Science Strategies

THOMAS G. OSIMITZ, PhD, President, Science Strategies

	TABLE OF CONTENTS

Science Issues in Mosquito Repellent Efficacy Field Research

	Introduction, Celia Fisher	4

	EPA Presentation, Mr. William Jordan	5

Board Discussion

	Charge Question 1	22

	Charge Question 2	96

	Charge Question 3	123

Board Discussion	138

Public Comments	156

Completed Field Efficacy Studies by Carroll-Loye Biological Research:
SCI-001 and WPC-001 

Clara Fuentes, Mr. Kevin Sweeney and Mr. John Carley 	186

Public Comment

	Dr. Scott Carroll	253

Board Discussion	279

WPC-001 	366


SPC-002

	John Carley	446

Scientific design

	Kevin Sweeney	447

Ethics Assessment

	John Carley	454

Discussion	459

	P-R-O-C-E-E-D-I-N-G-S

	8:02 a.m.

		CHAIR FISHER: We are going to begin.  Bill, did you have anything that
you wanted to say?

		MR. JORDAN:  No thank you.

		CHAIR FISHER:  Okay.  So we are going to look at some of the science
issues in mosquito repellent efficacy.  I thought maybe what we would do
-- we have three consultants.  Also the working group had a discussion
conference call with them and they have been working very hard and have
sent us some materials.  I thought maybe you could just introduce
yourselves.  

		Dan, do you want to start?  Don't forget to put your mics on so that
we can hear you.

		DR. STRICKMAN:  I'm Dan Strickman.  I'm National Program Leader for
Medical, Veterinary, and Urban Entomology for USDA Agricultural Research
Service.

		DR. SCHOFIELD:  Steve Schofield.  I'm actually with National Defense
in Canada.  I'm their Senior Advisor in Public Health Entomology.

		COL. GUPTA:  I'm Colonel Raj Gupta, the Science Director for Walter
Reed Army Institute of Research.

		CHAIR FISHER:  Thank you very much.  Okay.  So, Bill, do you want to
begin the presentation?

		MR. JORDAN:  Thank you, Dr. Fisher.  I thought it would be useful to
provide a little bit of background information on the science issue in
mosquito repellent efficacy.  Primarily for the benefit of the new
members of the board but much of it will be familiar for those of you
who have been through the mosquito battles with us before.

		In addition, EPA has some thoughts about the charge questions that
were developed with the working group of the board. We also want to
identify some protocol specific issues that we see will be coming up.  I
just draw your attention to those issues and set them apart really from
the broader questions that you have posed for the consultants.

		By way of framing the discussion the pesticide law, Federal
Insecticide, Fungicide and Rodenticide Act, or FIFRA as we say in
Washington, requires EPA to register mosquito repellent products as
pesticides.  We at EPA permit these repellent products to have on the
labeling claims describing the duration of protection that the user is
likely to get from mosquitoes and other insect pests.

		When a company decides to make such a claim on its label, EPA requires
data from field studies to support the duration of its claim, duration
of protection.  We have testing requirements or guidelines that describe
how to do the required tests.  

		There is flexibility under the guidelines for an individual
investigator or researcher to make adjustments to deal with the specific
circumstances of the research.

		Generally what the guidelines specify is that the studies should be
conducted at two ecologically distinct sites in the field and the
studies should measure protection time, which is our term for the
interval between the application of the test material to the subjects,
the participants, to the occurrence of an event that we treat as an
indication of the failure of efficacy.  

		Until the West Nile Virus became fairly widespread as a public health
concern, EPA would accept studies that measured repellency which
involved comparing the numbers of mosquitoes that would land or bite
treated subjects versus landing and biting on untreated controls, but
because of public health concerns now we only accept studies that
measure protection time.

		In terms of the event that indicates efficacy failure, the guidelines
specify that there has to be a confirmed mosquito bite meaning that
there is a bite and then within a fixed period of time, usually a half
an hour, there is a subsequent bite on the same subject confirming that
the first one was not an aberration or an unusual event.

		We have recently in response to discussions here in the board moved to
landings by mosquitoes as the metric for evaluating efficacy failure. 
Then when the data are collected we review them and calculate the
protection time and the label claims have to be consistent with what the
data show.

		These studies meet the definition of research with human subjects as
well as the definition that we discussed yesterday about research
involving intentional exposure of a subject so they require HSRB review
both at the protocol stage and at the stage where the completed data are
submitted to us.

		We have reviewed with you several protocols and completed studies.  As
you know, at the last meeting in June the HSRB saw for the first time a
protocol from a laboratory that uses a different method from the one
that was favored by the submitter, the investigator who submitted the
first set of protocols that the board looked at. 

		Working with the board I am trying to follow up on your interest in
the methodological issues.  We developed several science questions
relating to how these studies are performed and I think they will be
very useful both in terms of hearing the consultants' reports and in
terms of understanding the review of particular protocols.  

		We at EPA, as you know, have committed to revising our guidelines and
we think that the information will be more broadly useful for us as we
continue to work on that project as well.

		We have a series of charge questions.  I have put them into the
presentation here.  I'll say with regard to charge question No. 1 that
we see a lot of variability in the data in terms of the time intervals
between the first and subsequent landings in mosquito field trials,
mosquito repellency tests.  

		We have not had the time to analyze it systematically across studies
or to quantify that, but looking at the next slide, we present here
the data from the two studies conducted by Dr. Carroll, SCI-001 and
WPC-001.  As you probably are aware, there were four different
repellents tested in SCI-001 and a fifth repellent tested in WPC-001. 

 		This table arrays the unconfirmed landings, confirmed landings, and
confirming landings.  There were 10 subjects at each of two sites so
there were a total of 20 subjects in each --

		CHAIR FISHER:  Excuse me, Bill.  Could you just clarify?  Is this the
study we're talking about today or one we talked about before?

	MR. JORDAN:  These are data from the studies that will be discussed
later today.

		CHAIR FISHER:  Okay.

		MR. JORDAN:  This is not an attempt to get you into the discussion of
the studies but simply to illustrate the kind of variability that we see
across --

		CHAIR FISHER:  Right.  I do want to point out just for the -- I think
we wanted to make -- I think this is going to be helpful to frame one of
the reasons we are asking these questions, but I do want to point out
that we, as a board, should not be talking about the specifics.  

		The purpose of this morning's discussion is to talk about general
issues.  It's very helpful, especially for new members, to frame it with
respect to the type of studies we see.  Once we have discussions I don't
want us talking about this particular study until we are going to be
reviewing this study.  Thank you.

		MR. JORDAN:  Thank you.  So there were 20 subjects, or 20 subject
reports for each of the five different materials.  In the case of all
five materials all of the subjects had confirmed landings.  That is why
the second row of confirmed landings shows the results of 20.

		Now, interestingly enough there was considerable variability.  There
were as few as seven unconfirmed landings and as many as 14 which means
that a mosquito landed on a subject's arm but in the subsequent
observation periods there was no confirming landing by a different
mosquito since once a mosquito lands it is aspirated and removed from
the ambient population.  It happened seven times for three of the test
materials and 14 times for Doornon. 

		After a mosquito lands, then the investigators monitor subsequent
landings, and if there is a subsequent landing within the specified time
period, a half an hour, it is treated as a confirming landing.  You will
see that there were a number of confirming landings.  

		Intuitively you would expect there to have been 20 because once a
mosquito lands during that prescribed time period you would expect that
to be the end of the study.  In fact, that is the way the investigators
do it, but the numbers there are larger than 20 because it appears that
once the efficacy of the product fails, the mosquitoes start to land
fairly greedily to begin to try to draw their blood meal.  

		Within the short period of time when the first confirming landing
occurs, there are sometimes two or three on a subject.  That is why there
are 28 or 22 or 29 or 27 confirming landings.  Those are reported in the
data.  Then you see the total number of landings varies fairly
significantly there.
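The tallying Mr. Jordan describes -- unconfirmed landings, confirmed landings, and confirming landings per subject -- can be sketched as follows.  This is an illustration only, not EPA's or the investigators' actual analysis; the per-subject landing times and the function name are hypothetical, and a 30-minute confirmation window is assumed from the discussion above.

```python
# Illustrative sketch of the landing tallies described above.  Each
# subject's landings are recorded as times in minutes from application.
# A landing is "confirmed" when a later landing on the same subject falls
# within the 30-minute window; those later landings count as "confirming",
# and observation of that subject stops at the first confirmed landing.

CONFIRM_WINDOW = 30.0  # minutes

def tally_landings(subject_landings):
    """Return (unconfirmed, confirmed, confirming) counts across subjects."""
    unconfirmed = confirmed = confirming = 0
    for times in subject_landings:
        times = sorted(times)
        for i, t in enumerate(times):
            followers = [u for u in times[i + 1:] if u - t <= CONFIRM_WINDOW]
            if followers:
                confirmed += 1            # first landing of a confirmed pair
                confirming += len(followers)
                break                     # study ends for this subject
            unconfirmed += 1              # no confirming landing followed
    return unconfirmed, confirmed, confirming
```

For example, a subject with landings at 0, 60, 70, and 75 minutes would contribute one unconfirmed landing (at 0 minutes), one confirmed landing (at 60 minutes), and two confirming landings, matching the pattern of multiple confirming landings once efficacy fails.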

		So, in our view, we think there are many different factors which may
affect the variability in terms of initial landings and confirming
landings: the characteristics of the mosquito populations -- these biotic
factors relate to the genus and distribution of the mosquitoes at the
test site -- as well as the level of ambient pest pressure, the number of
critters that are there to land and bite.

		Test sites also have characteristics that seem to us to be
contributing in part to the variability: the season of the year, the
time of day, the brightness of the sunshine, the temperature, the
humidity, the wind speed, and the wind direction.  All of these factors
can affect the potential for mosquitoes to land on a test subject.

		Subjects themselves can differ in terms of attractiveness to
mosquitoes.  We have seen that from our anecdotal experience and it is
reinforced by observations in the field and the laboratories.  Some of
those differences between subjects may be attributable to the use of
alcohol, tobacco, or scented products.  

		Investigators have attempted to control for those in some instances by
prohibiting the use of these materials that might either increase or
decrease attractiveness to mosquitoes.  Then there could be differences
in behavior.  All of those characteristics could play into it.

		Then finally we identified different characteristics of test methods,
the pattern and duration of exposure, the issue that initially caught
the attention of the board when you notice that one investigator exposed
subjects for a different period of time and different frequencies.  

		Investigators also differ in the amount of skin that they treat and
expose to mosquitoes.  There are differences in determining the amount
of test materials.  Some investigators use a standard amount and others
use an amount derived from a dosimetry study.  

		Finally, some investigators will apply to a single subject multiple
treatments concurrently.  That may also have an effect on the exposure
and the attraction of the insects.  The second charge question focuses
on one of the methodological issues that --

		CHAIR FISHER:  Bill, I'm just wondering maybe it might be helpful
instead of going through all the charge questions because in some sense
we have consultants here who are filling -- who will fill in in terms of
the kinds of issues that you are raising.  

		Perhaps we should start with the first charge questions and then you
can introduce the second and then the third with EPA's perspective.  I'm
just not sure -- they are complicated questions so I'm not sure if it's
helpful.

		MR. JORDAN:  Okay.  That would be fine.		  

		CHAIR FISHER:  Okay.  So basically we have the first charge question. 
I want to remind everybody that we do not -- we are not going to
reference protocols that we have in front of us right now.  

		That is not the purpose but we can reference protocols that we may
have seen in the past but not ones that we are going to be judging in
the future.  So we are going to go to the first charge question that
Bill has laid out for us.  We can get back to it.  Our lead discussant
is Col. Gupta.  Could you begin for us?

		What I also wanted to do for those board members who are new and
others that may not have been in some of our calls, these questions were
jointly initiated by OPP and the board.  

		The purpose of this discussion is not to make recommendations for
guidance or guidelines for EPA but rather for us to be better informed
when we are looking at protocols in terms of how to determine the
appropriate methodology, the selection of sites, the variability, the
sample size.  This is really an educational forum for us so that we can
better evaluate studies that come before us.

		COL. GUPTA:  Can I have my slides up, please?  

		DR. LEWIS:  Dr. Gupta, before you begin, just in terms of process to
help everyone on the board and the public and the agency.  As Dr. Fisher
mentioned, we have consultants.  As I mentioned yesterday morning,
consultants are brought to the board to help the board in terms of
grappling with issues as advisers.  They are not part of deliberation by
the board members but they assist the board in terms of that work.  

		In terms of the process we are going to have this morning, each
consultant was assigned a question and will be providing a brief
background in terms of their response, taking into account the responses
of the other two consultants.  That's what we will be hearing this
morning.  Then from there we will have an overall discussion with the
board.

		CHAIR FISHER:  Thank you, Paul.  I do have one question, Bill. 
In terms of the guidelines does FIFRA currently require the confirmed
bite after 30 minutes or some guideline?

		MR. CARLEY:  FIFRA is the law.  The law is silent with respect to what
the matters should be.  They leave that to our discretion.  The next
step down from the law is data requirements which are regulatory and
they don't specify exactly how required testing is to be conducted. 
Then we get down to the level of guidelines which are advisory and not
mandatory. 

		In the guidelines we refer to different methods and in the most recent
draft of the guideline, which is the one we reviewed a year ago, we
talked about time to first confirmed bite.  We didn't put landings into
that guideline.  For the past year we have been looking at studies where
landings were the target event measured rather than bites.

		CHAIR FISHER:  But my question is where is a confirmed bite, the 30
minutes required?

		MR. CARLEY:  It's defined in the guideline.

 		CHAIR FISHER:  In the guideline.  Okay.

		MR. CARLEY:  Remember that the confirmed bite is the first one that
occurs within the 30-minute period.  The second one is the confirming
bite so the time that we're talking about is the time to the first of a
pair of bites that occurred within a 30-minute period.

		CHAIR FISHER:  So the 30 minutes is required or is suggested in the
guideline?

		MR. CARLEY:  It's the way we define confirmation in that context.

		CHAIR FISHER:  Got it.  Okay.  

		DR. BRIMIJOIN:  Can we get the way you defined it that it's not
recommended?

		MR. CARLEY:  We recommended testing to first confirmed bite.

		DR. BRIMIJOIN:  Okay.

		MR. CARLEY:  We defined confirmed bite as one that was confirmed
within this period.  We have historically interpreted 30 minutes a
little loosely.  If you think about these two different intermittent
exposure patterns that we've seen, executed perfectly on time, in the
case of the Carroll-Loye protocols, the one minute in 15, we would treat
a second event within 31 minutes as confirming.

		With the five minutes in 30 exposure pattern proposed by ICR in the
protocol you looked at last time, we would treat two events within 35
minutes, within two successive periods, as meeting the definition.  It's
not rigid at 30 minutes.
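The stretched windows described here (31 minutes for a one-minute-in-15 exposure pattern, 35 minutes for a five-minute-in-30 pattern) can be expressed as a simple rule.  This is an illustrative sketch only, and the function name is hypothetical: with intermittent exposure, a confirming event may fall anywhere in the exposure period that begins at the nominal 30-minute mark, stretching the window by one exposure period.

```python
# Illustrative sketch of the "loose" 30-minute confirmation rule above.
# effective_window gives the longest gap (in minutes) between two events
# that would still be treated as confirming, assuming perfect timing:
# the nominal window plus the length of one exposure period.

def effective_window(exposure_min, nominal_min=30):
    """Longest gap (minutes) between two events still treated as confirming."""
    return nominal_min + exposure_min

# One minute of exposure every 15 minutes (Carroll-Loye pattern): 31 minutes.
# Five minutes of exposure every 30 minutes (ICR pattern): 35 minutes.
```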

		CHAIR FISHER:  And just also to kind of clarify what our knowledge
base would like to be: given that EPA is going to collect this data --
and we are not making recommendations that they shouldn't collect, or
require, data after 30 minutes or 35 minutes -- we also have in some
sense flexibility to understand and interpret what that first confirmed
bite means vis-a-vis the 30-minute confirming bite.  That is something
that we want more information about.

		Public comments?  Is Thomas here?  Keith Kennedy?  Okay.  We're off
schedule?  Okay.  If anybody sees them walk in, let me know.  We'll come
back to them.

		Dr. Gupta.

		COL. GUPTA:  Good morning.  Today what I will be concentrating on is
some of the basic science which is going to deal with factors affecting
mosquito landing, biting activity in a field environment.  Before I go
into getting to the specific questions, what I would like to do is try
to get an understanding of what is a repellent.  

		The widely accepted definition, which was proposed by Dethier in the
1940s, is that a repellent is a chemical that causes the insect to
make an oriented movement away from the source.  In this case it would
be a mosquito flying away from a human before it bites.

		The flip side is the way we are measuring the efficacy is that if a
mosquito already landed on the skin, the repellent has already failed
because according to the definition the behavior we are measuring is no
longer effective.

		The repellent could be a couple of different types.  One of the most
widely used types in the commercial market is the vapor repellent.  The
current examples are formulations of DEET or picaridin (KBR 3023)
repellents, where we are measuring the efficacy based on their vapor
action.  When a repellent is effective, it repels insects before they
land or bite.

	Then another type of repellent is the contact repellent.  In this
case the insect has to land on the surface before it figures out this
is not the right substrate to bite, or the repellent causes irritancy
and it flies away.

		Coming to the question of what data show that there is variability
in the time intervals between first and subsequent landings.  In
this case the literature indicates a lot of factors play a part in the
first and second landing.  Some of these factors are listed and I will
try to summarize them.  As I go through my presentation I am going to
address these factors as they also relate to other questions.

		Basically what I want to say here is that the time intervals between
first and subsequent bites or landings are quite variable in
nature and depend on a number of factors.  Normally in a field
situation if you go in and you walk into an area, if the mosquitoes are
there, they will bite you in the first five to 15 minutes.  They will
find the holes, they will land and they will bite.

		If they haven't discovered you in the first five to 15 minutes, you
become part of the environment and you are just there and the mosquito
may happen to run into you accidentally.

		Can I go back, please?  Temperature plays a major role.  Skin does
play a real role, but over and over in studies, in which we looked at
about 1,500 different observations and different volunteers, we really
didn't find a whole lot of significant difference among the population
or the different skin colors or skin types.

		The temperature plays a role.  We have found that with increasing
temperature the repellency goes down.  Age of the mosquitoes plays a
role but the age of the volunteers does not.  Hair on the skin has some
role in it.  The other major factor is the density of the mosquito
population.

		Studies have shown that mosquito populations -- okay.  Now coming to
the second, I would also like to point out that in the field of
repellents there is no agreement among any of the researchers.  

		If you have two researchers in the same room, I don't think they will
agree on what type of population you need to work with.  There is a lack
of coordination or agreement among researchers and the folks in private
industry who do the efficacy studies.  It varies with density.

		Studies have shown that mosquito populations of different densities
do not provide equal estimates of protection time, especially when the
test method depends upon a fixed endpoint such as the first bite or the
second bite, because what we are looking for is a very fixed point. 
What we are observing is the behavior rather than a predicted
endpoint.

		CHAIR FISHER:  Excuse me.  Are you public commenting?  Okay.  Excuse
me.  We started a little early or we got to what we were doing a little
early so after the presentation by Dr. Gupta we will ask you to make
some comments.  Do they have the slides that you provided and the slides
that Dr. Gupta is providing, Paul?

		MR. JORDAN:  They do not.

		CHAIR FISHER:  Okay.  Could somebody get them because they weren't
here for your presentation.  If you feel you are not prepared yet, we
can go on with the discussion and then when you are prepared for your
public comments, that would be fine.  Okay.  Very good.  Thank you.

		Let me ask you one more thing.  We have three different sets of
questions that we are addressing that I assume you are familiar with. 
So my question to you --

		COL. GUPTA:  Go to the next slide.

		CHAIR FISHER:  My question to you -- Dr. Gupta, please turn off your
mic.  My question to you is are you addressing them specifically?  No? 
Okay.  So then we have time.  Very good.  Thank you.

		Okay.  I'm sorry, Dr. Gupta.  Please continue.

		COL. GUPTA:  We were looking at -- I think I addressed the variables
for density and susceptibility.  Pretty much most of the population is
susceptible to mosquito bites.  The difference is whether the person or
volunteers, they have a developed immunity or they react to mosquito
bites or not.  Most of the human population is susceptible.  The only
difference is whether they show a reaction to a mosquito bite or not.

		Age in the mosquito population makes a difference.  This means if the
mosquitoes are too young they may not bite and if the mosquitoes are too
old they may not bite as readily as the ideal age.  The ideal age for
mosquitoes to bite is somewhere between five to 15 days old.

		The test sites play a big role, and so does the selection of the test
site.  In our studies the selection of a test site is critical. 
Weather, terrain, and fauna/flora may play a major role in the outcome
of the study.  Looking at the weather factors, one of the major factors
that weather conditions play quite a role in is the temperature.

		Studies have indicated that there is a decrease of repellency of
almost eight minutes for every one degree centigrade rise in ambient
temperature.  If you put it in the human perspective, the temperature
of the human skin is about 30 to 32 degrees and that is maintained
regardless of the ambient conditions you are in.  The temperature
of the skin is pretty much considered a constant factor, but the
variable factor is the environment or ambient temperature where you are
doing the studies.
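The temperature relationship cited above (roughly eight minutes of protection lost per one degree centigrade of ambient warming) can be sketched as a simple linear adjustment.  This is illustrative only; the function name and the baseline figures in the example are hypothetical, not values from any study discussed here.

```python
# Illustrative sketch of the cited temperature effect: roughly an
# 8-minute drop in protection time per 1 degree C rise in ambient
# temperature, relative to some baseline measurement.

def adjusted_protection_time(baseline_min, baseline_temp_c, ambient_temp_c,
                             minutes_per_degree=8.0):
    """Predicted protection time (minutes), floored at zero."""
    loss = minutes_per_degree * (ambient_temp_c - baseline_temp_c)
    return max(0.0, baseline_min - loss)

# E.g. a repellent measured at 240 minutes of protection at 25 C would be
# predicted to give about 200 minutes at 30 C under this linear rule.
```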

		Wind plays a role for vapor repellents because it removes the vapors
of the repellent quite regularly, so the repellent has to continuously
emit vapors, and the faster the wind, the prediction is that the
repellent will last less time.  

		Humidity may play a role in that it may interfere with the
evaporation rate, which is what we are trying to measure as the
repellency of the repellents.  Light doesn't seem to play a role because
there are mosquitoes that can bite in the daytime as well as nighttime.

		Fauna/flora plays a role in that if you time your studies at the same
time as the local fauna/flora is out, mosquitoes have preferred hosts
and they would rather go to that rather than the human volunteers
walking into a test site.

		Next slide, please.  Coming to the question of the test subjects, as
Mr. Jordan has already alluded to, attractancy is quite variable.  It
does play a role.  Skin type may or may not play a significant role. 
In the data which we analyzed at our laboratories over a lifetime, based
on about 1,500 different data points, we really didn't find any
significant difference among the skin types of different people.

		Skin chemistry, yes.  It does play a major role and may have an
impact, but it's not going to have as much of an impact on the active
ingredient of the repellent.  The main impact is on the delivery
mechanism used to place the repellent on the skin.

		The skin temperature of the test subject, I already addressed that. 
It is a constant factor so it does not play as much of a role, in my
opinion, as the ambient temperature.  Based on the ambient temperature
you could actually greatly reduce the efficacy of a repellent.

		Skin permeability also depends on the formulation and the delivery
mechanism used.  The repellent will be absorbed anywhere between nine to
over 50 percent.  The delivery mechanism of the formulation can reduce a
lot of the skin absorption or permeability into the skin.  Most of the
time if anything is absorbed into the skin it is taken up by the blood
and most of that is excreted out of the human body within 30 to 48
hours.

		Next slide.  Test methods.  This is a really interesting and complex
issue, as I think you have found out in your proceedings.  It depends on
selecting the right experimental design to get the right outcome of your
studies, so that it reflects what you are trying to observe or what you
are trying to get at the end of the study.

		There are two commonly accepted ways of doing this experiment.  One is
a continuous sampling and another is a time sampling.  Continuous
sampling is where you have volunteers in the field and you report the
bites for the duration of the time.  Time sampling is where volunteers
go into an area.  They sit down and record or observe bites for a
fixed period of time and record all the bites including the control
subjects and the treatment subjects.  Then they walk out.  After a
certain fixed time they go back in and they go back out.  The advantage
is that you are not reducing the population in the test site and it
gives you a little more consistent results compared to if you are
continuously in the site.

		Now, going back to the experimental design, one of the factors which
plays a role, as Mr. Jordan also mentioned in his presentation, is the
exposed skin surface -- what part of the skin on the human body you are
exposing.  In this case the way we can reduce the variability is by
having a subject with an untreated surface to compare, because the
mosquito bites or landings are hard to predict.  

		What you are doing is observing behavior and you've got to have some
relative point of reference to compare.  That would give you a
realistic comparison.  In any other scientific study you have a fixed
point; in this case, the mosquito biting or landing behavior is very
unpredictable.  The best measure is to compare the candidate treatment
to the control.

		Coming to your last question, we cannot predict the initial or
confirming bite.  The reason is, when you're looking at the first bite
or the confirming bite, you are looking at the two extremes of the
population and it could be called catastrophic.  What you are not
measuring is the majority of the population response.

		In my briefing I read through the comments from Dr. Strickman and from
Dr. Schofield and I tried to incorporate them.  I think the difference
in the comments is based on some of the comments that Dr. Strickman is
going to present later on.

		That completes my presentation if you have any comments or questions.

		CHAIR FISHER:  Yes.  Dallas.

		DR. JOHNSON:  I have one question.  You referred to control and I
wasn't quite sure whether you meant by control that there was no
repellent at all administered or whether control could be a standard
repellent being administered.

		COL. GUPTA:  Okay.  That's a very good question.  The control could be
a couple of different ways.  Control is a person with no repellent on
the surface or nothing on the skin.  Control could also be a person or a
subject with a delivery mechanism on the skin, just without a repellent.
 In this repellent formulation, you have an active ingredient in the
rest of the delivery mechanism so it could be the two.  

		When you are talking about a standard repellent, that is used as a
measure to see if your repellents are working.  When I say control, it's
either a negative control that has no repellent on it or it could have
inactive ingredients on the skin.

		CHAIR FISHER:  Sue.

		DR. FISH:  Dr. Gupta, if I heard you correctly at the very end of your
presentation I think I heard you say that the best measure is to have
control and treated areas on the same subject, on the same volunteer. 
Did I hear you correctly?

		COL. GUPTA:  I think what I meant to say there is you have to have
control of the time you are doing the treatments also because that is
the only way you can observe if the mosquito population is even biting
or not.  If you don't have a control, your repellent could be good for
as long as there is no biting population.  During a continuous sampling
or the first bite or the second bite, you cannot determine that.

		DR. FISH:  If I may ask a follow-up.  So when you are saying a
control, do you mean a control subject or a control area on the same
subject?

		COL. GUPTA:  Control subject.  It does not have to be the same area.

		DR. FISH:  Okay.

		COL. GUPTA:  What you want to have in an ideal study you want to have
two or three subject -- more than two subjects as control.  You want to
randomize your treatments over time among subjects and that addresses
your variability among the subjects and the treatments.

		CHAIR FISHER:  Mike.

		DR. LEBOWITZ:  I'm not sure whether our lead discussant will bring
these questions up.  Part of my confusion is how much of the knowledge
we have is determined from the lab versus field studies.

		Maybe I should reverse that and say how much of the lack of knowledge
is because there haven't been the lab studies to actually test these
different variables.  Again, if you are raising any of these questions,
tell me to wait.

		COL. GUPTA:  May I address that question?

		DR. LEBOWITZ:  Yes.

		COL. GUPTA:  Most of my comments during this presentation concentrated
on the field studies.  We have done field studies for almost the last 25
years.  We have done studies in different parts of the world, on about
five different continents, using volunteers and sample sizes from 10
volunteers all the way up to groups of 150 or more.

		I can address some of your specific questions, but a well-designed
study always gives you the outcomes at the endpoint.  If it is a good
repellent and the study is designed well, it will give you a very good
indication of how long your repellent is good for and what percent
protection there is.

		CHAIR FISHER:  Yes, Jan.

		DR. CHAMBERS:  Dr. Gupta, I kind of missed the significance of what
you said about skin permeability.  Could you repeat that, please?

		COL. GUPTA:  Yes.  The literature indicates that the skin
permeability, or absorption of repellents into the skin, varies anywhere
between 9 or 10 percent all the way up to 50 or 60 percent of the active
ingredient.  The way repellent formulations control that is through the
delivery mechanism.  In some delivery mechanisms they place the active
ingredient between a couple of layers of a polymer formulation, or with
microencapsulation they encapsulate the active ingredient.

		When these delivery mechanisms or matrices come in contact with the
ambient moisture of the air, they tend to dissipate or break down.  As
they break down they release the repellent, and in that way they control
the absorption into the skin.

		DR. CHAMBERS:  What you're saying is that really is formulation
dependent, not individual subject dependent.  Is that correct?

		COL. GUPTA:  That's correct.

		DR. CHAMBERS:  Another question I had, you said something about when
you walk into an environment the mosquitoes notice you and then after a
little while you become part of the environment and you are not noticed
anymore.  In these extended field studies like that does that mean in
the later time points that the mosquitoes won't find that person a novel
person and, therefore, be attracted to them or does that compromise
that?

		COL. GUPTA:  I think that is a very good question.  We also struggled
with it, and that is why, when we did multiple studies, we tried to look
at the behavior.

		There are really no accepted theories in the literature, but some of
our studies have indicated that when a mosquito comes in contact with
some sort of chemical vapors, it affects its sensory mechanisms.  Once
its sensory mechanism is overpowered, the mosquito goes away because it
is really no longer attracted to the host or, in this case, the subject.

		After that the mosquito is biting by chance, so it could bite anything
in that area.  When a mosquito runs into a human volunteer after that
time, it's by chance.

		DR. CHAMBERS:  So what is the range of chemo reception in mosquitoes
then or does that vary by the species of mosquito?

		COL. GUPTA:  This is an excellent question.  What happens is most
mosquitoes have to sense the repellent, and most repellents have a
limited range from the skin surface.  The repellency depends on the
concentration of the vapor, on its vapor pressure.

		In some of our laboratory studies we found that you have to have a
certain vapor pressure threshold to repel the mosquitoes.  If it falls
below that, they will come and bite, and it can even become an
attractant.  If it is above that threshold, then it's a repellent.

		The further you go from your skin, the more your vapors dilute, so you
have less effect.  The mosquitoes will come close to your skin, but
before landing they will fly away from the source.  Does that address
your question?

		DR. CHAMBERS:  Yes, I think so.  So are you saying -- am I
understanding you to say the mosquito has to be pretty close to the
person in order to either notice the person or be attracted or be
repelled?

		COL. GUPTA:  Yes, it's a combination of factors.  The mosquitoes are
attracted to a host based on what is commonly accepted: it may be due to
the emanations of your skin surface, or it may be because of the figures
they see -- they have associated that general figure with a host.

		CHAIR FISHER:  Can I just follow up on Jan's question?  I think you
said somewhere that after around 15 minutes, it's by chance whether they
will bite, because you are just something else that they'll land on.

		I guess I don't understand why they would bite because they wouldn't
suck a tree.  Right?  I'm just trying to understand what by chance means
in that you are not a person anymore.  You're just part of the
environment.  I'm just trying to get clear on that.

		COL. GUPTA:  What happens is, in our studies, when we looked at both
landings and biting, we found out there was too much variability in the
landings.  We were having a hard time repeating them.  We wanted an
experimental design which we could repeat in a different part of the
world.  Biting is a better measure because you can observe it better.

		We designed a collection method which even an untrained person can
use.  If the mosquitoes are feeding on the skin, the feeding lasts
somewhere between 60 and 90 seconds, and that time frame is enough to
collect a mosquito.

		The other thing, to answer your question: after 15 minutes, if you've
gotten bites in the first 15 minutes, the repellent has already failed,
so why expose the volunteers for a longer duration when you already have
your outcome -- the repellent works or it doesn't work.

		CHAIR FISHER:  Dr. Schofield.

		DR. SCHOFIELD:  I just wanted to add a comment.  I think one thing
that would be useful to understand is to get into the mind of a mosquito
a little bit in terms of how to find its host.  Okay?  

		Say you are a hungry mosquito.  What might you do?  You may be tens of
meters away or even hundreds of meters away from your host.  You might
start flying around because you are a little bit hungry.  It's
nondirectional, it's random.  They are kind of searching for something
that smells good.  

		You get within a few tens of meters of your host and you might start
picking up cues like carbon dioxide.  That may, for example, induce
orientation, directed movement, upwind anemotaxis.  Attraction is a very
imprecise word, so there are very specific behaviors that we are talking
about here.

		Taking it back to what we were just discussing, it could be, for
example, in the first couple minutes you are recruiting mosquitoes that
are pretty close to you that have this efficient mechanism of finding
you, this upwind anemotaxis.  

		After that you may have some kind of equilibrium wherein mosquitoes
find you continuously but they kind of bump around eventually smelling,
for example, your carbon dioxide a couple of tens of meters from you and
then finding you.  It's a complicated behavioral algorithm to actually
get to the point of finding you.

		CHAIR FISHER:  So as the time is extended, whether you had a repellent
or not, there is almost a natural course in which bites get reduced
because you become part of the environment.  Is that what you're saying?

		DR. SCHOFIELD:  Absolutely not.  What I'm saying is you probably have
a higher recruitment efficiency initially only because you are
collecting mosquitoes in your vicinity.  It's not to say you will not
continue to attract mosquitoes but that equilibrium may change a little
bit because now you are drawing things from a broader area but they are
kind of doing a random thing when they are way out there.  

		Directed orientation is probably only good for a couple of tens of
meters with mosquitoes.  Wind tears apart these cues very quickly, so
it's very challenging for a mosquito to find you.  It's not that you
fade into the background; they just have to sort of bump into the area
where they can detect you, and then they find you.

		CHAIR FISHER:  Steve and then Gary.

		COL. GUPTA:  A follow-up to Dr. Schofield.  Initially, when you walk
into the area, you're moving, you are rubbing against the brush and all
that.  Mosquitoes in the area sense those movements and they know
something that may be a good food source is coming in.

		DR. BRIMIJOIN:  This is a question either for Dr. Gupta or Dr.
Schofield or Dr. Strickman.  Can any of you gentlemen give us some sort
of quantitative sense, a mathematical curve that would describe, in your
opinion, what we would measure if we put a subject into a typical field
environment where there was a stable population of hungry mosquitoes, so
there are going to be bites.

		Say we put a repellent on the skin and were able to get enough
subjects and enough observations over time that we could get a
quantitative record of the, let's say, minute-by-minute probability of a
bite.  Just graph it over time from the moment the subject walked in
there.

		Let's maybe ignore the initial effect.  The initial effect you have
walked in and there are mosquitoes right next to you.  They know you're
there.  They are going to come right after you if you are unprotected. 
We ignore that and we get to this equilibrium condition.

		What would you imagine then would be the time course of this curve
over time as the repellent on your skin sat there slowly evaporating or
slowly penetrating and absorbed into your body?  Would you imagine a
kind of exponential approach to some final equilibrium level so it would
be low at first and then it would gradually approach some final value?

		Would you imagine, if you averaged this out, that it would be a
straight-line function, or a step function that holds for any given
repellent on the same individual and then suddenly fails?  What kind of
curve should we anticipate?

		COL. GUPTA:  Good question.  I would like to address that and then you
guys can jump in any time.

		CHAIR FISHER:  Could you put the mic closer?  Thank you.

		COL. GUPTA:  What happens is, if you think of toxicology, toxicity
means the mortality rate goes up as your concentration goes up.  In
repellent science the curve is upside down.  You have higher protection
at the beginning of the study.

		As the concentration goes down over time, the curve goes down with it,
because your repellent's efficacy, its effectiveness, is decreasing.

		DR. BRIMIJOIN:  Well, never mind the direction.  I mean, that's just
sort of mirror image.  What I'm really wondering is does it go down
steeply at first and then level out or go down slowly at first or not at
all and then suddenly we pass a threshold?  In other words, there is
really kind of a step function or is it a smooth curve?

		CHAIR FISHER:  Dr. Strickman.  I'll let you think about that for a
minute, Dr. Gupta.

		DR. STRICKMAN:  Well, first of all, the first part of your question
might clarify things.  The usual caveat, it's going to depend where you
are and what species and all.  I have done studies myself with
continuous collection all night for malaria vectors in Korea.  I think
we've all had similar experiences and they will be variable for
different people.  

		I think this specific case, where there is real data to back it up,
might help.  What you saw was that you start before they start biting
early in the evening, so the initial activation phase is not a factor.
Then you get a rise and then a decrease more or less through the night.
This is like 12 hours of continuous collections.

		As Dr. Gupta mentioned, there are so many variables and you can
quantify these variables which influence exactly how many are biting at
that time.  One is temperature.  If it's cold and it gets below a
certain threshold, then the mosquitoes tend to bite earlier in the
evening.  

		I can't tell you how they are able to know that it's going to get
cold.  Nonetheless, they will concentrate earlier in the evening.  The
whole population will, so that's one thing you will observe.

		If conditions are very conducive to biting, with this particular
species you will see a peak about 9:00 in the evening, three hours after
sunset, and then another peak just before dawn, and that's just sitting
there.  I mean, there is a blanket of mosquitoes out there hunting, as
Dr. Schofield mentioned, and finding hosts who are continuously
collecting.

		The other thing you'll see is that the phase of the moon will
influence -- the phase of the moon at that hour will influence biting so
the larger moon disc available at that hour will increase biting by that
particular species.  

		That is the sort of complication you will find in average periodicity,
which varies hour to hour, half hour to half hour.  It varies very
rapidly, but it's different on different nights and under different
conditions.  Okay.  I hope that is helpful background.

		Now, as far as the repellent goes, that also has been measured in a
few studies, many fewer studies than the kind of thing I'm talking
about.  You see two different things.  You see rapid failure within a
very short period of time.  Suddenly everything is biting that can be
biting it seems like.  

		Then you also see a diminution over a period of two or three hours
where you go through these phases that Dr. Gupta referred to that you
see in the laboratory.  You get 90 percent protection and then 80
percent and then 50 and like that.  I think that answers your question.
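		[As a purely illustrative aside, the two failure shapes described here
-- sudden failure versus a gradual diminution from 90 to 80 to 50
percent protection -- can be sketched as simple curves.  The functional
forms and parameters below are editorial assumptions for illustration,
not fitted to any data discussed at this meeting.]

```python
import math

def step_protection(t_hours, fail_time=4.0):
    """Sudden-failure pattern: near-complete protection, then abrupt breakdown."""
    return 100.0 if t_hours < fail_time else 0.0

def decay_protection(t_hours, half_life=2.5):
    """Gradual diminution: smooth exponential decline in percent protection."""
    return 100.0 * math.exp(-math.log(2) * t_hours / half_life)

# Compare the two hypothetical time courses over a six-hour observation:
for t in range(7):
    print(f"hour {t}: step={step_protection(t):5.1f}%  decay={decay_protection(t):5.1f}%")
```

		[Whether field data look more like the step or the decay curve is
exactly the question Dr. Brimijoin raises; both shapes are reported in
the discussion above.]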

		DR. BRIMIJOIN:  I guess so.  The reason I'm asking the question is I
think the board is trying to educate itself about what would be best
practices, what is the ideal sampling method.  In the interest of
protecting subjects and minimizing their exposure, we are looking at and
being presented with sampling models that are just like looking through
a keyhole at a larger scene.

		We are trying to come to some conclusions about which keyhole should
we look through and how big should it be but we don't have as yet a
sense of what the larger scene really does look like.  That's what I was
asking and you have helped me somewhat.  Thank you.

		CHAIR FISHER:  Thank you, Steve.

		Gary and then I'm going to turn to Jan because she was the second lead
discussant.  Did you know that, Jan?  Okay.  I think you probably
prepared some of the questions that some of us might be wanting to ask. 
I think we should proceed that way.

		Gary.

		DR. CHADWICK:  A question and then a follow-up I guess.  It's all
along the same line.  What I guess I'm wondering, and this is sort of
what Steve was saying, we are sort of looking for practical implications
on what we need to be looking at and what we as a board would consider
to be scientifically sound and also what our researchers should be doing
to present information to the EPA that is scientifically sound.

		I'm assuming, because of this five-to-15-minute thing that we have
been dancing around for a while, that a laboratory setting where you are
sticking your arm in a box has no real practical implications.

		If I'm understanding this correctly, in a field test, if you have a
subject or a control who is stationary, that will change the numbers
that you get versus if they are at least minimally mobile.  If they are
walking through the environment, as opposed to standing like a tree in
the environment, it would be different.

		Should this be a best practice?  I mean clearly you wouldn't want your
subject necessarily to be doing a lot of exercise and so forth but it
does mirror, I think, the use of the product because few of us go out
and -- few of us are out standing in our fields, I guess.  I do have a
follow-up.

		COL. GUPTA:  Very interesting question.  We were faced with the same
dilemma because what we wanted to see was whether this repellent would
protect the soldiers while they are deployed or doing their exercises
and all that.  We had designed such studies and we found that
logistically it becomes very difficult, and it's very hard to collect
mosquitoes when the other person is moving.

		We also found out that it is very hard for a person to collect the
mosquitoes off themselves.  We ended up designing studies where we
paired volunteers.  The way we have done it is we let the soldiers do
their regular activities and then brought them back into an area for an
observation time of five to 15 minutes.

		We standardized on 15 minutes because that is about where you get the
maximum number of bites.  Past 15 minutes there was not a significant
change or increase in the number of bites.

		Also, at the same time, we put an endpoint on our volunteer control
subjects: if a control subject receives 25 bites total, whether that
takes four minutes or five minutes, they close up and leave the area,
but the treatment subjects continue until the end of the 15 minutes.

		In our studies we always had populations where we got enough bites on
the control subjects that we were able to compare the rest.  Then we
replicated over different days because the behavior is so different from
day to day based on time, ambient temperature, light, and so on.

		I would like to address your question now.  Dr. Strickman has
mentioned that the biting activities of mosquitoes are different at
different times, so we actually ended up designing studies around what
we call the DL cycle.  It means we try to have our volunteers in the
field at the peak activity of the mosquito population, when they are
biting.  What we ended up doing was staggering our start times.

		It means you apply the repellent before doing the activity and then
they go into the area where there is biting activity.  Therefore, we
were able to capture the biting behavior over the 24-hour cycle rather
than just a snapshot at a point in time.
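		[A minimal sketch of the observation-window protocol Col. Gupta
describes -- a 15-minute collection window, with control subjects
released early once they reach 25 bites -- might look like the
following.  The function and its bite-time inputs are hypothetical
illustrations, not the actual protocol code.]

```python
def observe_subject(bite_times, window_min=15, is_control=False, max_control_bites=25):
    """Count bites during one collection window.

    bite_times: minutes from the start of the window at which bites occur.
    Control subjects stop early once they reach the bite cap; treated
    subjects are observed for the full window.
    Returns (bites_counted, minutes_observed).
    """
    bites = 0
    for t in sorted(bite_times):
        if t > window_min:
            break
        bites += 1
        if is_control and bites >= max_control_bites:
            return bites, t  # control subject leaves the area early
    return bites, window_min

# A control bitten every 12 seconds reaches the 25-bite cap at 5 minutes:
frequent = [i * 0.2 for i in range(1, 30)]
print(observe_subject(frequent, is_control=True))   # (25, 5.0)
print(observe_subject([3.0, 9.5]))                  # (2, 15)
```

		[The early-release rule is the ethical cap mentioned again later in
the discussion: once the control has 25 bites, the repellent's failure
threshold is already established for that interval.]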

		CHAIR FISHER:  Dr. Gupta, can I just say before Gary follows up, what
is the sample size that you usually use to determine this?

		COL. GUPTA:  The sample size on the test volunteers or the mosquito
population?

		CHAIR FISHER:  The subjects, the humans.

		COL. GUPTA:  Okay.  It depends on the number of treatments we are
looking at. Normally when we do the study we always have a control and
we used our standard repellent.  In addition to that we would have a
number of treatments.  

		For example, say we are looking at five different formulations.  The
total number of treatments will be seven: one control, one standard
repellent, and then five test repellents.  Now we have seven treatments.
Then we will do an experimental design, and based on that we will
determine that we need a minimum of so many volunteers.

		CHAIR FISHER:  I'm just wondering per condition on average, per
treatment compared to control on average for the kind of study you just
described to us what is an average sample size?

		COL. GUPTA:  Average sample size, about five.

		CHAIR FISHER:  Five per condition?

		COL. GUPTA:  Yes.

		CHAIR FISHER:  Okay.  Thank you.

		Gary, do you want to follow up?

		DR. CHADWICK:  I'll actually hold my follow-up question, but just to
verify: are you saying that it is, in fact, better if the subjects are
mobile during the testing, and that you have done some paired sampling
to get that, or are you saying that it's just impossible to do that as a
measurement, a scientific measurement?

		COL. GUPTA:  What I was trying to say is it's impossible to collect
mosquitoes while the subjects are moving.  When you are doing the
sampling they have to stop, and then they can do the collections on each
other.

		After that they can start moving again.  Also, we have found out that
if you are doing activities rather than just sitting around, your
repellent's efficacy does not last as long as if you were stationary.

		DR. CHADWICK:  That is actually what my follow-up question was.
Essentially what we are doing in these studies is measuring nonevents,
which in the drug studies I am more familiar with will not get you very
far as far as registration and so forth.

		It doesn't, in fact, from a scientific standpoint, push sponsors and
investigators, or however you want to say it, to design these to be more
rigorous because, in fact, it's better to do sloppier science because
the numbers come out longer.  That was my point.

		COL. GUPTA:  I think, as I suggested in my written comments, it may be
beneficial to conduct these studies at the end-user stage.  If we are
proposing that this be used by hikers, then some of those studies should
be done in a situation where people are using it for hiking.  If it's a
backyard barbecue, then studies should be done close to the environment
we are actually claiming the product will protect people in.

		CHAIR FISHER:  I think Suzanne had a question.

		DR. FITZPATRICK:  At the very beginning you had a list of factors that
affect whether a person is bitten or not -- age and a couple of others.
Are those factors enough to make sure that there are some similarities
between the controls and the experimental subjects?  For example, age.
Hopefully you get bitten less as you get older.  Is that enough of a
factor to make sure your controls aren't a lot younger?

		COL. GUPTA:  Very good question.  When I was talking about age I was
talking about the age of the mosquito, not the human subject.  The
mosquito --

		Yes, but I was listing a number of different factors which may impact
on the biting or active behavior.  Age of the human does not really make
a big difference for mosquitoes.  They are --

		CHAIR FISHER:  Excuse me.  Let me just clarify, because I think you
really did a good job of talking about the different variables.  I think
what Suzanne is asking is, with five subjects in the control versus the
treatment, how powerful are these individual differences?  Do any of
them determine the type of person you would recruit for your study?

		DR. FITZPATRICK:  Whether the controls and the experimental subjects
share some factors in general, so that the controls really are controls
for the experimental group.

		COL. GUPTA:  I think I also addressed that question in my comments: if
you are going to preselect the controls, you are biasing the study,
because you are selecting some people who are already prone to receive
more bites, and they are also trained, so they are going to have more
landings or collect more biting mosquitoes.

		When you are choosing your controls and treatments, there shouldn't be
any preselection; the treatments, including the control, should be
assigned at random to the population.

		If you are trying, for example, to do a study of seven people over
seven days, then scientifically you've got to make sure that your
assignments are random and that no treatment is repeated until everybody
has had every treatment.

		CHAIR FISHER:  So also I think what you're saying is the more
experienced the individual, the more mosquitoes they are going to be
able to detect as they land which means that if the control and
treatment group differ in terms of the sophistication of the individual,
then there is a bias toward control having more mosquitoes?

		COL. GUPTA:  Yes, Dr. Fisher.  I will give you a living example.  We
were in a field situation and we had about 110 volunteers.  One of the
volunteers was so quick that he could catch a mosquito about two inches
from your skin.

		CHAIR FISHER:  Kind of biased whether they were going to land.  So
before we turn to Jan, what I'm thinking about as chair just for the
board to contemplate is that I don't know if you remember but when we
were talking about dosing we set up different criteria not in terms of
what somebody should do but what they should tell us about in terms of
give a rationale for this, give a rationale for that.  

		One of the things I'm thinking about and one of the reasons I think
the work group wanted these questions were with, I think, at least six
parameters that Dr. Gupta was talking about, the activity level, the
sites, the variability of the mosquitoes, temperature, etc., as we are
talking, one of the things is are there some conclusions that we have
with respect to the type of information and rationales we would like to
see.  

		Not setting a criteria for those but this is information that we need
from investigators in order to make a good decision and whether or not
we can make a list of those.

		Dallas.

		DR. JOHNSON:  Can I ask one more question?  When you are designing
these experiments that you've been talking about, has your goal been to
try to determine whether repellents are significantly different or
determine whether repellents do the same effective job?

		COL. GUPTA:  Most of the studies which we designed, or I designed, we
were looking at the efficacy of different repellents to see how long
they will provide 90 percent or better protection and not looking at
whether they are providing similar protection or not.

		CHAIR FISHER:  Thank you.

		Jan.

		DR. CHAMBERS:  So my question, I guess, right now is the military is
not using complete protection time then as an endpoint.  Is that
correct?

		COL. GUPTA:  Yes, ma'am.  As far as I know, no, we are not using that
endpoint.  The reason we decided this a long time ago was, as I
mentioned earlier, we were having too much variability in the data and
we weren't able to repeat the results at another location.

		In some other experiments we like to see a repellent which is
efficacious in different parts of the world.  A mosquito is a mosquito. 
Whether you are in the states or somewhere else they are going to bite
the human host so we wanted to protect soldiers in this case.

		CHAIR FISHER:  I'm not sure I understand.  What is the alternative to
complete protection time?  How do you evaluate?  It's just more or less?
Is that it?

		COL. GUPTA:  No, ma'am.  What happens is we calculate percent
protection at a time interval.  For example, if you design a study for
12 hours, we are looking at 12-hour extended-duration protection.  We
say our sampling points are going to be at zero hours, two hours, four
hours, and so on at two-hour intervals.

		At each interval we look at the number of bites on the control and
then the number of bites on the different treatment subjects.  We always
have a relative protection based on the number of bites, so that way we
know at a given time in a given environment how much of the population
is going to be biting and, compared to that, how well your repellent is
working.
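		[The relative-protection calculation described here reduces to a
one-line formula: percent protection at an interval is 100 times one
minus the bites on the treated subject divided by the bites on the
control.  A sketch, with bite counts that are editorial assumptions
rather than study data:]

```python
def percent_protection(control_bites, treated_bites):
    """Relative protection at one sampling interval:
    100 * (1 - treated bites / control bites).
    Not informative when the control receives no bites (no biting pressure)."""
    if control_bites == 0:
        raise ValueError("no biting pressure on the control at this interval")
    return 100.0 * (1.0 - treated_bites / control_bites)

# Hypothetical counts sampled every two hours over a 12-hour trial:
control = [25, 25, 22, 20, 25, 18, 24]
treated = [0, 1, 2, 4, 8, 9, 15]
for hour, c, tr in zip(range(0, 14, 2), control, treated):
    print(f"hour {hour:2d}: {percent_protection(c, tr):5.1f}% protection")
```

		[Tying protection to the control's bite count at the same interval is
what makes the measure robust to night-to-night swings in biting
pressure, which is the point Col. Gupta makes above.]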

		CHAIR FISHER:  Thank you so much.

		DR. CHAMBERS:  So the protocols that you use in the military really
are having subjects bitten a whole lot more than the types of things we
are looking at here for EPA.  Is that correct?

		COL. GUPTA:  Yes and no.  In the studies I've designed where we put a
time frame, we are exposing the volunteers for 15 to 20 minutes.  At the
same time we are exposing the controls for the same amount of time, but
the caveat is that if the control subjects get all 25 mosquito bites in
four or five minutes, they leave the area because that is the upper
limit.  If we have gotten that many bites, the repellent isn't going to
work anyway.

		DR. CHAMBERS:  I also gather from your remarks -- and this harkens
back to my training as a field biologist -- that, as is typical of any
field study, things are different from day to day and place to place and
whatever.  How in this kind of situation can you really make any
conclusions with any confidence?

		COL. GUPTA:  You can design for it.  For example, in a lot of the
field studies we design, we look at the number of treatments and then
select the number of volunteers.  Then you decide how you are going to
analyze your data: are you going to use a randomized design, and how are
you going to do the analysis of variance?

		Based on that you pick the number of subjects to recruit.  Once you
have done that, you look at the minimum number of days you have to
repeat the study so you get statistically valid results.

		We have done studies where we have exposed volunteers for anywhere
from three to 12 days based on the number of volunteers, the number of
treatments, and how the population is behaving.  At the end, when we
look at 95 percent protection or higher, that number gives you
significant results, so you can distinguish whether this is significant
or insignificant.

		DR. CHAMBERS:  So I gather you are taking these same individuals out
on multiple occasions and testing the same repellent?

		COL. GUPTA:  The same repellent but different individuals.

		CHAIR FISHER:  Dr. Kim, you wanted to make a --

		DR. KIM:  I just want to make a general comment.  In any experimental
situation if you have a concern about the variable you will include that
as part of the design so if you have a concern about the site-to-site
variation you will include that as part of the design.  

		If you have a concern about the day effect, you will include that as
part of the design.  As we are going to see later, I mean, much of the
studies really ignore all of these factors so now we have all these
confoundings.  We cannot really interpret the data because sites are
changing, dates are changing, volunteers are being used multiple times
without any sort of consideration for the design of the experiment.  You
cannot interpret the data.

		CHAIR FISHER:  Sue, did you have a follow-up before Jan continues?

		DR. FISH:  No.  Jan can finish and then I'll --

		CHAIR FISHER:  Okay.  You're finished?  Okay.  Sue.

		DR. FISH:  Dr. Gupta, you said that you went through a formal sample
size calculation and came up on average with a number of five per
treatment.  I'm just wondering for my own edification if it would be
possible for you to later after this meeting to provide us with some of
the assumptions that went into it.  

		It's hard for me to think about how, with all the variables that are
potential confounders and all the inter-individual variability, the
sample size would come out to be five.

		I defer to my statistician colleagues, but as a clinical trialist it
just doesn't seem to me that the numbers would come out that way.  I
would have thought that the sample size would have to be huge even
controlling for the confounders in the analysis.

		CHAIR FISHER:  Can I just say something?  Is somebody addressing this
specifically for question No. 3?  I just want to know.

		DR. STRICKMAN:  No, not the human subject sample size.

		CHAIR FISHER:  Okay.  Thank you.

		COL. GUPTA:  I think we addressed this issue adequately in a
publication a few years ago.  When I was using the number five, that was
the number of days.  The question was asked how many times you have to
repeat it, so that was the number of days.  The minimum number of days
we have done a study is five.

		The number of subjects is totally dependent on the total number of
treatments you are testing.  If you have a true repellent, then you have
to have a control.  If you have a control with the delivery mechanism,
that is another treatment.

		If you have a standard repellent, that's another.  That way you have
two treatments that control for the delivery mechanism -- the control
and the standard -- so that is why there are different treatments.

		CHAIR FISHER:  I think we are asking what is the formula for
determining the number of subjects in each of those treatment arms.

		COL. GUPTA:  Okay.  That is all dependent on the experimental design.
If you have five treatments, you select a design for five treatments and
you are going to run it over five days, so you assign the treatments at
random and each volunteer gets a different treatment each day over the
five days.  That will be your minimum.

		CHAIR FISHER:  So you are doing a repeated -- wait a minute.  Excuse
me.  This is a repeated measure design?  Is that what you are saying?

		COL. GUPTA:  Yes.

		CHAIR FISHER:  Okay.  Maybe five, but they are randomized across the
designs over five days.

		COL. GUPTA:  Yes, one person, one treatment.  Whether it's a control
or a treatment it doesn't matter.

		DR. FISH:  If there are five different treatment arms, whether it's
control, active, negative control, placebo control, that kind of thing,
so it's a cross-over design and -- no?

		CHAIR FISHER:  It's a repeated measures design, which would mean a lower n.

		DR. FISH:  If there are five different treatment arms, you are going
to use five subjects over five days.

		COL. GUPTA:  Okay.  You can use five subjects or you can use multiples
of five because you can use 10 volunteers a day, or 15 or 20.
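The rotation Col. Gupta describes, five treatments arranged so that each volunteer receives a different treatment on each of five days, is essentially a Latin-square crossover. A minimal sketch, assuming a simple cyclic construction; the treatment names are illustrative, not taken from any actual protocol:

```python
def latin_square(treatments):
    """Cyclic Latin square: subject i's row is shifted by i, so every
    treatment appears exactly once per subject and once per day."""
    k = len(treatments)
    return [[treatments[(day + subj) % k] for day in range(k)]
            for subj in range(k)]

# Hypothetical five-arm schedule: test product, vehicle control,
# standard repellent, untreated control, and a second candidate.
schedule = latin_square(["test", "vehicle", "standard",
                         "untreated", "candidate B"])
for subj, row in enumerate(schedule):
    print(f"subject {subj + 1}: {row}")
```

With ten or fifteen volunteers, one would simply stack two or three such squares, which matches the "multiples of five" point above.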

		DR. FISH:  Right, but how do you determine whether to use five or 10
or 20 if there are five treatment arms?

		COL. GUPTA:  The number of days or the number of volunteers?  Okay.  The number of volunteers.  The reason is that we have five treatments, and the minimum you need is five different volunteers, because each volunteer can only have one treatment at a time.  We found that if you try to do more than one treatment on a person, they may interfere.

		DR. FISH:  How do you decide if you are going to use five volunteers
or 10 volunteers or 15 volunteers?

		COL. GUPTA:  Oh, okay.  When we go into the field we know what we are measuring.  For example, studies are designed so that our endpoint is whether a volunteer gets more than the maximum number of bites, which is 25.  

		That is our repeated measure, so we go to the statistician and we sit down with the statistician and say, okay, this is the number of bites, this is the number of treatments, and this is the endpoint we will have at the end of the design.  

		Based on the statistical design we choose the number of volunteers that we need: five volunteers, or 10 volunteers every day for five days.  Whether we need five or 50 volunteers is based on the efficacy, the power of the study, and what you are looking at in the end.

		CHAIR FISHER:  Okay.  So it seems to me, just to move ahead a little bit, that one of the inputs to the power analysis, in terms of how many subjects you need, is the design; obviously there are fewer subjects if it's a repeated-measures design.  But it also anticipates the number of bites, which has to be taken into account in the power analysis.

		Now, you hired a statistician, so you don't have that formula at hand.  I guess one of the questions is: are there formulas out there, and where might we find a reference to the formula your statisticians are using to determine the sample size?  We would like to see what that is, and to recommend that those who submit protocols to us have done a statistical analysis of power.

		Dr. Kim.

		DR. KIM:  Well, I mean, there is a standard formula for doing the calculations, but the important thing is that it really depends on what kind of endpoint you are measuring and the variability of the measurement.  It all depends entirely on what we call effect size, and we haven't seen that even mentioned in any of these studies.  Without it, the whole discussion of power in these protocols is nonsense.
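Dr. Kim's point, that the standard calculation hinges on the effect size, can be illustrated with the usual normal-approximation formula for comparing two group means. A minimal sketch; the effect sizes, alpha, and power values below are illustrative assumptions, not figures from any protocol discussed here:

```python
from math import ceil
from statistics import NormalDist

def two_group_sample_size(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means, normal
    approximation: n = 2 * ((z_{1-a/2} + z_{1-b}) / d) ** 2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A "medium" standardized effect (d = 0.5) needs ~63 subjects per group,
# while a very large effect (d = 2.0) needs only a handful, which is why
# the sample size cannot be judged without the assumed effect size.
print(two_group_sample_size(0.5))  # 63
print(two_group_sample_size(2.0))  # 4
```

The same logic applies to any endpoint: without a stated effect size and variance, no sample size can be checked.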

		COL. GUPTA:  Some of these software programs are quite readily available, and they tell you, if you are measuring this many factors, how many volunteers you need for the outcome you are looking at.

		CHAIR FISHER:  I think, at least from my perspective, one of the goals is to be able to identify that.  We all know those kinds of formulas; we have all used them, but we haven't seen them being used here.  One of the things I think we don't want to see is, "We have asked our statistician and they have told us this," because that doesn't really help us very much.  

		It's okay, Dr. Gupta.  It would be nice if somebody would give us this kind of reference for investigators, so that they would know we would like to see the analyses: how you selected your outcome measures and how you conducted the power analysis.  I think Suzanne --

		COL. GUPTA:  There is a publication we published a few years ago which addresses the sample size question.  I remember Mr. Sweeney discussing that very issue at one of the meetings here at EPA last year: what number of treatments you are doing, and, for a long duration, how many minimum or maximum volunteers.  Do you remember that, Kevin?

		MR. SWEENEY:  I mean, you published a paper with Dr. Rutledge a number of years ago, and we did reference it in the guideline.  It does discuss sample sizes and the variability at different sample sizes, the standard deviation at one, two, etc.  I don't remember all the formulas in the paper, but there is a published paper.

		CHAIR FISHER:  Okay.  Let me suggest that we get that paper and we move on.

		Suzanne, did you have a question?

		DR. FITZPATRICK:  Also, per site there must be a certain number of square feet each person must have.  If you get it too crowded, you are just going to automatically lower the number of bites.  I'm wondering, when you go out into a field of a certain size, do you have a limit on the number of people that can go out there and not interfere with the study?

		COL. GUPTA:  No.  In most of the studies that I have done we have really never limited the area.  We just wanted to make sure the people were separated enough that the vapor effects of the repellents don't interfere with each other.  The minimum distance we kept them apart was 10 feet.  

		After that, the number of volunteers, or how big the field is, really doesn't make a difference, because all we are measuring is the number of bites.  If the mosquito population is there, you will have the outcome of the study you are looking for.  So the size of the field really does not impact it, in my opinion.

		CHAIR FISHER:  Dr. Schofield.

		DR. SCHOFIELD:  I'm not sure I understand the question correctly, but what I can say is that for mosquitoes there is not a good understanding of the relationship between, for example, bait mass and the numbers coming.  It simply doesn't exist.

		I can tell you because I've worked on it.  For tsetse fly, for example, the number coming is a power function of the bait mass; that is, with increasing bait mass you get, relatively speaking, fewer coming per individual kilo.  That makes sense based on the way older --

		CHAIR FISHER:  Okay.  So what you are saying is that the larger the sample size, the fewer mosquitoes you get?

		DR. SCHOFIELD:  I say we do not know that but one might extend the
Tsetse work to say if you put 10 people beside each other in the field,
the number of mosquitoes per person will decrease.  

		Total number of mosquitoes coming will increase.  The nature of that
power function will depend on a variety of circumstances including the
beast in question, this being mosquitoes.
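Dr. Schofield's tsetse observation, arrivals scaling as a power function of bait mass with an exponent below one, can be sketched numerically. The coefficient and exponent here are made-up illustrative values, not fitted data:

```python
def arrivals(mass_kg, a=50.0, b=0.6):
    """Power-law attraction model: N = a * m**b.  With b < 1, total
    arrivals rise with bait mass, but arrivals per kilo decline."""
    return a * mass_kg ** b

# Total arrivals grow, per-kilo arrivals shrink, as bait mass increases.
for m in (1, 2, 5, 10):
    print(f"{m:2d} kg: {arrivals(m):6.1f} total, {arrivals(m) / m:5.1f} per kg")
```

This is the shape behind the remark that ten people side by side would each see fewer mosquitoes even though the total coming increases.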

		CHAIR FISHER:  Thank you.  Thank you.

		DR. FITZPATRICK:  So as a follow-up, if you had a subject that was
removing the mosquitoes themselves versus the subject that had a partner
that was helping them remove the mosquitoes, would that make a
difference in how many were bitten?  Would that second person kind of --
do you understand what I mean?

		DR. SCHOFIELD:  I understand exactly what you mean.  Would it make a difference?  There is not a lot of exceedingly good data out there.  What I can say is that it probably would affect the number of mosquitoes coming to the vicinity.  What happens after that probably depends on a number of factors, not least the short-range cues for what the mosquitoes are going to do.

		There is no question that the presence or absence of individuals with repellent on them clearly affects the number of mosquitoes biting an untreated control, for example, depending on proximity.  That data exists, and I think it's in ASTM this year.

		CHAIR FISHER:  Hold on.

		COL. GUPTA:  I just have a quick one.

		CHAIR FISHER:  Briefly, please.

		COL. GUPTA:  In our studies, when we paired the volunteers, they took turns: the first volunteer would collect mosquitoes from the other volunteer for the first 15 minutes, then they would switch places.  We didn't see any difference in the number of bites between the control subjects and the treatments.

		CHAIR FISHER:  Okay.  I have one more question.  I'm not sure if I got it right, but I think you said that a mosquito may just land on you randomly, and that this was an argument for counting actual bites rather than landings with intent to bite, because a mosquito may land on you without any intent to bite.  I think that is what you said.  Yes or no; I don't want to be on the wrong path.

		COL. GUPTA:  What I was saying is that when we were considering landing as one of the factors, our data was too variable to conclude anything from it.  We wanted a definite measure, and that is when we went to the bites.  

		A landing may or may not happen, whereas with a bite you can tell, yes, the mosquito bit, because you can see the blood in the gut of the mosquito.  When you are doing these types of studies with different people, you are counting on the volunteers to observe and collect the data.  

		The more variable factors you include, the more variable the data you are going to get, and we were trying to reduce variables in the study.  We wanted to make sure that they counted a mosquito bite only when they saw the blood in the gut; that was what counted.  A mosquito can land multiple times and still not bite.

		CHAIR FISHER:  Thank you.

		Okay.  Yes.

		DR. CHAMBERS:  One last question here.  The power calculations you
were talking about a little while ago for determining sample size.  That
really isn't relevant to this discussion for complete protection time,
is it, because it's a different endpoint?  It would be a different type
of power calculation.  Would it not?

		COL. GUPTA:  I can't answer that because we looked at it in a very
different perspective.  From the beginning we used the method of fixed
point sampling rather than continuous sampling.

		CHAIR FISHER:  Lois.

		DR. LEHMAN-MCKEEMAN:  Begging the chair's forgiveness.  I want to make
sure I understand something I think I heard you say that perhaps
influences additional discussion later.  That is that what I thought I
heard you say a few minutes ago relative to precision of these data is
that precision would be gained by replication of the study.  Did I hear
that correctly?

		COL. GUPTA:  That's correct.

		CHAIR FISHER:  Thank you.  That was important.  Thank you very much.

		So we are going to go on to the next point.  I just want to review for a minute, not what we've said, but the six categories, I think, that we have been talking about, so that we can continue to think about whether this is the kind of criteria we want to introduce.  Once again, not a standard or guidance, but just the type of information that we need in a protocol.  

		One would be a description of and rationale for the activity level of the subject population.  That's one of the things we've heard creates variability, so we would want a description of the activity and a rationale for activity, inactivity, whatever.  Then the type of mosquito, the density of the population and, I guess, a cross-comparison of the density of the mosquito populations in the two sites that are being studied, the rationale for choosing those two sites, and what makes those two sites different.  

		I'm not sure; I wasn't clear.  Maybe we'll get clearer later on whether there is any reason to look at human differences.  I don't think that came out clearly yet; I think they may differ in hair and skin, but I don't know.

		Then the test sites themselves: what is the weather, what is the temperature, what is the fauna/flora.  That is all information that should be in the description of the test sites that comes to us, as well as the rationale for why those test sites are being used.

		Obviously a rationale for the test methods as well as some kind of an
analysis which includes an outcome measure with respect to the sample
size.  

		It sounds like what is also very important is random assignment of experienced versus naive subjects across treatment and control, because there will be a bias toward an increased number of mosquitoes counted by a sophisticated person.  There has to be equivalence of training between treatment and control, so some kind of description of that needs to be there.

		It also sounds like, and I don't know what the calculus would be, the distance between the subjects, or the density of the subjects themselves, is some kind of an issue.  Those are just some things we might be suggesting as information we would want.

		Michael, did you want to --

		DR. LEBOWITZ:  I'm going to let Jan talk first.

		DR. CHAMBERS:  Well, their control is a different thing than what we
have been typically looking at here, though, because they are using
relative protection time or whatever you're calling it as an endpoint
and this is complete protection time using the control only to monitor
biting pressure or landing pressure to make sure that the experimental
conditions can still show efficacy.  That's different.

		CHAIR FISHER:  Thank you.

		DR. LEBOWITZ:  If we look at the charge questions, we realize that we are not trying to set policy; we are just trying to understand what a protocol should include, and how the lack or presence of certain information informs us and helps us evaluate the studies.

		The issues that come up, as you describe, are, first off, that the design has to describe how the experimenter is dealing with the different variables; how much the randomization is handling differences that have been specified as sources of variability; and how the investigator's sample size calculation is based on the number of variables that are left unexplained and the amount of variability in the measurements they are taking.  

		In other words, as well as the confounders: how many controls, how many per treatment, and how many days to cover the different kinds of variability that might exist at each site, etc.

		Those are the features that Dr. Gupta has clearly enunciated.  Whatever the endpoint, there is a way to calculate it.  I mean, randomization is a method that removes a certain amount of the variability, or makes it equal between groups.  

		These are the kinds of issues that we need to look at from a scientific standpoint, and then there are ethical aspects to each of those questions: whether the investigator is using actual bites or intent to bite, or whatever, and the variability associated with each.

		CHAIR FISHER:  And another question to follow up on Lois' question in
terms of the importance of replication.  Because of the variability,
there are just some things that can't be controlled.  What is the
definition of replication?  

		Are two sites supposed to be a replication?  Is it the same study conducted again a week later?  What is typically the criterion for determining what is a replication of data taken at one particular kind of testing period?

		DR. LEBOWITZ:  I think we have to be very careful there again.  We have encountered this problem before about defining it.  Doing the same study on another day with different subjects is not strictly a replication, because it's not the same subjects.  We are not looking at intra-individual variability.  We are still concentrating on inter-individual variability and treatment, and then the covariates or confounders.

		We do it on multiple days and look at days as a factor.  That's not a
replication.  What you strive for is independence of observations and
randomization to maximize that independence but the experiment itself
might be on multiple days.  It's not repeated measures on the same
individual. 

		The number of days, again, is determined by the design and the variables that you have to control, etc.  If you are using a block design for the analysis of variance, if you want, then you have to account for all these different factors and then figure out sample size.

		DR. FITZPATRICK:  So following up on that, the same subjects at a
different site wouldn't be a replication either.  Would it?

		DR. LEBOWITZ:  Same subject at a different site at a different time
would be -- one assumes the subject would be part of a different
experiment as it were.

		DR. CHAMBERS:  Yes, because those two sites are not replications.  Correct me if I'm wrong, but the point is to have two different ecological venues, with different species compositions of mosquitoes, to test in two different types of conditions.

		CHAIR FISHER:  It's for generalizability I assume.  We would say it
was for generalizability from two different settings.  That's why.

		DR. CHAMBERS:  Generalizability of --

		CHAIR FISHER:  The results.  If the results are similar across two
sites, then that product generalizes -- the efficacy of that compound,
or whatever it is, isn't that the purpose of the two sites?

		DR. CHAMBERS:  I guess so.  I had a question earlier for Bill, if I may just go ahead and interject it right now.  You are trying to get something to put on the label, and I have never looked at labels that closely, to tell you the truth, but what do you end up putting?  Do you kind of blend the two sites or something and come up with the lower time, or what?

		MR. JORDAN:  The agency has no written guidance on this.  As we have dug into this issue, we have discovered that there are some differences in approach among our labelers, which is a whole other question.  What we have tended to do most often is use the mean protection time of the two sites.

		DR. LEBOWITZ:  But getting back to your question, Celia, it's not
generalizability in a population sense.  It's generalizability in an
ecosphere sense.

		CHAIR FISHER:  No, I understand that.  It's generalizability across
sites.

		DR. LEBOWITZ:  Across sites but not everyone in the general population
because if you only have --

		CHAIR FISHER:  I understand.

		DR. LEBOWITZ:  -- ten subjects going to different sites, it is still
only 10 trying to represent the six billion in the world.

		CHAIR FISHER:  Right.  Okay.  So let's continue.  Science people especially, please continue to think about what kind of information we might recommend we need.  I just throw it out there; design people can say it much better and correct me, but this is where I would like us to be going toward the end of our discussions here.

		Let's move on.  Thank you very much, Dr. Gupta.  That was so
informative and we really appreciate it.  Stay and obviously we'll hear
more from you.  Thank you also to the consultants.

		Do you want to introduce the second question, Bill?

		MR. JORDAN:  Thank you, Dr. Fisher.

		CHAIR FISHER:  Well, the public comments I understand you want to go
after we are finished discussing.  Thank you.

		MR. JORDAN:  The second charge question focuses on a very specific aspect of the test methodology: the duration and frequency of exposure of the test participants, the subjects, to ambient biting pressure.  There are two different designs that have appeared in protocols before the board.  

		One exposes subjects to potential landings one minute out of every 15, and another design exposes subjects to landings four to five minutes out of every 30 minutes.

		On the questions here EPA has some comments, but not much to say.  We have not seen a rationale for the different patterns of exposure.  It appears to us that the design depends on where the sponsors choose to place their studies among different researchers.  Each researcher has his or her preference, and they tend to do the same thing when they go out into the field.  We are unclear about what impact these different designs have on potential landings.  That's what we have to say.  Thanks.

		CHAIR FISHER:  Thank you.  Which speaks to why it's good to have some consultants, because it's such a challenging issue.  

		Our lead consultant on this is Dr. Schofield.

		DR. SCHOFIELD:  I think the first point is that I concur and probably
we concur that really making sense of this is indeed very challenging
and probably fundamentally can't be done at this time.  Nevertheless,
I'll take a stab at it.

		Next slide, please.  I couldn't resist putting this up considering the
context of the actual questions and answers.  However, I am afraid I
will not be able to meet this standard.

		Next slide, please.  What is the rationale for the different designs?

		Next slide, please.  Bottom line, they are not, at least in my
opinion, and this is primarily looking at peer-reviewed and published
literature, standard designs.

		Next point.  The question then, of course, is: are there some standard designs?  I think the answer to that, next point, is, well, kind of.  Again, reviewing the published literature, usually what I see is relative protection.  We describe relative protection as the proportionate reduction in bites associated with use of a given treatment.  Typically what you see in the literature is intermittent exposure.

		Probably the other things you see, somewhat in order of my experience rather than necessarily in order of what you see in the literature, are relative protection with continuous exposure, and first confirmed bite or complete protection time with continuous exposure.  Sometimes nowadays you do see some survivorship analysis.
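Relative protection as defined here, the proportionate reduction in bites on treated versus control subjects, is a one-line calculation. A minimal sketch, applying an 80 percent benchmark of the kind WHO uses to made-up bite counts:

```python
def relative_protection(bites_treated, bites_control):
    """Proportionate reduction in bites: 1 - treated/control."""
    if bites_control == 0:
        raise ValueError("control recorded no bites: no biting pressure")
    return 1.0 - bites_treated / bites_control

# Hypothetical counts: 4 bites on treated skin vs 40 on the control.
rp = relative_protection(bites_treated=4, bites_control=40)
print(f"{rp:.0%}")    # 90%
print(rp >= 0.80)     # True: meets an 80% benchmark
```

Note how the metric depends entirely on the control arm, which is why an untreated control with adequate biting pressure is essential to this design.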

		Next slide, please.  In terms of guidelines, you all probably know better than I: the '06 guideline, I think a draft guideline from EPA, seemed to indicate that intermittent exposure for first confirmed bite is allowable, whereas the preceding guidelines, from what I can read, did not allow that.  The Pest Management Regulatory Agency of Health Canada, our regulator, has draft guidelines that require continuous exposure for first confirmed bite.

		CHAIR FISHER:  Dr. Schofield, could you move the mic a little closer
because when you look there you're not in the mic.  Thank you.

		DR. SCHOFIELD:  The ASTM guidelines, reviewed and reapproved in 2006, say first confirmed bite or complete protection time must use continuous exposure.  The WHO guidance, maybe not guidelines per se, doesn't even look at complete protection time or first confirmed bite.  It uses relative protection, with the benchmark for something acceptable being 80 percent relative protection.

		Next slide.  I can't actually tell you the specific rationale for the approach because I'm not walking in their shoes, but I can give you a few ideas for why I might do it.  There may be a logistic advantage in terms of executing the experiment.  Certainly in the laboratory you see complete protection time, first confirmed bite, with intermittent exposure, so maybe it is an extrapolation of laboratory work.  

		Clearly it may minimize exposure, or that may be the idea.  For me this is a hard sell: if your endpoint is a couple of bites, your endpoint is a couple of bites, regardless of whether the exposure is intermittent or continuous.  I do have concern that it may affect protection time estimates.  

		Next slide, please.  Again, this hasn't been well elaborated in the literature, but it goes something like this.  Decreased exposure is equivalent to reducing biting pressure in some way; we don't know in what way.  Decreasing biting pressure would seem to result in a decreased likelihood of a bite, first or confirming.

		Next slide, please.  Certainly this was recognized as early as 1940 by Granett, who compared repellent protection time in minutes, this is first bite, not first confirmed bite, against biting pressure in field situations.  We clearly see an inverse relationship: decreasing protection period with increasing biting pressure.  Indeed, he recommended that the minimum biting pressure should be 10 bites per minute.
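Granett's inverse relationship can be sketched with the simplest possible model: if bites arrive at random at a constant pressure of p bites per minute once a repellent fails, the expected wait to the first bite is 1/p, so higher pressure means shorter measured protection times. The numbers below are purely illustrative:

```python
def expected_first_bite_wait(bites_per_min):
    """Mean wait (minutes) to the first bite under constant biting
    pressure, assuming random (exponential) inter-bite times."""
    if bites_per_min <= 0:
        raise ValueError("biting pressure must be positive")
    return 1.0 / bites_per_min

# At the recommended minimum pressure of 10 bites/min, a failed repellent
# registers within ~6 seconds on average; at 0.2 bites/min the same
# failure takes ~5 minutes to register, inflating apparent protection.
print(expected_first_bite_wait(10))   # 0.1
print(expected_first_bite_wait(0.2))  # 5.0
```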

		Next slide.  This is a huge caveat, and I have included it in my response regarding this particular model.  I generated it with a three-year-old on one leg and a one-year-old on the other leg on a Sunday morning.  It's just meant to provide a little bit of context.  It is not actually, in my opinion, necessarily a valid model.  

		Certainly not from a statistical perspective, or even from a behavioral perspective.  Nevertheless, it attempts to look at the impact, if you will, of various sampling regimes under a very constrained model system.  We have continuous exposure, and we have intermittent exposure for one minute every 15 minutes or five minutes every 30 minutes.  

		In this very artifactual situation I think it's pretty clear that, indeed, you might expect to see some variability in estimates based on the intermittency of sampling.  I'll leave it at that, again with the substantial caveat that this is just a back-of-the-envelope thought process.  I'm actually now thinking of doing some simulations to refine this a little bit.
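The kind of simulation alluded to here can be sketched as a toy Monte Carlo: bites arrive as a Poisson process once the repellent fails, and the first bite an intermittent observer records can only come at or after the true first bite, inflating the protection-time estimate. All rates and windows below are arbitrary assumptions, not fitted to any field data:

```python
import random

def first_bite_times(rate=0.5, horizon=240.0, period=15.0, window=1.0,
                     trials=2000, seed=42):
    """Toy Monte Carlo: Poisson bite arrivals (per minute) after repellent
    failure; compare the mean first observed bite time under continuous
    watching vs watching only `window` min out of every `period` min."""
    rng = random.Random(seed)
    cont, inter = [], []
    for _ in range(trials):
        bites, t = [], 0.0
        while True:
            t += rng.expovariate(rate)
            if t >= horizon:
                break
            bites.append(t)
        if not bites:
            continue
        cont.append(bites[0])                        # continuous observer
        seen = [b for b in bites if b % period < window]
        if seen:
            inter.append(seen[0])                    # intermittent observer
    return sum(cont) / len(cont), sum(inter) / len(inter)

c, i = first_bite_times()
print(f"continuous: {c:.1f} min; 1-in-15 windows: {i:.1f} min")
```

Under these assumptions the intermittent observer's mean "first bite" is substantially later than the continuous observer's, which is the direction of bias the back-of-the-envelope model suggests.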

		Next slide, please.  Which design is more widely used in the field and
why?  Next slide.  Neither.  Pretty simple answer, I think, at least
from my experience.  I do not know what the EPA sees.  I know what the
PMRA in Canada saw for the few years I spent there.  Typically it was
continuous exposure.

		Last question.  Can potential effects of variation be isolated, be
predicted, or accounted for?  Next slide, please.  Not really.  Advance
it, please.

		This question is premised on the idea that we actually have a good
handle on what is going on in terms of the relationship between
intermittent exposure and estimates of complete protection time.  We
don't and I'll leave it at that.

		CHAIR FISHER:  Thank you very much.

		Dr. Lebowitz.

		DR. LEBOWITZ:  Thanks.  I'm going to proceed with some of my comments.  I've had the advantage of looking at the opinions, very, very good opinions, of our consultants.  I am very impressed by Dr. Schofield's response to the issues.  

		It seems to me, based on some of the different sets of recommendations, that there have been more standard approaches, namely first confirmed bite with continuous exposure.  The exposure time periods used seem to vary a little bit, and there are slightly different opinions as to how long it takes before you have enough bites, or enough of whatever measurement you're using.

		On the issue of how many bites or landings, I noticed that Dr. Strickman, in comments provided ahead of his presentation, offers his expertise.  He says that the timing between the first and second bite is likely to be variable.  This is an issue that we have asked about in this part of the charge.

		What strikes me the most is that there is nothing very standard about the timing regimes, but I think if Dr. Schofield and Dr. Strickman continued with their modeling, and used different ages and sizes of children on their legs as they did it, they would come up with a whole series of models.

		Each one would basically show, in a general sense, what Dr. Schofield has already shown: that the less continuous the exposure, and the longer the intervals between exposure times, the more you might overestimate at least complete protection time, which is what we've been looking at.  I don't want to go into all the issues, because I think our consultants have done, and will do, that better than I could.

		What also strikes me, in terms of the literature we have read or been provided, is, for instance, a presentation by Lawrence of WRAIR in June of '07, which talked about the number of landings in 20 minutes of exposure or challenge, with a minimum criterion of one mosquito per minute.  Then we looked at some of the USDA material, which I won't go into, especially the duration of testing.  I think Dr. Strickman will probably speak to that.  

		In terms of the responses, what I saw as quite critical, and what we actually discussed at great length after Dr. Gupta's presentation, is the number of variables that may affect the observations, the kinds of designs needed to do the studies accurately and account for some of this variability, and the way we then analyze the results.  

		Then, on my own question to Dr. Gupta about how much of this has been determined in the lab, he indicates that very little has.  My conclusion in response to the charge was that a great deal more research is needed to determine biases, adjustments, design, etc., in order to answer it.  

		In fact, there are designs which would include randomization and
appropriate power calculations, etc., that would allow for more
appropriate conclusions to be drawn whatever the endpoint.  The issue of
how to adjust for differential intermittent or continuous exposure times
and length of exposure still needs to be determined.

		CHAIR FISHER:  Thank you, Michael.  Can I ask a question of EPA? 
Given that continuous exposure is the standard in the academic science
field, what was the basis of selecting intermittent exposure as I guess
the standard or whatever for EPA, either EPA research or EPA evaluated
research?

		MR. CARLEY:  After a very quick and very informal check with my
colleagues here, I don't think we selected it.  The protocols calling
for different patterns of intermittent exposure were proposed to us when
we started this new regime of intensive prior review of intentional
exposure studies.  

		I wouldn't speculate about what the distribution of different patterns
was in earlier studies that didn't get the same kind of intensive
review.  

		But we don't require intermittent exposure.  We haven't specified that
it should be one of 15 or five of 30.  I don't recall it being mentioned
except in passing as a possibility that you might want to think about in
the guidelines.

		CHAIR FISHER:  Thank you.  So we don't have any -- I know that one of
the important elements of review for EPA is consistency in terms of what
ends up on the label.  I certainly appreciate and thank you for letting
us know that in some sense this was suggested and is somewhat new in
terms of the new regulation.  

		In terms of this kind of pre-regulation/post-regulation comparison, has there been any analysis of whether the kinds of studies that were submitted to you in the past were intermittent versus continuous?

		MR. CARLEY:  Yes, there have been both.  We haven't done an analysis. 
As I said a minute ago, we don't know what the distribution is.  As you
are aware, but I want to remind you because I think it's important in
this context, we have only brought to the board here protocols from two
different investigators and all but one of the protocols that we brought
here came from the same investigator.  

		As Bill pointed out, what protocol is used tends to vary with what
investigator is involved.  They have their habitual preferred protocols
-- think of it as a word processing exercise rather than an elaborate
analytical exercise.  They bring up the last protocol and change some
values that they used to fill in the blanks in the template.  

		There is not a clear rationale for these differences.  If we did look at the distribution before and after the rule, the results would show us that Scott Carroll's preferred design of one minute in 15 has been far more common before the board since the rule than any other, and that all the protocols call for intermittent exposure, and we wouldn't learn anything that we don't already know from that analysis.

		CHAIR FISHER:  Dr. Kim.

		DR. KIM:  I think Dr. Schofield's illustration with the sort of
calculation clearly indicates that there -- if I were a sponsor I would
choose certain sampling methods to inflate my protection time.  That is
what is indicated by Dr. Schofield's --

		If you make the landing time -- the possible time of exposure -- smaller and smaller, you will get fewer bites and, thus, a longer protection time, as Dr. Schofield indicated.
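		(The effect described here can be sketched with a small simulation -- the rate and schedule below are illustrative assumptions, not from the record: if landings arrive at random and the arm is exposed only one minute in every 15, the first observed landing comes much later, on average, than under continuous exposure, inflating the apparent protection time.)

```python
import random

def first_observed_landing(rate=1.0, exposed=1, cycle=15, horizon=600):
    """Time of the first landing that falls inside an exposed window.

    Landings arrive as a Poisson process with `rate` per minute; the arm
    is exposed only during the first `exposed` minutes of every `cycle`
    minutes.  Returns `horizon` if nothing is observed (censored).
    """
    t = 0.0
    while t < horizon:
        t += random.expovariate(rate)   # time of the next landing
        if t % cycle < exposed:         # landing falls in an exposed minute
            return t
    return horizon

random.seed(0)
n = 20000
continuous = sum(random.expovariate(1.0) for _ in range(n)) / n
intermittent = sum(first_observed_landing() for _ in range(n)) / n
print(continuous, intermittent)  # intermittent mean is several times larger
```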

		MR. CARLEY:  I think another way of interpreting that that doesn't put
more weight on Dr. Schofield's little back-of-the-envelope model than
perhaps it can bear is simply to say in a test system where there are so
many variables, perhaps we shouldn't permit variation in elements that
can be standardized and controlled.  There is enough problem in making
sense out of this stuff anyway.  Let's tighten it up.

		CHAIR FISHER:  Mike.

		DR. LEBOWITZ:  And I would add to that designs which would help
minimize variations.

		CHAIR FISHER:  And in the absence, yet, of these kinds of standard recommendations -- getting back to what I hope is a goal toward the end of this discussion -- we need a rationale for intermittent versus continuous exposure and for what type of intermittent schedule is chosen, and we need to see in the protocols that rationale and the pros and cons of those different approaches.

		DR. LEBOWITZ:  I think one of my points in the conclusions also spoke
to the lack of knowledge we have as illustrated by the responses of Dr.
Schofield.  I mean, until we know some more about all the things that we
don't know, it's going to be very hard to understand and for the
investigators to determine how they will approach their experimental
designs.  

		I think some of these things -- I don't know who is going to do it.  I
doubt if anyone is going to if they don't have to but I think some of
these issues need to be looked at more systematically before we know the
rationale.

		CHAIR FISHER:  I have a question.  When we are seeing protocols, are they referencing the academic literature, or is that absent?  I don't know if others would like to see this, but I think there needs to be a connection between the kind of sponsor research that comes before us and at least some evidence of familiarity with any academic literature that is of relevance.

		In this regard it seems -- once again, I'm not making a definitive
statement.  I'm just giving an impression but it seems as if the
academic literature utilizes continuous exposure.  

		I think we need to see more of that connection in terms of design
rationale.  I think it will improve design.  It doesn't mean they have
to do continuous but we need to see more of that because that is what we
are looking for.  

		I think Mike and Steve and -- I'm sorry, John, Steve, and then Mike.

		MR. CARLEY:  Something to think about.  Perhaps the connection is
better made by the agency through guidelines or the like rather than at
the level of individual studies.  If we, if the world, changed in such a
way that each investigator who brought us a protocol did his own review
of the literature, drew his own inferences from it, and used that to
design his own approach to testing repellency, our regulatory chore of
achieving some level of consistency across products and testing would be
made immensely more difficult.  

		The way we see the link is that is fundamentally our responsibility. 
I assure you that we are giving very serious thought to significant
changes in the approach.  As my old boss used to say, these things take
time.

		CHAIR FISHER:  I appreciate it.  Thanks very much for clarifying that.

		Steve.

		DR. SCHOFIELD:  I would actually concur, from the regulatory perspective, that you need appropriate guidelines.  A couple of things I wanted to point out.  One, continuous exposure typically is not used in the peer-reviewed literature because relative protection is the endpoint that normally is evaluated, and for that it's intermittent exposure, and that is probably okay.  It's a fine detail but it is an important fine detail.

		I think the other thing to bear in mind with the literature -- and, indeed, it's consistent with my own reasons for doing this kind of work -- is that my endpoint is different.  I protect soldiers from getting bitten and getting disease.  The reasons I carry out a study, and how I carry it out, may be fundamentally different from how you want to approach a study from a regulatory perspective.

		CHAIR FISHER:  Thank you.  Very helpful. 

		Mike.

		DR. LEBOWITZ:  I think that the agency would benefit from again reviewing the different guidelines, including their own, but also from reviewing the literature provided by the Armed Forces, at least of these two countries, and some of the gray literature -- the reports that aren't published in the peer-reviewed literature but that, in fact, go into great detail as to the findings, some of which Dr. Gupta referred to.

		I know from my experience both with the Senate VA committee, looking at this kind of issue in the Gulf War and the syndrome, and with the Academy of Sciences for the Joint Chiefs, looking at how to protect deployed troops.  The stack of literature I had to read was that thick.

		In fact, we benefitted greatly from learning things that we didn't
know before because they hadn't been published in the "academic"
literature but were extremely relevant in determining what was important
and what wasn't.  That's why I would be so rude as to extend what you
said, Madam Chair. 

		CHAIR FISHER:  Jan and then Steve.

		DR. CHAMBERS:  With respect to the academic literature, though, I
don't really think that is the sort of thing that you expect in a
protocol.  It's not an NIH grant application and I wouldn't think that
you would want an extensive literature review to bog that down.  

		Again, I think it's EPA's responsibility to get the best sense of what
the current best practices are and what will allow the most consistency
amongst sponsors and so forth and revise the guidelines.  In the
meantime, I don't know what to do.

		Hopeless comes to mind as a key word.  It just seems like -- I didn't
really comment on the first question because I'm not an expert but when
you hear what our experts have said so far, there is no consistency with
times, space, day, phase of the moon not even as a phrase but really as
phase of the moon and everything.  Field studies are going to be
different one to the next so I don't envy you guys, I really don't. 
You've got a tough job.

		DR. BRIMIJOIN:  I think we are probably belaboring the issue unnecessarily, but just one step further.  This situation reminds me strikingly of a much simpler problem, which is actually how Harold brought me into contact with the EPA.

		It was about 15 years ago when it became apparent that in attempts to
regulate anticholinesterase pesticides a variety of industrial labs and
some academic groups were taking -- were using slight variations of a
very stable and relatively simple biochemical method for determining
levels of cholinesterase inhibition called the Ellman method.  

		It was applied in multiple variations and with automated analyzers or spectrometers or various things.  Yet even with that very stable and simple method -- simple in comparison to the business of determining protection against insect bites -- there was so much variation in certain aspects, certain applications, especially when it came to determining how much inhibition there was in the red blood cell enzymes as opposed to plasma, that there were wild inconsistencies in measurements.

		EPA wound up convening a series of scientific advisory panels that were charged with developing a recommended best practice.  I was part of a group of people who took the lead in developing a very simple consensus on how this assay should be done and how it should not be done, and then participated in round-robin tests to validate a recommended method, which proved amazingly challenging.

		That was an easy problem and this is much harder and I think we can't
expect as much consistency and consensus as was obtained then.  I think
it was, nonetheless, very, very helpful to address it in that way not by
charging hapless review groups like HSRB to come to some conclusion
about what they should be expecting but to convene a scientific panel
with the express charge of reaching consensus.

		CHAIR FISHER:  Thanks very much, Steve.

		Any other comments on this?  Yes.

		DR. SCHOFIELD:  I probably should have flashed this up.  The PMRA,
which is our regulator, in 2002 also reevaluated DEET.  As part of that
evaluation they sketched out the relationship between protection time
and concentration of DEET.  

		Of course, right now we have this feeling of hopelessness in terms of
understanding these things.  What I could say and, indeed, I have the
document with me, I think, electronically, is that two things are
striking from that -- well, actually three things.  The analysis
probably isn't as robust as it should have been, partly because I did
it.

		Two, when you look at a given concentration, for example, it is a little bit hopeless; you say, "Wow, there's a lot of scatter around that given concentration and how long it lasts."  When you look at the macro picture, however -- a meta sort of picture, if you will -- it is also very striking how powerful the relationship is over all of the data points.

		Individual trials themselves are a little bit noisy but despite all
the things we're talking about today, the general pattern of how long
things last, for example, is very powerful.

		CHAIR FISHER:  Thank you.  As consumers I think we feel better.  

		Okay.  We're going to take a break.  We'll come back at 10:30.  Thank
you  so much.  Thank you, Dr. Schofield.  This has been very educative.

		(Whereupon, at 10:15 a.m. the above-entitled matter went off the
record until 10:32 a.m.)

		CHAIR FISHER:  Okay.  We are going to continue.  Bill, would you
introduce No. 3 for us?  Thank you.

		MR. JORDAN:  Thank you, Dr. Fisher.  I found the previous discussion
very helpful, and would just like to, if I may, expand a little bit in
introducing charge question No. 3.

		We at EPA have been rethinking our whole regulatory approach to
evaluating the efficacy of mosquito repellents and other insect
repellents looking at the question of what to test for, whether we
should be using relative repellency or protection time, how to do these
studies, which is the primary focus this morning, how to analyze the
data, how to communicate about the results of the scientific research to
consumers.  

		There are a lot of factors that are going into this.  As Dr. Schofield
pointed out, the regulatory endpoint for the EPA may not be the same as
for the Walter Reed research and so on.  We also have questions about how much certainty we need from a regulatory point of view.  That also relates, of course, to what the consumers expect to understand.
There are also practical considerations about how much all of this
costs, and what is a reasonable burden to impose on people to do the
research.  So we have been working on this complicated issue, as Dr. 
Brimijoin pointed out, for a while, and we have, earlier this year, held
a conference in which we invited a lot of the people who are
practitioners in the field of insect repellent efficacy evaluations to
get together to talk about methodological issues.  And we provided the
board and the consultants with presentations and notes that we got out
of that event.

		One of the participants, Dr. Matt Kramer, a statistician at USDA, had
an idea that struck us as an interesting one, and it seemed closely
enough related to the issues that we are talking about that it would be
helpful to hear the consultants' views on it.

		His proposal was something not quite like what has been done before, and not a percent repellency.  It defines efficacy failure via the mean time for a series of landings or a series of bites:  what is the mean time across a subject population for each subject to get three bites, or five bites, or three landings, or five landings, or whatever.

		This approach, at least Dr. Kramer argued, could provide greater precision in the estimates of protection time without increasing the number of subjects.  It obviously would expose subjects to more landings, more bites, than the current guideline approach and the current protocols, but from an ethical point of view that may be acceptable if it is conducted with landings.

		And if we do this, there are pragmatic considerations relating to how it would work for long-lasting repellents -- repellents that are designed to be effective for 10 or 12 hours -- which may well test or stretch the limits of what can be done practically in the field.

		We think it raises some interesting scientific questions, some
interesting ethical questions, and the consultants' views on this would
be very helpful for us as we go forward with this effort to come up with
a better system of evaluating and communicating about repellent
efficacy.

		CHAIR FISHER:  Thank you.  

		Dan, would you like to begin the presentation?

		DR. STRICKMAN:  Sure.  Thanks.  First of all, in my official capacity at ARS, I wanted to invite the board, if they would like, to submit comments to us on research that is required for this purpose; that is something we can consider integrating into our program.  And I've made the same invitation to EPA.  And of course, we have the facilities to do that, and some of the resources.

		From the previous discussions, it seems to me there is a little bit of
lack of clarity on the purpose of the test.  And we have seen some of
that with Dr. Gupta's comments on this long effort in the military to
compare protection times.  

		And to extend a little bit what he said, that program has been
successful because the military, through experience, has decided it
needs a repellent that lasts for eight hours.  And part of that decision
is based on how long a person sleeps, but most of it is based on what is
available.  

		So eight hours is kind of a good minimum/maximum, and 12 hours is a
better one, and that is what the military is working with.  Therefore,
they have designed a series of studies that have cost millions and
millions of dollars, and involve the work of dozens of scientists to
evaluate that, and they have done that very, very successfully.  

		But each of those studies is a comparison within itself, not a
comparison between the studies.  However, in the course of that, they
have learned a lot about how different vector species respond to the
suite of repellents of the day.  

		So it's been a very successful program, though very different from the purposes we are talking about here.  USDA for 50 years has screened repellent active ingredients using the complete protection time model, and that program has also been very successful.  It developed DEET and other repellents.

		But again, a very different purpose from what we are talking about here.  So I think until the EPA or the committee decides what the purpose of these tests is, it is very difficult to talk about what methods are required, and that is an area for debate, which I'll talk about a little bit more in this presentation.

		And then, finally, I haven't heard any discussion of species other than mosquitoes.  And of course, repellent labels currently include mosquitoes, biting midges, stable flies, ticks, chiggers, and horseflies.  And believe me, you are not going to do a chigger study the same way you do a mosquito study.

		Could I have the first slide?  Not that far.  There you go.  In
talking to Dr. Kim before the meeting, I had a little epiphany in that I
realized for the first time, and probably this applies to most of you
folks, you are looking at this from the human standpoint, and I am
looking at it from the mosquito standpoint. And that's just second
nature to me.  And I would argue that there are reasons for both, but in
preparing this, I was thinking about it from the mosquito standpoint,
and mosquitoes vary.  We can't assume that each one has an equal chance
of biting.  

		And it probably is a normal distribution of their avidity, their
tendency to bite that particular person at that particular time.  And
like any normal curve, those that are less susceptible are going to be
rarer in the population than those that are more susceptible.

		Could I have the next slide?  And even if you assume a different kind
of distribution, a Poisson distribution, which some people advocate, or
a binomial distribution, it's still basically the same shape.  That the
mosquitoes at the far ends of the curve are rarer than those in the
middle.

		Next slide, please.  Now this is something that is based on data with
repellents.  The normal log dose Probit curve that Dr. Gupta referred to
works very well for repellents.  There is plenty of data on immediate
repellency versus concentration done in the laboratory under highly
controlled conditions, and you get nice repeatable results, which make a
lot of sense, and the literature is full of this information.  

		So given that as an effective and very familiar model from toxicology -- as I understand it, anyway -- the most precise, most accurate portion of that curve is necessarily going to be in the middle, not at the extremes.

		And the error gets wildly big as you go to the extremes.  Therefore,
it's very difficult to compare the LD-98 of two toxicants, whereas the
LD-50 you can compare fairly precisely.  And the same thing is true for
the immediate repellency of repellents.
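		(The point about precision in the middle of the curve versus at the extremes can be made concrete with a delta-method sketch.  The logistic curve below stands in for the probit model for simplicity, and the sample size and scale parameter are hypothetical: the standard error of an estimated effective dose at response quantile p scales like 1/sqrt(p(1-p)), so it grows sharply toward the tails.)

```python
import math

def ed_standard_error(p, n, scale=1.0):
    """Delta-method SE of the estimated dose at response quantile p,
    for n subjects tested near that dose, under a logistic dose-response
    curve with the given scale (inverse-slope) parameter:

        SE(x_p) ~= SE(p_hat) / slope at x_p
                 = sqrt(p(1-p)/n) / (p(1-p)/scale)
                 = scale / sqrt(n * p * (1-p))
    """
    return scale / math.sqrt(n * p * (1.0 - p))

n = 100
se50 = ed_standard_error(0.50, n)   # middle of the curve (ED50)
se98 = ed_standard_error(0.98, n)   # extreme of the curve (ED98)
print(se50, se98, se98 / se50)      # the ED98 estimate is several times noisier
```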

		Next slide, please.  So the CPT is based on a single mosquito, and this is where it's the human standpoint versus the mosquito standpoint.  That's a very rare mosquito.  It's an aberrant mosquito.  In a population producing one bite per minute, there are studies -- mark-release-recapture studies -- where the absolute number of mosquitoes in the area has been measured.

		And you're talking about probably 1,000 to 10,000 mosquitoes in that
area.  So it's a large population, and the extremes are going to be
those rare individuals, or at least, there is a fear of that.  And I
think that's what Matt Kramer was getting at in the inaccuracy of using
that one aberrant mosquito as your single measurement.  

		But the 100 percent protection time, for the reasons I was talking about, is inherently imprecise.  And although it is what the consumer, from the human standpoint, wants -- or what we, sitting in this air-conditioned screened room, think the consumer wants -- even that is a question worth examining because, if you do quantitative studies of when people put on repellents, it's not at the first bite.  It's at the third or fifth or tenth bite, depending on what they are used to.  So this is from a nuisance standpoint, I realize, but nonetheless, there are quantitative studies, and that is what they have shown.

		We've heard from Dr. Gupta that the military looks at 95 plus percent
protection.  And if you can lower that percentage of protection even to
95, and especially to 90, you get down into that portion of the curve
where precision is much, much improved. And I am sure that our
statisticians could say exactly how much.  I can't, but that is one
reason why I personally have advocated a 90 percent effectiveness rather
than 100 percent in terms of comparing products. Not necessarily
developing products, but comparing them.

		Another point I wanted to make is that not all repellents have the same mode of action.  There is a lot of research on new kinds of repellents that do entirely different things but would be regulated as repellents -- compounds that eliminate the ability of the mosquitoes to orient, for instance, but are used as a topical repellent.

		These are things which are under development.  And even among our older repellents we notice that indole, an old repellent, and permethrin -- well, permethrin isn't applied to the skin in this country, though it is in Australia -- are not very volatile at all and, therefore, their mode of action is quite different.  It's gustatory instead of olfactory.

		And I would advocate that we don't want the regulatory process, or the
HSRB process, to limit the development of new products because, after
all, that is kind of what this is about.  So that is just another
complicating wrench to throw in the works.

		You all asked specifically if there would be additional risk from
using five landings versus one.  And, of course, there wouldn't be if
they are only landing.  If that is the criterion, then there is no
increased exposure.  You could use 100 landings, it wouldn't matter
because, in theory, there are no bites and no risk.  And I think that,
you know, if you are going to say it's landings, then it's landings. 
Not a bite sometimes, but landings and, therefore, no increased risk.  

		Now, Dr. Kim kindly wrote down his thoughts for me about why the five landings didn't really mean anything, and I agree with him.  He's got a point.  What does that mean -- the average of the times of the five landings?  Is it just a confirmatory thing, like the way we are using the first confirmed landing?

		I think that is a very good point.  On the other hand, I also agree
with Matt Kramer that the precision will be much increased by using five
instead of that extreme mosquito.

		And then finally, I just wanted to put on the record what are mostly
personal opinions that the only way to compare products in different
studies, and this is just my opinion, is in laboratory studies.  There
is no way you are going to do it in field studies, and it is simply
inappropriate to try.  It is just inappropriate.  

		We can standardize laboratory studies completely, right down to the strain, as you do in bacteriology.  You can require that a certain strain of mosquito be used in order to prioritize active ingredients and formulations, with the right procedure that takes account of the various factors we talked about this morning.

		On the other hand, the purpose of that is labeling -- relative labeling, like an SPF value.  How does this product stack up to that product when the consumer picks it up and looks at the label?  That is very different from the claims of the manufacturer as to what performance they will get from the product under certain conditions.

		And it is perfectly legitimate for S. C. Johnson or any other manufacturer to say, "you will get six hours of protection against mosquitoes under average conditions in the Midwest," knowing that that is based on the kind of precise field studies you are attempting to define here.  But that's a really different thing from the labeling requirement on relative effectiveness given standardized criteria.

		And then finally, there are those who would be very interested in real
duration of repellency, mostly what we have been talking about here, and
100 percent repellency based on public health, where disease really is
the endpoint.

		Fortunately, in the United States, that is a rare occurrence, but when
we have travelers going overseas, it is not.  So there is a legitimate
reason to look at that but, again, I would advocate that that is the
realm of academic studies, or of studies that are done on a claim basis
rather than a labeling basis.

		So that's all I have to say.  Thanks.

		CHAIR FISHER:  Thank you very much.

		Lois.

		DR. LEHMAN-MCKEEMAN:  When I looked at this notion of five bites, I
had to step back for a moment and think about the precision of this
estimate.  What would actually be ideal to increase precision?  And
we've talked about the hopeless character of these field studies
because, from my vantage point, to increase precision we would want to
carry out studies at the same site, on the same people, under the same
protocol, carried out precisely in the same way and under the same
environmental conditions.

		If I stipulate that that makes a good study, then I ask, well, if I
increased bites from one to five, have I, in fact, increased the
precision?  And my answer is, probably not, because the inherent flaw in
the field study fundamentally remains.  So that is really where I came
out.

		I actually thought that some of Dr. Schofield's comments spoke to this in a way that connected or resonated with me, in that it's possible that going to multiple bites increases the precision around getting some kind of arithmetic mean time, or something like that.  And I appreciate the comment about the 90 percent protection as well.  My perspective is that perhaps that does increase precision a little bit, but I come back, again, to whether or not this is useful statistically.  And I think it speaks to your comments, Dr. Strickman, about potentially going to other kinds of tests, particularly in the laboratory.

		So that's my answer, as it were, directly.  I was trying to conceptualize what a five-bite study might look like.  In my own mind, if I can create a graphical picture, we could take it down to a pictorial analysis of a graph where the X axis is time and the Y axis is cumulative bites.  I think the curve presented by doing that analysis is fundamentally a thresholded curve.  That is to say, there is a period of time during which no bites occur, and sometime thereafter, bites will begin to occur.  And if we are looking at cumulative bites, then the slope of the line might actually give you a little bit of perspective on overall efficacy.  That is kind of how I saw it.  So it's possible, if you looked at it that way, that that could be a useful piece of comparative information.

		But then it comes back to, what is the question we are really trying
to answer, because fundamentally, it could be the time that we sit in
the threshold zone, in the flat portion of that profile, may actually be
the most important point.
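		(That graphical picture can be sketched numerically.  The bite timestamps below are hypothetical: the time of the first bite gives the flat, bite-free threshold portion of the cumulative curve, and a least-squares slope over the observed bites gives the post-failure bite rate.)

```python
# Hypothetical timestamps (minutes after application) of the observed bites
# on one subject: the repellent holds until the first bite, then fails.
bites = [242, 251, 257, 260, 266]

threshold = bites[0]                  # end of the flat portion of the curve

# Slope of the cumulative-bites line after the threshold, in bites/minute,
# from a least-squares fit of cumulative count vs. time.
times = bites
counts = list(range(1, len(bites) + 1))
tbar = sum(times) / len(times)
cbar = sum(counts) / len(counts)
slope = (sum((t - tbar) * (c - cbar) for t, c in zip(times, counts))
         / sum((t - tbar) ** 2 for t in times))
print(threshold, slope)   # with these numbers: 242 min, then ~0.17 bites/min
```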

		And then finally, from my vantage point, I have to say that, when we
started looking at these kinds of studies, the justification for doing
the work in humans was that the field differs from the lab, people
differ from animals, and people differ from people.  So that became the
basis, I think, or the overarching justification, for moving into field
studies.  

		But I have to say that I personally find the argument for doing comparisons of products and formulations in a far more controlled environment in the laboratory to be much more compelling at this point in time, based on the experience we now have with actually carrying out field studies.

		So I was particularly persuaded by Dr. Strickman's comments about
actually moving, perhaps away from the field, if I can put words in your
mouth, and back into the laboratory in order to really get at what is
the efficacy of these products.

		CHAIR FISHER:  Thank you, Lois.

		Dr. Kim.

		DR. KIM:  I just want to start by giving some introductory remarks about the typical statistical inference that occurs in an experiment.  Whenever you carry out an experiment, you have some sort of a reference point.  So, when we are measuring, say, the time from application to the end of the efficacy period, however it is defined, we can think of a sort of statistical world in which there are certain parameters associated with the distribution.  And typically what we are doing is estimating that parameter, the time to efficacy failure.

		And so, with that, we introduce some statistical models to get our hands around it, and that is how we came up with either means and 95 percent confidence intervals, or medians and associated confidence intervals, depending on the nature of the distribution and whether there is censoring or not.
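		(The distinction between means and medians under censoring can be illustrated with a toy dataset -- all times below are hypothetical: when some subjects are still protected at the end of the test, averaging the recorded times understates protection, while a simple Kaplan-Meier median handles the censored observations correctly.)

```python
# Hypothetical protection times in minutes; entries flagged False were still
# protected when observation stopped at 480 min (right-censored).
data = [(212, True), (305, True), (330, True), (415, True),
        (480, False), (480, False)]          # (time, failure observed?)

# The naive mean treats the censored times as if failure occurred then,
# biasing the estimate low.
naive_mean = sum(t for t, _ in data) / len(data)

def km_median(data):
    """Kaplan-Meier median: first time the survival estimate drops to 0.5."""
    s = 1.0
    at_risk = len(data)
    for t, failed in sorted(data):
        if failed:
            s *= (at_risk - 1) / at_risk     # survival drops at each failure
            if s <= 0.5:
                return t
        at_risk -= 1
    return None                              # median not reached in the study

print(naive_mean, km_median(data))
```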

		When I first saw Dr. Kramer's comments, what came to my mind was that
very question, what it is that we are trying to infer.  And that is how
I got into the discussion with Dr. Strickman earlier, before the morning
session began.  And I never thought of the, sort of a perspective from
the mosquito as the experimental unit.  

		I always thought that we are doing an experiment with the human as an
experimental unit.  And so, to me, looking at time from application to
the failure of the efficacy of repellency is a very specific quantity,
for which I can think of doing statistical inference.  

		But then, when you get into multiple landings, or however you define it, what is the net effect?  Certainly you take more measurements, and you generally increase the precision.  But then I start to question, what is the precision that we are concerned about?

		Is it the precision of the protection time for each individual participant, or the precision of the protection time for the product?  This replication doesn't address the precision of the protection time for the product.

		And the other aspect is that precision is certainly important, but I think bias is also a very important concept, and we had discussion about these issues going back to last June.  If, instead of looking at time to the first event, you start adding the times to the first, second, third, fourth, and fifth events, and you take the average of those times to these multiple events, the net effect is just to push the mean out to the far end.  And I'm not so sure whether that's what we want to do.
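		(This bias has a clean closed form if landings are modeled as a Poisson process, which is an illustrative assumption: the expected time to the k-th landing is k times the expected time to the first, so the average of the first five landing times has expectation (1+2+3+4+5)/5 = 3 times the mean time to the first landing.  A quick simulation bears this out.)

```python
import random

random.seed(1)
rate = 0.2     # landings per minute; purely illustrative
n = 20000

def landing_times(k):
    """Times of the first k landings of a Poisson process with the given rate."""
    t, out = 0.0, []
    for _ in range(k):
        t += random.expovariate(rate)
        out.append(t)
    return out

mean_first = sum(landing_times(1)[0] for _ in range(n)) / n
mean_avg_of_5 = sum(sum(landing_times(5)) / 5 for _ in range(n)) / n
print(mean_first, mean_avg_of_5)  # the averaged endpoint sits about 3x later
```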

		CHAIR FISHER:  Dr. Strickman.

		DR. STRICKMAN:  Yes, I know it's not all that satisfying to have anecdotal information, but there is an appeal to me in the five mosquitoes -- and I'm not saying it would work, either.  I mean, Matt Kramer, who also has a lot of experience with this kind of work, thinks it's a good idea.

		But I am haunted by an experience I had in Paraguay where there were
very, very high populations of mosquitoes.  Huge.  I mean, literally
killing animals, they were so high.  And I remember putting 75 percent
DEET on my arm, and watching one mosquito bite right through the stuff. 
You know, that was the far end of the curve.  

		I mean, that is the way I interpreted it.  It's anecdotal, I know, but
I know what I saw, and I have experienced this since, also, in very high
populations, where repellents fail.  And you'll read about this in the
literature.  So there is an extreme end of the population that didn't
get the memo.  

		And I just hate to think that is what you are measuring.  It is not so much about getting the right measurement as about preventing the wrong measurement; that is kind of where my nonquantitative entomologist's feel has some fear.  That's all.

		CHAIR FISHER:  Jan.

		DR. CHAMBERS:  Let me ask a question.  There is nothing in the
regulations that says that field tests have to be done.  That's a
guideline thing right now, is that correct, which is not legal, right? 
That's not a legal requirement?

		MR. JORDAN:  It's not binding.  The guidelines -- let me see if I can
be precise about it.  The regulations say that we have to have data, and
the guidelines describe how to generate the data to satisfy that
requirement.  As a practical matter, we have operated at EPA with the
expectation that the data will come from field tests, and all the
companies have understood that expectation, and have acted as though it
were a requirement.

		DR. CHAMBERS:  Sure.  That's understandable.  I'm coming to the same conclusion that Lois expressed just a minute ago.  And I want to thank all three of our consultants; they have been extremely helpful to a group of naive people.  And because of that, I'm not really sure -- I don't think we are the body that needs to really be making any decisions for you.

		But Dr. Gupta tells us that you can't predict anything, and then Dr. Strickman presents some very compelling evidence for using laboratory-based studies.  And, you know, as a scientist, the reason I'm not a field biologist is that you can get the precision in the laboratory.  You can control the species.  The species obviously can be disease free.  You can control the humidity, the temperature, all of these factors that Dr. Gupta says make a difference in the overall thing.

		And so the way to get data that you can compare amongst these various
products to put labels that the consumer can have some confidence in I
would think would be through the laboratory studies.  And that would be
my conclusion at this point.

		CHAIR FISHER:  Sue Fish.

		DR. FISH:  Picking up on what Jan said, I would like to ask Bill
Jordan or John Carley or someone about this.  What I think I understood
about how labels are generated, how the labeling is generated, and what
it means.  And I don't remember which of the previous meetings, but I
remember hearing that what the label says, this product is good for four
hours, came from the average complete protection time in the field
studies, and that that is not intended to inform the public that the
protection time is four hours, but is only helpful when used in a
relative basis to compare which product to purchase.  

		And if that, in fact, is the case, and what I have heard today, is
making me less and less and less comfortable with any of the data that
we have seen in the experiments because of the tremendous variability
among the mosquitoes, among the people, among the conditions, I'm
wondering why -- if what I understand the labeling to mean, why the
labeling isn't -- this has a short, medium, and long protection time, or
one star, two star, three star, because to say four hours, to me, as a
consumer, suggests four hours, kind of like on a clock.  

		And I would be concerned about reapplying, if I'm getting bitten, I
would be concerned about reapplying it in less than four hours, so I
need some help here to clearly understand where the data from the
experiments goes to inform the label writing.

		MR. CARLEY:  We are going to respond with a kind of a tag team effort
here, but I get to go first.  First, your memory is pretty good.  

		We take these studies and, as Bill mentioned in an earlier response to
the question, we don't always do it precisely the same way, or in ways
that are equally defensible, but generally speaking, yes, we take the
mean complete protection time from the various tests that have been
done, and we permit that to be incorporated into claims on the label.  

		But I refer you to the slide that is still up there from Dan
Strickman's presentation.  He talks about two different purposes, one of
which is this index permitting product to product comparisons, and the
other of which is, think of them as advertising claims, marketing
claims.  

		The label serves both purposes, and this is one reason why this whole area is kind of muddy.  We have been permitting claims for the duration of protection that serve both the purpose of expressing the results of testing, and the purpose of marketing claims.

		So the question of whether you should think of four hours merely as twice two, or half eight, or as clock time, is what emerges from this conflation of the two different purposes.  So I think Dr. Strickman has suggested a way of clarifying or thinking about that, that there are two different purposes.

		The other point I want to make is that there is more than just the
number of hours on the label.  There are also, on typical repellent
labels, specific directions about reapplication.  Sometimes there is a
safety concern that says, "Reapply as needed, but not more than X times
per interval."  Or, "Keep it away from your eyes, or keep it away from
children under the age of X."  

		There are all sorts of things like that that also come out of the
whole array of testing that we require on repellent products before we
approve them.  So there is more than just the one number, but we are certainly thinking in terms of separating the index, your small, medium, large, or your one, two, three stars, from any number of hours.

		MR. JORDAN:  I'll add a little bit to what John said.  The agency has
been in the business of evaluating repellents for a lot of years, since
we were created, in fact.  And the understandings have evolved over that
time, but I think we have not revisited or updated our approach taking
into account the latest information.

		As we have said in past meetings, and again this time, we are in the
process of doing that, and some of the ideas that you all have suggested
have also occurred to us, and we are pursuing them.  I think we are not
quite ready yet at EPA to say we are going to switch from field tests to
laboratory tests, although that is certainly under consideration.

		We have not reached a firm position on whether to shift from labeling products in terms of hours of protection to relative merit, but that is also under consideration.  We have not
figured out whether we want one day and one site, or one day and two
different sites, or two days in two different sites if we were to go
with the field testing.  

		We haven't decided whether we want two different labs, if we were to
do it in different labs.  So there are a lot of questions that are still
open. And as I tried to indicate in introducing the third charge
question, these things connect in that we have to think through what are
the expectations of consumers, how best to communicate that information
to them, what level of certainty or uncertainty can we tolerate in terms
of understanding and analyzing the data.  

		How does all of this play out in terms of cost, and how are we going
to make the transition if we do decide to adopt a dramatically different
approach, given that we already have a large existing universe of
registered products with registered labels.  And how do we -- you think
your workload is heavy now, just imagine retesting all of these
pesticides using a more standardized approach.

		And can we, as Dr. Schofield, I think it was, pointed out, can we
group products together, and what are the characteristics that permit us
to group products, say, by concentration, or perhaps formulation types,
and the carriers that may affect dermal absorption. 

		So we are working on it.  As John's boss said, it does take time, and
we haven't sorted those things out, but we are learning a lot from both
these discussions, and others that we've had.

		CHAIR FISHER:  I want to use Lois' imaginary guide, and just point
out, I do want to kind of begin to go to a summary of this morning,
because we have a very long day.  And so I guess what I would like us to
discuss, and maybe only give five minutes to this, I think we've learned
a lot.  

		It seems to me what we are hearing is that, in terms of questions two
and three, two being intermittent versus continuous, that is something,
obviously, that is going to be in the realm of EPA to be considering,
and then they are considering this, this new kind of multiple versus
single landing is another issue that I think ends up to be beyond what
we can be thinking about, because it has so many implications for the
way that products will be, and have been in the past evaluated.  So I
guess what I would like us to talk about is what are the implications? 
There were a lot of variability issues that we learned about when we
talked about question 1.

		We do now have some information on intermittent, continuous, and on
multiple versus single, but is there anything that this means to us as a
committee in terms of, not criteria, but the type of information and
justifications that we would be requiring, or is there not enough there
that we are kind of in the same position?

		Jan.

		DR. CHAMBERS:  I think things are in enough of a state of flux right
now that I don't --

		CHAIR FISHER:  I'm sorry.  We can't discuss this yet.  I forgot the
public comments.  Thank you.  Sean reminded me.  We shouldn't make any
conclusions until we hear the public. So save your response to my
question.  I'm sorry.  Thank you for reminding me.

		Please introduce yourself.  Tell us who you are affiliated with, and
also remember there is a five minute limit.  And I assume you're together.

		DR. OSIMITZ:  We are.

		DR. CHAMBERS:  Okay. So five minutes.

		DR. OSIMITZ:  My name is Tom Osimitz, consultant to the DEET task
force.  And this is Keith Kennedy, entomologist, also a consultant to
the DEET task force.  And I wanted to just spend a few minutes.  I think
this is very relevant to what you were talking about this morning. 
First of all, thank you for the opportunity, and we enjoy the
conversation very much.

		One of the most important things that we see -- next slide, please. 
The DEET task force, by the way, consists of the major manufacturers and
marketers of DEET.  The project team we have here that is working on the
whole issue of learning from experience of what we know about DEET and
insect repellency includes the DEET task force, myself and Keith, Larry
Holden, and Bob Sielken from Sielken and Associates.

		Next slide, please.  The key element I want to mention today is that
we have put together a program over the last six months to look at,
comprehensively, the literature, both publicly available, the gray
literature, industry studies, etc., as they relate to DEET repellents.  

		The focus is on DEET repellents, but still, what we are finding in our
initial efforts, is that there is a lot we can learn.  And I think they
can inform, not just the EPA with regard to repellent guidelines, but
also some of the considerations that you are undertaking with regard to
human subject protection.

		Next, please.  Actually, back up one more.  Okay, next please. 
There's three questions that we are looking at in this analysis, and we
don't have answers yet, but I wanted to at least get in front of you
today and let you know that this is where we are headed, and give you an
idea of how we are focusing this effort.

		Again, as manufacturers of DEET containing products, the DEET task
force is focused clearly on data that can help, not only the EPA decide
what should and shouldn't be registered, but the associated claims and
communications to the public.  

		So this is not a basic research effort, but really is taking a look at
studies.  Some of them are basic studies that have been published, but
also studies that have been submitted over the last 15 to 20 years
specifically to support registration of actual products.

		Three questions.  What is the relationship between DEET concentration
and protective time, and is it possible to develop some predictive
model?  Second of all, assuming there is a simple relationship, is there
a variation, and how can we characterize that variation? 

		One of the things we heard today, when you're talking about two hours
or four hours of protection, we clearly know there is no such thing as
two hours of protection.  Not even for one individual.  Over time, that
is going to vary.  We are trying to characterize that variability both
per individual, if possible, but certainly across individuals.

		And finally, can the protection time versus DEET concentration that we
obtain in laboratories actually be used to predict some kind of a
relationship -- don't know what that is going to look like yet --
between what you see in the field, especially, at least for some of the
more common species that have been tested extensively.

		The outcome of this, we would like to see something similar to what
Dr. Schofield mentioned takes place in Canada where there is essentially
a monograph or a generic approach that says, if you have certain
concentrations of DEET products, DEET or other repellents, but again, we
are focusing primarily on DEET here, because that is where the richest
data are, is there a way, then, to generalize about what protection
times would be, again, given a range as opposed to a point estimate.

		The value of this, I think, especially from a human subject standpoint
is obvious, because I think we could reduce or eliminate the need for
the extensive human testing that might be required if the agency moves
to new guidelines, and essentially wants to reset the system.  And
second of all, we think from an overall efficiency of resource use,
besides humans, both EPA resources and industry resources, this could be
valid.

		What have we done so far?  We have reviewed over 800 abstracts
relating to the efficacy.  We have actually looked at 49 papers in great
detail.  We have collected over 500 industry studies, reviewed about 370
laboratory studies, and now it's over 50 field studies.

		Next.  Keith will make a couple of observations about what we have
seen so far.  Again, we don't have statistics, but our initial
impressions at least we can comment on.

		DR. KENNEDY:  In the interests of time, I'll just hit the highlights. 
One thing I would note is that arm-in-cage testing is remarkably -- is
conducted remarkably similar across a lot of different studies.  There
may be some variation in number per cage and cage size and whatnot, but
it was surprising how consistent that was across all the studies that
we've looked at so far.

		Certainly in the last 10 years there has been more inclusion of
additional species besides Aedes aegypti, which is the standard test
model insect for lab studies, and I think that's been a good thing.  But
it was surprising how similar it was. Next slide.

		The field test, as you've heard today from the consultants here, there
is incredible diversity of how field tests are carried out.  And I have
read papers, but trying to tease out information on how a researcher
conducted a certain study, and did they list the biting rate, and where
did they list it, and how did they do it, and what was the control -
extremely frustrating, which says something about the need to standardize how we report our field test data, I guess, in the literature.

		Not a lot of similarities. Given that we have looked at about 50
studies, I was shocked, frankly.  And the issue of how they report the
data, whether it's complete protection time, or percent repellency was,
in some cases they calculated both, and in many cases, they only
calculated one.  So a lot of discrepancy there.  Let's just go to the
next -- go to the last slide.  We just did. 

		But just on testing, though, I would say standardization of some type
of field methodology, at least from guidelines, is definitely in order.

		DR. OSIMITZ:  Just as far as the next step of our analysis, we are now
currently working through the criteria to stratify the data, meaning the
criteria we are going to use to look for differences, and if we can
develop any models that are predictive, we'll do that according to study
type, laboratory versus field, methodology, genus and species, of course,
and then we'll look across formulations, as well, and particularly
looking at lotions, sprays, wipes, water based versus solvent based, and
regular and slow-release formulations.  

		So this is going to be taking place probably over the next three to
six months.  It's a monumental task, but we think the information from
here can one, give you we hope more confidence in understanding the
difference between field and lab studies, whether there really is a lot
to be gained by trying to hone in on what is really an imperfect test
system, and probably always will be.  

		And I think more important, when it comes to informing EPA's view of
how they should approach future testing, we might be able to realize
tremendous efficiencies, and especially reduce the need for human
testing.

		DR. KENNEDY:  Just one more comment, Tom.  One thing I would like to
second that Steve mentioned, and if you look at all the data that has
been accumulated on DEET that has been published or not published, it is
remarkable how similar, from a macro view, the protection times are.  It
is a fairly robust molecule, actually.

		CHAIR FISHER:  Thank you.  Questions?

		Jan.

		DR. CHAMBERS:  I applaud that effort.  I think it will be very useful
in the future.  That last statement that you just made, would you
elaborate on that just a little bit, please?

		DR. KENNEDY:  If you look at -- we have a question in there about the relationship between DEET concentration and protection time, and while we haven't completed the analysis, just a cursory look would tell me what I already suspected, that there is a pretty strong relationship.

		DR. OSIMITZ:  And part of what we hope to do, Dr. Chambers, is
characterize how uncertain that is.  So even though we know there are
differences in similar formulations, and some of these studies have been
done at different times, in the end, when it comes to communicating to
the consumer, is something such as protection from two to three hours,
or 90 minutes to three hours, that may be as good as this system will
ever do.  

		And if that is acceptable from a consumer and EPA standpoint, then the
need for additional testing probably doesn't make sense.  We don't know
that at this point.  The one thing we have learned from a lot of
statistical analysis, and working with Sielken and Associates in
particular, is that these systems are not perfect.  

		But to say that is not enough.  We really have to characterize how
imperfect they are, so it can give you an idea of the kind of precision
that couldn't be derived from additional testing.

		The last comment, just to let you know, we do plan on sharing this
with the EPA in great detail in whatever format they want, and the
eventual intent is to peer review and publish this, too.  So thank you
very much for the chance.

		CHAIR FISHER:  Thank you.  Are there any other questions?  Okay. 
Thank you.  We are looking forward to seeing your products.

		Are there other public comments?  Dr. Carroll, please introduce
yourself and your affiliation.

		DR. CARROLL:  This is Dr. Scott Carroll from Carroll-Loye Biological
Research in Davis, California.  And I reserved this slot in case I had
something to add based upon board commentary.  I wasn't certain there
would be a lot after the expert commentary today, but a few things have
come up that, based upon my very practical experience, I may be able to
contribute.

		One is that, as I indicated in one of my first appearances before you,
when I came to the field of repellent science, I discovered, as was just
alluded to, that it is, in general, not a very strong body of work.  The
brain power aimed at the questions right now is greater than it has ever
been convened before in this room this morning.  

		And I think three of the five major quantitative scientists in the
field in the last half century are sitting at this table with us here. 
So I think there is prospect for gain, but issues have come up about
sponsor motivation, and about difficulty in controlling what happens in
the field that I would like to address.  

		And let me do that by focusing on the question of relative protection, which comes from continuous sampling, or at least in the studies that have mainly been performed and recorded comes from continuous sampling, versus the intermittent sampling, the interval sampling, that is present in my protocols and others that are presently being put forth.

		As of five years ago, I argued before an SAP on this subject convened
by EPA that doing continuous sampling and using the relative protection
measures was much stronger scientifically, much stronger statistically. 
There was a great deal of information that was being lost by relying on
CPT.

		And we didn't even talk that much about intermittent sampling because
it seemed a much weaker approach.  And I commented at that time that I
could only see its relevance for studies in which disease concerns were
either major or paramount.

		So all studies that I did before the new era - we need a name for this
new era, probably a logo - were always continuous exposure, and looking
at relative protection.  My very strong impression from speaking with
various EPA officials at the federal and state levels was that such
protocols would no longer be permissible given the renewed, or the new
emphasis on risk to subjects.  

		And so I crafted, for the first time in this new era, protocols that
use an exposure interval that I thought was relatively precise if we had
10 subjects, for example, giving us data points every 15 minutes.  But
some of the limitations there, especially statistically, I think, are
obvious.

		There was a suggestion, both implied by Steve Schofield's
presentation, and by a comment from a board member that, "Gosh, that's a
neat way for a sponsor to design a study if they want to maximize what
they generate as the summary data for complete protection."

		Two things there.  One, just from my practical experience -- well,
both from my experience.  The first is that, I know sponsors are very
concerned about what they get to put on the label.  I have never had a
single discussion, in the 15 years since I initially drifted into this
field as an adjunct to my primary field, with the sponsor where there
was any -- it has never been a point of discussion regarding study
design.  

		No sponsor has ever brought that up.  And almost all my sponsors have
been very keen to know whether or not the study would show that the
product didn't work.  That has been the emphasis.

		Also, since adopting the one minute per 15 minute exposure regimen, we
require an ambient biting pressure for the study to continue, of one
bite on an untreated control arm during that exposure interval.  Or I
think the protocol now says in 90 percent of those exposures.  

		To achieve that, especially for a test that may last 10 hours or more,
we are actually working at higher ambient biting pressures in the
environment than ever before.  Does that make sense?  In order to -- if
you are standing out there all the time, we have always wanted one bite
per limb per minute, and what I used to do was do summations every five
minutes.  

		And if you are present in the environment, and we are always active,
we are moving, and there are enough mosquitoes around, you will find
five bites on a limb within a minute.  That's our criterion for
sufficiency.  But if every 15 minutes you are just putting your arm out
for a minute, it may -- well, mosquitoes move around you, they are
attracted to your body.  You may have several mosquitoes on your Tyvek
suit, but not one on your arm.  They are getting closer to it.  I find
it requires a higher ambient biting rate, so it does not necessarily
result in a longer measured mean complete protection time.  

		I don't know how it does influence it.  Your back-of-the-envelope
model was interesting, but that is a countervailing factor that I am
experiencing.  We have to work at higher ambient biting pressures. 

		So just to return to a focus that Dr. Strickman brought up, I think
the question is what we have to emphasize.  And Dr. Chambers also
brought up that it is out-of-control in the field, and how can we
predict anything.  And it has to do with effect size, as Dr. Kim raised.  If we are looking at treatment versus nontreatment, control versus treated, that effect will be very, very strong.  Site effects, date effects, conditions, all of those will be important.  They may be relatively inconsequential in terms of their impact on relatively unambiguous results from treatment effects, so whether we label something for seven, eight, 10 hours of protection, as in the DEET studies we are looking at today, site effects and date effects are relatively inconsequential.

		CHAIR FISHER:  Thank you, Dr. Carroll.

		Any questions?  Okay. Thank you very much.  

		Any other public comments? Okay. Thank you.  Now to resume our kind of
conclusive discussions, Jan, and then Rebecca also wanted to --

		DR. CHAMBERS: Well I think what I was going to say earlier, if I
remember, was that this obviously is in a state of transition right now,
and it looks like EPA is going to come up with some new guidance
sometime in the future based on all the available evidence.

		It seems like what we have to do as a board right now is just deal
with the things that are being presented to us, and look at the science
and the ethics of those particular things.  

		I don't think we need to be making a whole lot of suggestions about
future directions and everything, as much as we may feel like we want
to, because I think that is being handled in the best way possible by
EPA, with the advice of your expert panels, for instance, not this naive
group who has to  bring in experts to find out about this particular
subject matter.

		CHAIR FISHER:  Rebecca.

		DR. PARKIN:  Thank you.  I wanted to go back to the issue around
purpose.  And I am, again, totally new to this issue, so I need a little
bit of education here.  Is the purpose clear in the regs as to why these
studies are being done?

		MR. JORDAN:  I'll take a shot at trying to answer the question.  EPA
has regulations which are focused on data requirements, and it specifies
that we require appropriate data -- it does not say what those data are
-- to support claims regarding the efficacy of insect repellents.  

		And we do that because consumers really can't evaluate that in
advance, and we think it's important for consumers to be able to
understand something about the efficacy of the products they are buying,
so claims on those products must be supported by data.

		Guidelines are not regulatory requirements.  They are suggestions.  But operationally, people give them considerable weight, and very frequently hew quite closely to the methodologies described in the test guidelines.

		The test guidelines themselves, in the latest version that was
reviewed by the board, indicate that we are interested in protection
time measured from point of application to efficacy failure.  The
guidelines, as currently drafted, specify bites, but we have been moving
in response to the board's direction toward landings as an indication of
failure of efficacy.

		That's what is said, and out there in the public domain.  In our
explanations and our discussions with companies, with you, and in other
settings, as well, we have emphasized that what we are trying to do in
terms of pesticide labeling is to give consumers a sense of the relative
protection that they will get from buying one product versus another
product through putting on the product claims related to the data
derived protection times.

		DR. PARKIN:  That brings me then to my next question, which perhaps
will be more of a comment than a question, because I think I already
know the answer to the question. And that is, when you talk about this
domain, I'm hearing appropriate data to support claims, to help
consumers understand -- this goes back to that document we looked at
yesterday -- without doing empirical research on what consumers want to
understand, and how they need to get that information so they can
understand, to make the appropriate decisions between products.  That's
a different domain of research than I believe your office is involved
in.  Is that correct?

		MR. JORDAN:  That's correct.

		DR. PARKIN:  So I'm wondering if there is some value to considering,
as you move forward in this process of deciding what kind of outcomes or
what kind of labeling changes you'll be making, whether it would be
reasonable to conduct some research that would be appropriate to these
labeling issues, which other agencies have done, and other researchers
have done, so there is a body of literature you could build from.

		MR. JORDAN:  We will certainly think about that.

		DR. SHARP:  Returning to the question that you raised about sort of
what does this mean in terms of what the board might do differently in
the future, or how that might change our request of investigators
submitting protocols and so forth.  

		Personally, I would find it very helpful if there was a section of
these protocols that was more specifically focused on the rationale for
conducting field studies, and essentially made that case that there is
some needed data here that can only be generated through that particular
study design, as opposed to a more laboratory based study design.  

		I think it affects, not only our interpretation of the science that we talked a lot about today, but also our interpretation of some of the ethical challenges embedded in some of these protocols.

		Because if you can generate more or less the same types of data in a
more controlled environment, a safer environment for subjects, I think
that's important.  So my recommendation, I guess, would be to have some
section of those protocols explicitly address the need for a field
study, as opposed to a laboratory based study. 

		CHAIR FISHER:  Mike.

		DR. LEBOWITZ:  A major conclusion for me given our responsibilities to
look at studies from a scientific and ethical perspective is that the
discussions we've had today provide the very important framework for us
in which we can actually understand and judge the studies that have been
completed in terms of their methodologies, results, etc.

		This is the most critical thing in terms of providing advice to EPA
vis-a-vis what is appropriate in terms of design and data that can be
used for their purpose to protect the human population vis-a-vis these
repellent products.  

		I think the discussion we had this morning has, in fact, for me provided that basic framework, which I can now use in assessing and evaluating all the future studies that we have to look at, starting with those that we have to look at today in this field.  I am very glad we have had that discussion.

		CHAIR FISHER:  Jan.

		DR. CHAMBERS:  I have two questions, one for Bill.  Really you were addressing Richard's comment.  At the current time, is a field test required by the guidelines?

		MR. JORDAN:  In the sense that guidelines are guidance they are not
required, but operationally companies always do them and we pretty much
always expect them.

		DR. CHAMBERS:  The other question is for our three consultants, because the question I've had for months now, and I don't think I still have a precise answer, is do landings necessarily result in bites?  I'm getting the impression from you all that they do not.

		DR. STRICKMAN:  No.  A mosquito can land without biting obviously.

		COL. GUPTA:  That assumption is correct.  Most of the time when
mosquitoes land they are probably going to probe multiple times before
starting to bite.

		DR. CHAMBERS:  But, again, not being an expert in mosquito behavior
and trying to avoid them as much as possible, I don't really look at
them closely.  In the context that we are seeing here if it looks like
it's going to probe and bite, then it will?

		DR. STRICKMAN:  Mosquitoes are like any other fly.  At these high
densities we're talking about you can have a mosquito land on you just
because it's landing on you and it's quite common to have like a gravid
mosquito that is not going to take a blood meal land on you, you know,
just appear with no intention to bite.

		As far as determining whether it had an intention to bite or not, I
mean, I wouldn't want to have that task myself without actually being
bitten.  I mean, that's usually what you see is the mosquito starts to
probe and you can see in the mouth parts.  

		I don't know if you want this, but the sheath separates from the stylets that are actually inserted, and you can see that at the very start of the probe, so there is the ability to do that, but that is more than landing.

		That's a probe and in terms of risk to the subject, that's as much
risk as a blood meal, at least for viruses.  That is in the literature
that the virus is transmitted at the first probe.  I don't know if
that's helpful.

		CHAIR FISHER:  Okay.  So just to conclude, the thing that I think we
could perhaps be looking at, which we are, but I think have come -- in
some sense it's questionable about whether we should look at is some
rationale for the sample size including the outcome measure that's used
as well as how many treatment groups and control.  I think we are
interested in that and sponsors could be providing that information.  

		Why a field study?  I understand there's also constraints and
requirements for field study but if there is information about contrast
with lab studies, it seems like something that would be helpful for us
and feasible for us to hear about.  Why the specific environment was
selected and how the two environments differ and why two different
environments were selected I would be interested in hearing.  

		How one is controlling either statistically or methodologically for
environmental shifts in temperature and daytime and things like that,
these are things that we know.  

		We understand that there is a smaller sample size and some of the
procedures that are being used lack the power we would like to see
because of this balance with subject protections but it still is
important to understand which of these kind of extraneous factors that
affect mosquito behavior are going to be looked at, controlled, or at
least considered.  

		Whether or not there is a balancing in terms of expertise between the
control group and the treatment group -- because we were presented with
some evidence that the better trained you are, the more likely you are
to be able to detect the landing -- and also the activity of the
subjects: that is just the information that would be important for us to
look at. 


		Those are just some of the things without providing the types of
guidance on some of these other issues that we really don't know what
there is and I don't know if there is anymore that people are thinking
about.

		Okay.  It sounds like we can --

		DR. JOHNSON:  Can I ask one of our consultants one more question?

		CHAIR FISHER:  Sure.

		DR. JOHNSON:  Still don't know that I understand exactly -- this is
for Dr. Gupta.

		CHAIR FISHER:  Dr. Gupta is not here.  Let us do move on because we
are so packed and we really do have to get to the next phase where your
question may come up at that point.

		John.

		MR. CARLEY:  Before I proceed, I would like to point out that today is
the 153rd anniversary of the Charge of the Light Brigade.  With their
example in mind, we are going to charge prudently forward into this next
topic which is EPA's review of two completed Carroll-Loye field tests of
mosquito repellency, one comparing three slow release DEET repellents to
the U.S. military standard and another testing all by itself a
repellent, the active ingredient of which is oil of lemon eucalyptus.

		This presentation is unusual as compared to those we have done before
because we are combining our discussion of the two studies.  The
organization will be a little more complicated.  I'll go first.

		When we get to the science assessments most of you are pretty familiar
with the Carroll-Loye designs but there are two new members of the group
and we've got the three consultants.  With them in mind we have prepared
to review with some detail the objectives and design of the study.  

		We won't do that again when we get to the protocol for a new proposal
when we discuss that later on today.  For those of you who are already
very familiar with this protocol, please bear with us.  We are just
trying to make sure everybody is on the same page.

		First a brief history of the SCI-001 study.  You reviewed this
protocol in January.  Scott Carroll executed it this past July.  The
results of field testing were recorded separately for each of the three
test formulations, two of which were registered products and a third of
which was neither registered nor the subject of an application.

		Although there was only one protocol and although it was only executed
once, the results were submitted in three separate volumes each one
reporting on only one of the three products and each relying on the same
positive control data and on any given day on the same untreated
controls.

		On September 18th EPA asked for a supplemental description of the
composition of the third test material, the one that was substituted
just before execution and an explanation for how it came to be
substituted for the material described in the protocol.  We included Dr.
Carroll's response to that inquiry in your background package and we
take that into account in this presentation.  

		Our assessment was that the study reports taken together with that
supplemental material meet the standard of completeness defined in the
rule at 26.1301.  Because this study was initiated after the effective
date of the rule, the completed study has to be reviewed by the HSRB so
here we are.

		For the other study, WPC, this was reviewed by the HSRB in April of
this year and also executed in July.  It reports testing of a single
conditionally registered repellent product containing oil of lemon
eucalyptus.  In this case on September 13th we got back in touch with
Dr. Carroll and asked for an accounting of which subject signed which
versions of the consent document and when they did it.  

		That is the supplemental material that we are referring to in the
third bullet.  Again, we think that the report plus the supplemental
material meets the standard of completeness in 26.1303.  Again, this is
post-rule research that has to be reviewed by the HSRB.

		The two studies were conducted by the same laboratory and used
essentially equivalent recruiting strategies, tapped the same pool of
subjects.  Many of the subjects participated in both studies.  Both
studies had very similar objectives and followed essentially the same
experimental design.

		Perhaps most important both were executed concurrently at the same
sites sharing the same untreated controls along with some of the other
data involved in the study.  That is why we are presenting them
together.  We have posed separate charge questions for each of the two
studies but you will see that fully understanding the conduct requires
that you appreciate the interdependence of their execution.  

		There are some important differences between the two studies.  You
need to keep these in mind.  First, on the dimension of complexity. 
SCI-001 was a little like some of the studies Dr. Gupta described
earlier.  It involved multiple materials, one positive control, if you
will, the military standard product, and three test materials.

		WPC-001 had only a single test material.  With respect to recruitment
of controls and consent forms the differences are really a reflection of
the different times at which these protocols were drafted and reviewed
by this board.  The earlier one, SCI-001, does not have an expanded
description of the recruiting process for untreated controls and uses
one consent form for both classes of subject.

		The more recent one, the WPC protocol, was amended after the HSRB
meeting to expand the discussion of the recruiting process for untreated
controls and it does use separate consent forms for treated and
untreated subjects.

		Looking at the calendars, first a caveat.  There is a lot more detail
in my ethics review about the calendars.  I'm just getting high points
for this presentation.  The SCI protocol was revised in December and
EPA's review was done a week later.

		Then in response to that
review there were some further revisions made by Dr. Carroll to the
protocol and consent document.  Those revisions were approved by the
IIRB in early January and the protocol was reviewed by this board in
late January.  

		This board had the December 14th version of the protocol, our review
December 20th, and the further revisions as reviewed and approved by the
IIRB.  Our presentation to the board in January reflected those
revisions.  

		It was essentially the December 29th version of the protocol that the
board considered.  The report of the January meeting didn't come out
until mid-April and there were no other events in that intervening
period.

		For the other study the early history was a little more
straightforward.  There was only one generation of the protocol that
came in in mid-January.  We reviewed it in mid-March.  It went to the
board in mid-April and a couple of months later the HSRB issued its
report.  

		Then immediately in the wake of the HSRB report a round of amendments
to the protocol and revisions to the consent documents was conducted and
went forward to the IIRB.  Those amendments were approved on 19 June
but with some calls by the IIRB for further changes.

		That resulted in further change to the consent documents on the 4th of
July, approval by the IIRB on the 10th, but with still a little bit left
to go.  Final revisions to the consent for untreated subjects on the
12th of July approved by the IIRB the following day.

		Now let's look at the calendar for the execution phases.  I have put a
calendar strip at the bottom of this slide, a few selected dates in
June, all of the days in the first half of July.  It reads kind of from
the bottom up.

		On the 30th of June the sponsor asked for a change to the list of test
materials substituting LipoDEET 3434 for the previously proposed
material that we had reviewed, that you had reviewed that was stated in
the protocol and described in a consent document.

		The protocol was amended to make this change on the second of July. 
Then the study is reported to have been initiated on the 3rd of July. 
On that same day subject limb measurements were begun.  They went on for
10 days.  Dosimetry testing was begun and that lasted for three days.

		Then the field testing phase began on the 7th of July and ran through
the 15th, six different days in two different sites, three at each.  For
the other study the pattern is a little bit more complicated largely in
this presentation because I have included more information about the
consent documents that is relevant to this study.

		June 13th, the HSRB report.  The next day, amendments to the protocol
and consent documents.  Five days later, approval by the IIRB but with a
call for some further refinement.  Then we see the
further amendments on the 4th and the 12th and IIRB approvals on the
10th and the 13th.

		The treated subjects are reported in the supplemental materials to
have signed the consent of July 10th during the time range from the 10th
to the 12th.  The control signed the consent of July 10th on the 11th
and the consent form of July 13th on July 16th.

		Now it gets a little bit unclear.  Subject limb measurement extended
from the fourth to the 11th but the study wasn't reported to have been
initiated until the 10th, the same day dosimetry testing began.  Then
there were three days of field testing on the 12th, 13th, and 15th.

		If you put these two together, you can see that the study initiation
was staggered by a week attributable to the delays in approval of the
revised consent documents.  The subject limb measurement overlapped
almost completely.  The dosimetry was completely independent and three
of the six field days overlapped.

		The discrepancies in the dates with respect to the signing of the
consent is something I'll talk about more later when I present my ethics
review.  To really understand what happened we have to look at the level
of individual subjects.  In this presentation each column represents a
specific subject.  

		The days are July 7th for the top row, the 8th for the next row.  Then
this little white line is the reminder that the days were not continuous
between the 8th and the 12th.  Then we have four more days during the
second week of testing.

		The color indicates whether people were untreated controls or which of
the four materials in the SCI-001 they were treated with or the dark
blue is the WPC-001 study.  This shows all of the six days of field
testing.  It was in Butte County on the 7th, 8th, and 15th and Glenn
County on the other three days.  

		The four materials in SCI-001 were tested on all six days.  OLE was
tested on only three days.  Looking at July 12th and following across,
this and these two subjects were treated with LipoDEET 302.  The four in
green were LipoDEET 3434.  The light blue four is Coulston's Duranon. 
The four yellow ones are the Ultrathon and the four dark blues are the
oil of lemon eucalyptus.  The purple is the controls.  

		Note that on two of the six days the controls were identified only by
function and not by subject numbers so there are some analyses that we
can't follow all the way through because we are not sure which subjects
those actually were.

		Where there is a stack of different colors on different days that is
an indication that the protocol exclusion of subjects who used the
repellent on a previous day was violated.  Wherever that pattern occurs,
as long as it doesn't cross this white boundary line, there is the issue
of violation of that provision of the protocol.

		The day on which it was recorded as a violation of the protocol that
the subjects failed to maintain interpersonal distances prescribed was
this day.  It was the most complicated day where all five materials were
being tested.

		There are two key questions that come out of this.  These are the
protocol specific questions associated with the methodological issues
that we discussed earlier this morning.  The first of these is the
exclusion factor that I have already mentioned.  The second one has to
do with the amendment to add this material that was not previously
described through an amendment that was not reviewed by the IIRB.  

		Depending on how one interprets the significance of that issue, you
can see by how many green spots there are how many different subjects
were affected that it could potentially have a broad impact on what is
left.

		Looking at the next frame, this is just for people who like numbers
better than pictures.  We've got the two sites, Butte County, which is
the sum of the three unshaded days, and Glenn County, the sum of the
three shaded days.  We have the repellents across the top and this shows
the number of subjects treated with each material on each date.  

		The total number for the day ranged from 13 on July 8th to 23 on the
15th with the exception of oil of lemon eucalyptus on the 15th.  In
every other case in order to reach the sum of the design sample size of
10, you have some smaller samples from different days.  

		That led us to another protocol specific question which has to do with
whether you can sum partial results across multiple days in the same
habitat to comprise the design sample size, what implications this might
have, and is it likely to affect the results.

		Now, to illustrate the impact of the question concerning the violation
of the protocol prohibition on testing people who had been treated the
previous day, on this slide I subtracted all of those stacked instances.
The only case where there is a stack is the untreated controls, and here
where there is a gap of several days between the two test days.

		What happens?  This is illustrative.  This is not a recommendation but
if we were to drop out the data points, the treatment days that violated
that protocol constraint, we would have left only 54 of the original 100
treatment days.  That is 20 for each of the five materials.

		In summary form you look across these stacks here and we still have
the problem of pooling partial results from multiple days to comprise
the sample size but the overall number of samples left for each case is
quite a bit smaller.  We've got -- they are marginal at best.  I'll just
leave it at that.

		A different illustration here.  What I did here, you will see there is
still the salmon, and the green disappeared because of the columns in
this Excel spreadsheet that I suppressed.  I didn't notice it until this
morning, but there should still be a pink key down here and there is no
green.  

		What I did here was I dropped out all of the subject columns that had
a green square.  Those are the people who were treated at least once
with the LipoDEET 3434.  What happens here is that you lose 66 of the
100 treatment days.  There are two days, or three days rather, where
there is only one untreated control left, although we can't be quite
sure about that because we are not sure who these two guys were.  

		Those are the ones that were identified by function rather than by
subject number.  Moving on to the next frame, we've got really tiny
numbers down here.  Still the issue about pooling results across days.

		If you put both of these factors together depending on the sequence
with which you delete things, you can get some interestingly different
accounts.  This is, again, just illustrative but if you exclude all of
the subject days, the treatment days that were affected by either of the
two conditions of concern, there is not much left at all.  

		When that is summarized numerically, this is what you get.  The
biggest sample size is three, and they are unacceptably low, and there
is still some pooling of data across multiple days in a couple of cases.

		On the 19th of this month Dr. Carroll responded to our science and
ethics reviews, my ethics review of both studies and there were separate
science reviews done by Kevin and Clara.  You should have those
letters.  They are in the public docket.  

Dr. Carroll will make some comments of his own later on.

		To summarize our general concerns, the two studies were very deeply
entangled in their execution.  Many subjects were treated on successive
days violating the protocol exclusion criterion that was in both
studies.  The unreviewed amendment to SCI-001 to change the test
material potentially affects about two-thirds of all the data.  

		We are not sure what to do about the issue of pooling partial data
from different test days at the same site.  We did not expect that when
we looked at the protocol. Kevin will talk about that more in a bit.

		Forty-six of 100 treatment days violated the exclusion criterion. 
Sixty-six of 100 involved subjects who were treated with LipoDEET 3434
and 86 of the 100 treatment days involved one or the other of those
problems.
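The accounting described in this summary can be checked with a short illustrative sketch.  Only the stated counts (100 treatment days, 46 exclusion-criterion violations, 66 LipoDEET 3434 days, 86 with either problem) come from the presentation; the overlap between the two problem sets is inferred here by inclusion-exclusion.

```python
# Illustrative check of the treatment-day accounting described above.
total_days = 100           # total treatment days across the study
violated_exclusion = 46    # days violating the repeat-use exclusion criterion
lipodeet_3434 = 66         # days involving subjects treated with LipoDEET 3434
either_problem = 86        # days involving at least one of the two problems

# Inclusion-exclusion gives the number of days affected by both problems.
both = violated_exclusion + lipodeet_3434 - either_problem
print(both)                              # 26
print(total_days - violated_exclusion)   # 54 left after dropping exclusion violations
print(total_days - either_problem)       # 14 left after dropping either problem
```

The 54 remaining days match the figure quoted earlier in the presentation, and the 14 days spread over five materials is consistent with the "not much left at all" characterization.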

		Here we are.  That is the stage setting.  Kevin is going to begin with
this.  Kevin and Clara will take turns telling you about their science
assessments.

		CHAIR FISHER:  John, I wanted to thank you for that remarkable
analysis both verbally and visually.  Just really very appreciative and
impressed.  Thank you.

		MR. SWEENEY:  Okay.  I guess Clara and I are going to first describe
the general methods that apply to both protocols, or both studies in
this case.  Then we will both present separate science assessments on
each study.  I'll start this off and then Clara will follow.

		First the study objectives are similar to what we have seen in other
studies before.  I don't know that I really need to read this slide but
essentially we are looking at the study objectives here to test each one
of these materials in the field against mosquitoes in this case.  Then
we had a section in both studies where typical consumer doses were
determined.

		The dosimetry phase, of course, is quite similar to what we have seen
before.  In this case, though, lower legs were measured and skin area
was calculated and there was some variation of those measurements as you
can see from the data.  

		The dosimetry phase established a typical consumer dose of each test
material for use in efficacy testing.  In SCI-001 each of the 10
dosimetry subjects applied each of the four test materials three times
to each leg.  In WPC-001 each of the 10 dosimetry subjects applied the
OLE pump spray three times to each leg.

		The grand mean of the subject means was calculated for each test
material and was used as the standard dose rate for that material in the
field.  Then
the standard dose rate was converted to an individual dose for field
testing by adjusting it to the individual subject leg area.  The leg
areas were different for each subject.
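The dose standardization just described can be sketched as follows.  The numbers are entirely hypothetical; the report's actual dosimetry values are not reproduced here.

```python
# Sketch of the dose standardization described above (hypothetical numbers).
def standard_dose_rate(subject_means):
    """Grand mean of the per-subject mean dose rates (mg of product per cm^2)."""
    return sum(subject_means) / len(subject_means)

def individual_dose(rate_mg_per_cm2, leg_area_cm2):
    """Field dose for one subject, scaled to that subject's own leg area (mg)."""
    return rate_mg_per_cm2 * leg_area_cm2

# Hypothetical dosimetry results for three subjects, in mg/cm^2:
rate = standard_dose_rate([0.42, 0.45, 0.39])
print(round(rate, 2))                      # 0.42
print(round(individual_dose(rate, 1000)))  # 420 mg for 1000 cm^2 of leg skin
```

The point of the second step is the one made in the transcript: because leg areas differed, each subject's field dose differed even though the rate per square centimeter was standardized.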

		In terms of the study design, subjects aspirating mosquitoes.  Again,
this is a standard part of the protocols that we have been looking at,
especially in Dr. Carroll's protocols.  Before participating in the
field studies, all the subjects -- or the subjects that were going to
aspirate -- were trained in the laboratory to aspirate landing
mosquitoes before they bite, using lab-reared pathogen-free mosquitoes
and handheld electric aspirators.

		Subjects in the field testing were equipped with aspirators and worked
in teams of two to watch each other for landing mosquitoes.  Untreated
subjects were attended by two technicians to assist in aspirating
landing mosquitoes.

		The field efficacy phase.  The way the treatments were described and
allocated was that 10 subjects were treated with each formulation and
two untreated control subjects participated in the field trials in each
of the two habitats.  

		Because of the expected long duration of efficacy the subjects were
treated with test repellents before traveling to the site.  That is,
they were pretreated and there was a time span before they were actually
exposed to mosquitoes in the field at the sites.

		Untreated subjects were monitored for mosquito pressure and treated
and untreated subjects were exposed to mosquitoes for one minute every
15 minutes until efficacy failure.  I think Dr. Carroll elaborated on it
a little bit more this morning.  I think John talked about the actual
layout of the study design in the chart shown previously.

		In terms of the sample size considerations, sample size was 10
subjects per treatment at each site for a total of 20 with two
concurrent untreated controls.  This was justified with arguments
considered previously by the board in reviews of Dr. Carroll's
protocols.

		With the board's comments from the previous meeting, and I elaborated
on this to some extent in my own review, EPA is reconsidering this
matter in a general case but our current position is that the sample
size of 10 was adequate and exceeded the general recommendations of the
guideline.  That is, therefore, what was done in this particular study. 


		I think I'll turn it over to Clara at this point to talk about --

		DR. FUENTES:  The variables that were measured in this study were the
skin surface area of the subjects' legs that were used for applying the
repellent; the weight of the test material applied to subjects by
dosimetry.  

		The mosquito landing pressure, at least one landing per minute on
control subjects; the time of all landings and the time of first
confirmed landing; and the efficacy, which was expressed as the average
time to first confirmed landing with its standard deviation and 95
percent confidence interval.  The median was also calculated using the
Kaplan-Meier survival analysis.

		The next slide, please.  The field sites.  The test was conducted at
two different sites, field sites, representing different habitats
according to the EPA guidelines to have the opportunity to test
different species of mosquitoes.

		One site was the lake-side grassland in Butte County, California.  The
other site was a forest habitat in Glenn County, also in California. 
The sites were numbered in reverse order for the two different studies,
SCI and WPC.  That is a little bit confusing.  It's better to refer to
the sites by habitat or the county instead of numbers.

		The species of mosquitoes encountered in the field were those listed
at the last bullet: Aedes melanimon, Aedes vexans, freeborni, and Culex
tarsalis.  The distribution of the species encountered in the field by
site and by date is represented in this table.  

		The percentages of each species encountered by day and site changed,
as you can see in this table.  The predominant species in Glenn County
was Aedes vexans, and at the Butte County grassland site it was Aedes
melanimon.

		The counts show the average percent of a species found per site with
the three days put together.  The population and distribution of the
mosquitoes varied quite a bit from one day to another.  For example, for
Aedes melanimon at the Glenn County site you see 3.7 percent changing to
13.9 percent.  

		Also, for Aedes vexans in Butte County there is 3.3 percent changing
to 15.8 percent.  Keep in mind that this occurred from July 8th to July
15th, so it is not one day to the next; there is a seven-day difference
there.

		This is the study to test the efficacy of a product containing oil of
lemon eucalyptus.  The study was conducted according to changes that
were incorporated in the protocol.  The dosimetry test, for example, was
conducted outdoors because it was a pump spray.  The data capture forms
were modified to allow recording of the exact placement of the
dosimeters on the legs.  

		The statistical procedures were revised and the Kaplan-Meier survival
analysis was incorporated in the protocol and used in the study.  The
rationale for sample size was not revised, and the premature withdrawal
of subjects, in case that would occur, was addressed in the protocol in
Amendment Section 9.1.312, exclusion criteria for subjects.

		Post-testing consisting of monitoring of collected mosquitoes was also
incorporated in the protocol and was also performed in the study.

		The dose that was -- this table represents the dose that was applied
to the subjects.  The standard dose was determined for the pump spray
product, and it was only 26 percent of the long-standing standard of 1
gram per 600 square centimeters for DEET lotions.

		The dose of active ingredient applied per square centimeter of skin
was 0.13 milligrams.  The mean total dose of active ingredient was 135
milligrams.  The first row shows the standard dose of the product that
was applied per square centimeter.  

		Then there is the active ingredient that is contained in the product
and the dose of it that is applied in milligrams per square centimeter.
The mean total of active ingredient applied was obtained by multiplying
that dose of active ingredient per square centimeter by the area,
approximately 1,000 square centimeters of skin, that was used for
applying the repellent.

		Then the dose applied to a 70-kilogram adult is 1.93 milligrams per
kilogram.  The MOE is based on the dermal LD-50 of oil of lemon
eucalyptus, which is higher than 2,000 milligrams per kilogram.
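The slide's arithmetic can be reproduced directly from the figures just quoted: 135 milligrams of active ingredient, a 70-kilogram adult, and a dermal LD-50 of at least 2,000 milligrams per kilogram.

```python
# Arithmetic from the slide above: dose per kilogram and margin of exposure.
total_ai_mg = 135.0     # mean total active ingredient applied (mg)
body_weight_kg = 70.0   # reference adult body weight (kg)
dermal_ld50 = 2000.0    # dermal LD-50 of oil of lemon eucalyptus (a lower bound)

dose_mg_per_kg = total_ai_mg / body_weight_kg
moe = dermal_ld50 / dose_mg_per_kg   # margin of exposure (also a lower bound)
print(round(dose_mg_per_kg, 2))      # 1.93
print(round(moe))                    # 1037
```

Since the LD-50 is itself a lower bound ("higher than 2,000"), the true margin of exposure is greater than roughly 1,000.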

		The landings with intent to bite that were collected and reported in
the study are broken down into different categories in this table.  The
unconfirmed landings were a total of seven.  The first confirmed
landings were 20, one per subject.  

		The confirmed landings were more than that.  They were 24 because
sometimes more than one mosquito landed simultaneously on the same
subject.  The total of all landings was 51.

		The reported protocol deviations, these are deviations that are
reported in the study, are the ones listed here.  The subjects didn't
cover untreated limbs between exposures because they moved to a screen
house that was available.  The practice rounds for the dosimetry phase
were reduced from three to one for most subjects.  Three replications of
the same application were deemed unnecessary, so it was reduced to only
one.

		The experienced subjects assisted in treatments of the other subjects.
At the time of application of the repellent on the subjects, other
subjects helped out to apply the repellent and that, in fact, helped to
synchronize the application.

		The treatments were applied before traveling to the test sites so the
treatments were applied in advance to the test.  The prescribed
distances were not maintained on July 12th and that has a significant --
it's important.  This observation led to our fourth protocol specific
issue identified earlier by Bill Jordan that if a minimal intersubject
distance is not maintained, could this affect the results.

		The next deviation is the temperature data, which was not recorded
accurately for three hours on July 12th; but that occurred in the first
two hours of the test, when most mosquitoes were not really landing on
subjects.  

		The dose for subject 13 was miscalculated by 0.01 milliliter.  All
these deviations were explained in the study report and reported not to
compromise the accuracy of the data.

		In this next slide there are a couple of unreported protocol
deviations.  These are the concurrent execution of the two studies,
WPC-001 and SCI-001, which was not acknowledged, and the exclusion
criterion that was violated by testing subjects who had used repellent,
and different repellents too, on the previous day.

		The complete protection time, the result of the test, appears in the
table: the mean complete protection time for each site and the mean
protection time for the two sites pooled together.  Also the median
complete protection time for each site and for the two sites pooled
together.  The median was generated by the Kaplan-Meier survival test.
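As an illustration of how a Kaplan-Meier median is read off the survival curve, here is a minimal sketch with hypothetical protection times.  This is not the study's actual analysis or data; it simply shows the estimator's logic for the median complete protection time.

```python
# Minimal Kaplan-Meier sketch: the median complete protection time is the
# earliest event time at which the estimated survival curve is 0.5 or below.
def km_median(times, events):
    """times: time of first confirmed landing (or censoring) per subject;
    events: 1 if a confirmed landing occurred, 0 if the subject was censored.
    Returns the Kaplan-Meier median, or None if the curve never reaches 0.5."""
    n_at_risk = len(times)
    survival = 1.0
    for t, e in sorted(zip(times, events)):
        if e:
            survival *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1
        if e and survival <= 0.5:
            return t
    return None

# Hypothetical protection times (minutes) for six subjects, all uncensored:
print(km_median([300, 330, 345, 360, 390, 420], [1, 1, 1, 1, 1, 1]))  # 345
```

With no censoring the Kaplan-Meier median reduces to the ordinary sample median; its value in the censored case is why the study used this estimator alongside the mean.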

		The study limitations are that the experiment deviated from the
revised protocol by testing more than one formulation simultaneously,
and different repellents were tested on the same leg the day before
testing the oil of lemon eucalyptus-based repellent.

		The data for different days was pooled for analysis without accounting
for additional sources of variability due to different dates.  The data
from the Glenn County site was collected on two separate dates, July
12th and July 13th, and pooled together for analysis.  That introduced
an additional source of variability associated with different days that
has not been acknowledged in the study report.  Finally, a clarification
is needed to verify the accuracy of the data generated by this study.  

		Now I will pass it to Kevin.

		MR. SWEENEY:  And I will present the science assessment for SCI-001. 
This is a field mosquito study with four slow release lotions all
containing DEET.

		Next slide.  First of all, in terms of test materials I think we have
talked about some of them before so I will make this brief.  This
protocol was reviewed by the HSRB in January and proposed the testing of
four materials listed on the slide.  We have data of course in the study
from three of those materials and I'll talk about that for a second. 
They are all DEET formulations and they are all lotions.

		Next slide.  At the request of the sponsor, as John has already
mentioned, Insect-Guard was replaced as the test material by LipoDEET
3434, an unregistered repellent product.  The initial submission was
inadequately
characterized in the report of its testing and the substitution was not
acknowledged in the reports for the other two test materials.

		In response to EPA's request LipoDEET 3434 was described as similar to
LipoDEET 302 but containing DEET at a slightly higher concentration. 
This was for the purpose of comparing it to 3M Ultrathon product which
also had 34 percent DEET.

		The rationale was also explained, and I'm talking about the first one:
Insect-Guard was no longer going to be marketed.

		Next slide.  In terms of dosing regime, I went through this in more
detail in my science review presented in table form and I'll present a
summary of that here.  Essentially I'll go row by row.  These are the
four repellent products that were tested: the first one 30 percent, then
34 percent, Duranon 20 percent, and then Ultrathon again 34 percent.

		The standard dose in milligrams per square centimeter is listed here.
They are pretty much the same, or fairly similar, for the first three,
with LipoDEET 302 being the highest and Ultrathon being the lowest. 

		In terms of the amount of DEET expressed in milligrams per centimeter
squared, again the lowest DEET application rate was with Duranon at 20
percent formulation compared to the highest for 302 which was also
fairly close to Ultrathon.  Excuse me, 3434 was the highest.

		In terms of the mean dose of AI in milligrams, again the dosages here
vary quite a bit between these three and Duranon, with Lipo 3434 being
the highest, Ultrathon only being 500 milligrams, and Duranon being, as
expected, a much lower dose.

		When you look at this in terms of the amount applied to a 70 kilogram
adult, we see significantly less DEET applied with the Duranon formulation,
and slightly less with Ultrathon than what you see with these two.

		Just to make a mention about the formulation.  I'm familiar with
Ultrathon.  I mean, this is a fairly viscous formulation and goes on
pretty easily, but it doesn't take quite as much material.  I think in the
table I talked about, as well as Dr. Carroll, the fact that only
three-quarters the amount of Ultrathon was used generally compared to
these other three repellents if you look at it on a 600 square centimeter
basis.
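The dosing arithmetic being walked through here, formulation dose rate times treated area, scaled by the DEET fraction, can be sketched as below. This is a minimal illustration only: the dose rates are assumed placeholder values, not the figures from the study table.

```python
# Illustrative sketch of the dosing arithmetic discussed above.
# All dose rates below are hypothetical placeholders, NOT the study's values.

AREA_CM2 = 600.0  # treated-limb surface area basis mentioned in the discussion

products = {
    # name: (DEET fraction, formulation dose rate in mg/cm^2) -- assumed numbers
    "LipoDEET 302":  (0.30, 1.0),
    "LipoDEET 3434": (0.34, 1.0),
    "Duranon":       (0.20, 1.0),
    "Ultrathon":     (0.34, 0.75),  # ~three-quarters the amount, per the discussion
}

for name, (deet_frac, dose_rate) in products.items():
    formulation_mg = dose_rate * AREA_CM2      # total formulation applied
    deet_mg_per_cm2 = dose_rate * deet_frac    # active-ingredient application rate
    deet_mg = formulation_mg * deet_frac       # total DEET applied to the limb
    print(f"{name}: {deet_mg_per_cm2:.3f} mg DEET/cm^2, {deet_mg:.0f} mg DEET total")
```

With these assumed rates, the 20 percent product delivers the least DEET per treated area even at the same formulation dose, which is the pattern described in the testimony.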

		Next slide.  In terms of the distribution of landings by repellent, I
think John presented an overview of this already.  I think what is really
important here is that we saw unconfirmed landings with all of these, and
especially with Duranon, at 14.  

		Confirmed landings as expected would be 20 and, of course, we saw even
more bites with these confirmed landings.  Essentially you have a
situation where you've got a total of 55 -- up to 63 bites, excuse me --
over 20 subjects for this formulation, and 55 for these two and then 49
for this one.  

		I think in my mind the thing that is really important is that we have
bites that are not confirmed and the testing is continuing.  I think it
is probably something that we talked about before.  Landings, excuse me
-- landings.

		Next slide.  In the science review I did I separated these out.  For
the purposes of just the illustration here I'm pooling everything and
that made this slide a little easier to read versus a much larger table
as presented in the review.  

		Essentially when we look at these products, and here is the active
ingredient concentration, when you look at the CPT time with the standard
deviations, you can see it is fairly similar for these three DEET
products.  

		Duranon seems significantly lower, although not greatly lower when you
look at the standard deviation values.  It's much more pronounced with
the median CPT values, with Duranon being significantly less than these
others.  

		Essentially, I mean, I think I pointed this out in my review, the
results are fairly similar between these three with the 20 percent being
significantly different, although not that significantly different
depending on the overlap between these confidence intervals.
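The comparison being described, mean CPT with standard deviations, median CPT, and whether confidence intervals overlap, can be sketched roughly as follows. The CPT values are invented for illustration, and a normal-approximation CI is just one simple choice, not necessarily the analysis Dr. Carroll actually used.

```python
import statistics

# Hypothetical complete protection times (CPT, hours) for two products;
# these numbers are illustrative only, NOT the study's data.
cpt_a = [5.0, 5.5, 6.0, 6.5, 7.0, 5.8, 6.2, 6.4, 5.9, 6.1]   # e.g. a 34% product
cpt_b = [3.0, 3.5, 4.0, 4.2, 3.8, 3.6, 4.1, 3.9, 3.7, 4.3]   # e.g. the 20% product

def summarize(cpt):
    """Mean +/- SD, median, and a crude normal-approximation 95% CI for the mean."""
    n = len(cpt)
    mean = statistics.mean(cpt)
    sd = statistics.stdev(cpt)
    half_width = 1.96 * sd / n ** 0.5
    return mean, sd, statistics.median(cpt), (mean - half_width, mean + half_width)

ma, sa, meda, ci_a = summarize(cpt_a)
mb, sb, medb, ci_b = summarize(cpt_b)

# Two intervals overlap iff each one's low end is below the other's high end.
# Non-overlapping 95% CIs are a conservative sign of a significant difference.
overlap = ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]
print(f"A: mean {ma:.2f} +/- {sa:.2f}, median {meda:.2f}")
print(f"B: mean {mb:.2f} +/- {sb:.2f}, median {medb:.2f}")
print("CIs overlap:", overlap)
```

With these made-up numbers the intervals do not overlap, which is the kind of check the testimony alludes to when it hedges on "depending on the overlap between these confidence intervals."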

		Next slide.  In terms of the study limitations, and I would point
these out, too, and I think Clara made some mention of this.  In terms
of some of the comments from the board on this protocol, the random
selection of treated limbs was mentioned.  

		In here the method used to choose which leg to treat was not reported. 
The data suggest the subjects who participated on more than one day were
treated on the same leg each time.  We did this just from evaluating the
data sheets themselves.  

		Many subjects were treated with different repellents on consecutive
days and this has already been mentioned so I'm not going to elaborate
on that.  Or the same repellent on multiple days.  Then data for the
same repellent from the field site but at different times on different
days were pooled for the analysis.  

		I guess this is a fairly significant point, of course.  I pointed this
out in my review and I think a little bit of discussion came up already
about this this morning and also in the tables presented.  What we see
here is we have four different treatments at least in this study.  We
have two different sites.  

		We have 10 subjects per site for a total of 20 per treatment, but the
data were collected over multiple days.  Then the data were pooled and then
pooled again to present the analysis.  I think that is significant
compared to, say, doing the treatments all in one day and then replicating
them on consecutive days at these same sites, just as another method.  We
also see it submitted to us over time.  Thanks.

		Then potential for dropouts.  This was mentioned by the board regarding
the analysis plan, although there were no dropouts among the subjects that
received confirmed landings with intent to bite.  Everybody in the study
who participated with the repellent applied to them actually went to the
failure time.

		The distribution was not analyzed for normality.  In this case the
sample sizes, as Dr. Carroll points out in the study, are fairly small, so
he did not apply the assumptions of normality to these data and
conducted multiple analyses.

		One other thing I mentioned, and Dr. Carroll addressed in his reply,
is that the potential impacts of date effects or time effects were not
assessed.  I think the argument was also mentioned this morning
whether the site effects were really adequately assessed.  

		Again, Dr. Carroll addresses this in his reply and I'll let him speak
to that in his public comment if he chooses.  He reports on it for only
one of the studies, where he had the discussion, and I think this was
Duranon if I remember correctly.

		Next slide.  Then at this point I'll hand it over to John for the
ethics assessments.

		CHAIR FISHER:  First I think we will take science questions if there
are any.  Remember for the purpose of time, let's limit board comments
here to questions rather than comments that we might make in the
discussion and critique of the study.  Are there any questions for EPA
about the science aspects?

		Dr. Kim.

		DR. KIM:  It was described that the placement of subjects at each
location each day was assigned according to a randomization method.  I see
on July 8th of 2007 the distribution of the number of subjects is two to
three to five to three.  My question is what kind of randomization method
was used to allocate the subjects?

		MR. CARLEY:  That was identified as one of the limitations of the
study that was not reported.  There was a slide that Bill didn't get a
chance to present earlier, the one that listed protocol-specific
questions.

		The question that came from that was does it matter how the limb to be
treated is selected and, if so, how should it be selected.  That is
something that we would welcome comments from the consultants about.
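As background to Dr. Kim's question, an uneven split like 2/3/5/3 is compatible with simple (independent-draw) randomization, while block randomization forces near-equal arms. A brief sketch under assumed arm labels follows; the study's actual allocation method was not reported, so neither scheme is claimed to be the one used.

```python
import random

def simple_randomization(n_subjects, arms, seed=0):
    """Independent draws: group sizes can come out uneven (e.g. 2/3/5/3)."""
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n_subjects)]

def block_randomization(n_subjects, arms, seed=0):
    """Shuffle complete blocks containing every arm: guarantees near-equal sizes."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        block = list(arms)   # one block = one copy of each arm
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

arms = ["A", "B", "C", "D"]  # hypothetical labels for four treatment arms
simple = simple_randomization(12, arms)
blocked = block_randomization(12, arms)
print("simple counts: ", {a: simple.count(a) for a in arms})
print("blocked counts:", {a: blocked.count(a) for a in arms})
```

The point of the sketch is only that an unbalanced daily count, by itself, does not tell you whether randomization was done, just which family of schemes could have produced it.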

		DR. JOHNSON:  You made a point of the Lipo 3434 taking some of that
data out.  I didn't see the rationale for why you were doing that other
than it wasn't part of the original protocol.  Was that the reasoning
behind it?

		MR. CARLEY:  First off, what I did was illustrative of the effect of 
one possible response to ethical concerns.  This is an ethics issue
about whether the study should have gone forward with this change in the
design that was not reviewed by an IIRB and it was not reflected in
revised information provided to the subjects so it is not really a
science issue.  

		All I was doing was saying here is a picture that shows you how few
spots are left on the page when you drop out the green people.

		DR. KIM:  I just want to mention that there is also science issues
with the interpretation of the data because of the potential carryover
effect of one treatment to the next day.

		DR. SHARP:  In the earlier review of this we got a chance to look at
the specifics of the other three formulations but we didn't see anything
about Lipo 3434.  Can you say a little bit more about that product and
how it compares in particular to Ultrathon?

		MR. SWEENEY:  I mean, well, first of all, the DEET concentrations are
the same.  In terms of the carriers, I mean, they are different and that
is proprietary frankly.  But, I mean, the carriers are similar to what is
in the LipoDEET 302 product.

		DR. PHILPOTT:  On that same sort of line of questioning, I'm just
curious, are there other Lipo formulated products out there, because one
of the things that I keep seeing in protocols is simply that
justification for a particular concentration is based on the notion
that there are other products with the same concentration.  

		I'm curious about what is known about the risk associated with the
formulation in liposomes or in the case of the other products, the
protein capsules.

		MR. SWEENEY:  I'm not a toxicologist but I'll attempt to answer your
question.  I mean, in terms of there's acute toxicity data for that. 
There has been data collected on liposomes.  

		I mean, that's a -- I don't know if it was really developed only for
-- I don't want to get myself in any hot water -- only for the use of
repellents.  I think the original patents are actually held by Johns
Hopkins if I'm not mistaken off the top of my head.  I think that has
been looked at.

		Have we conducted -- I guess if you're asking have we conducted a
separate toxicological evaluation to just look at that part of the inert
ingredient, I think we have considered that as part of our regular
analysis.  I think we have determined it's not that different from
anything else we've seen.  Does that answer your question?

		DR. PHILPOTT:  I think my question was more are there other products
that use this technology for delivering insect repellents, either
liposomes or protein capsules?

		MR. SWEENEY:  There are slow release formulations out there.  The
liposomes, though, are restricted pretty much to these products, I
think.  I don't know if there are any other registered off the top of my
head that are LipoDEET specific.  I'm not sure.

		CHAIR FISHER:  Dr. Gupta, did you want to enlighten us?

		COL. GUPTA:  Yes.  Just for information on the Lipo 3434 and
Ultrathon.  They are two different delivery mechanisms as far as I
understand.  Ultrathon is a polymer-based formulation.  

		In that case what happens is there are two different polymer layers --
the DEET is sandwiched between two polymer layers.  Whereas with the
liposomes, the active ingredient is actually inside a small bubble.
Liposomes could be complete or incomplete. 

		Earlier, 10 years ago, the reason the DEET wasn't -- liposomes were
not considered as the ideal delivery mechanism for DEET because it was
very expensive.  Since the advances in the technology, now you can
manufacture that liposome formulation DEET as cheap as the other ones. 
Basically what you are trying to do in both formulations is trying to
control the release rate of DEET.
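Col. Gupta's point, that both the polymer and the liposome formulations exist to control the DEET release rate, can be illustrated with a toy first-order release model. The rate constants here are invented for illustration and do not describe either product.

```python
import math

def remaining_fraction(k_per_hour, t_hours):
    """Fraction of encapsulated DEET still un-released after t hours,
    assuming simple first-order release kinetics (a toy model)."""
    return math.exp(-k_per_hour * t_hours)

# Hypothetical rate constants: a plain lotion releases quickly, a
# controlled-release formulation much more slowly.
for label, k in [("fast (plain lotion)", 0.60),
                 ("slow (controlled release)", 0.15)]:
    at_6h = remaining_fraction(k, 6.0)
    print(f"{label}: {at_6h:.0%} of DEET remaining at 6 h")
```

Under these assumed constants the controlled-release case retains far more active ingredient at six hours, which is the mechanism by which both delivery systems aim to extend protection time.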

		CHAIR FISHER:  Okay.  Thank you.

		Jan.

		DR. CHAMBERS:  Does EPA have any information about the kinetics of
these formulations or how quickly they dissipate over time once they are
applied to the skin?

		MR. SWEENEY:  These particular formulations themselves for these four
products you mean?  Probably for Ultrathon we probably do.  Whether we
have the exact kinetics for the other three I'm not aware that we do but
I think as far as how fast, basically we're looking at -- we use a -- 

		I guess we use the bioassays really to determine as to whether these
things fail or not.  We do know what the vapor pressure is of the
formulation itself.  We know the densities of the formulations and the
other physical chemical characteristics.  Does that answer your
question?  Okay.

		CHAIR FISHER:  Okay.  Let's move on.  This is for everybody because we
are behind schedule.  Let's try to --

		MR. CARLEY:  I'll talk just as fast as I can.  In addition to the
studies themselves I reviewed our EPA protocol reviews, the HSRB
reports of the two meetings at which these protocols were discussed, and
Dr. Carroll's responses to our specific questions on each study that I
referred to before.

		The same six deviations were reported for both studies and there was a
seventh one for WPC-001 involving the miscalculated dose.  Two of them I
thought had potential ethical significance.  One was the use of
experienced subjects to assist in application of repellents to other
subjects.

		The second was the failure of subjects to maintain the prescribed
distance, dismissed, I think somewhat misleadingly, with a statement
that all subjects were wearing the same repellent.  This occurred on
July 12th.  

		That was the date that all five repellents were being tested.  In the
supplemental material Dr. Carroll said that the people treated with oil
of lemon eucalyptus were physically apart from the ones treated with
DEET.  

		Unless I missed it, there hasn't been any clarification about whether
the people treated with different DEET products were isolated or whether
they were intermingled.  But the ethical points about that is that this
may have been mischaracterized as distinct from what the potential
scientific implications of the distance were.

		There were a bunch of unreported deviations.  These are quite
specific.  Subject limb measurement in WPC-001, as I showed you on the
calendar slide, was begun almost a week before the reported date of study
initiation.  It was done well before -- it was initiated well before the
subjects signed WPC-001 consent forms.  I'll address all of these
points further in later slides.

		Subject No. 29 participated in limb measurement for WPC-001 on July
7th, four days before signing the consent.  Subject 60 participated in
dosimetry testing for WPC-001 on the 10th of July, two days before
reportedly signing a consent form on the 12th.

		The untreated control signed the final version of the appropriate
WPC-001 consent form after all field testing had been completed.  Then
this point that I made earlier: 46 percent of the treated subject days
involved subjects who had used the repellent the day before, violating
the exclusion criterion in both protocols.

		In terms of responsiveness to EPA with respect to SCI-001, this
section of the presentation addresses only comments that were made in my
ethics review or under the ethics heading in the HSRB review with
respect to either of these protocols.

		I commented that the protocol inadequately described recruiting of
experienced subjects.  There were changes in this respect made in the
version of 29 December which preceded HSRB review but there were no
further changes made after the meeting.

		I said that the consent forms should be corrected to delete references
to alcohol in the test repellents since there isn't any and that was
acceptably addressed in the 29 December version.  I said the consent
from should be revised or split to better inform untreated controls of
what they would face as it differed from the treated ones.  That was
acceptably revised on 29 December.

		The HSRB made some similar and some different comments on this
protocol.  It suggested collecting mosquitoes for analysis to confirm
the absence of known pathogens.  The protocol was amended on the 2nd of
July to incorporate the same viral assay of the collected mosquitoes
that had been written into the amendments to the other protocol. 

		That protocol occurred -- that amendment occurred on the same day as
the amendment to change the material to LipoDEET 3434.  Neither of them
was reviewed by the IIRB.  HSRB said the protocol should clarify how
untreated controls will be recruited.  

		As I mentioned, this wasn't addressed further after the HSRB meeting
but I think this comment from the board actually may have been in the
earlier version of the protocol, the December -- well, I don't want to
go there.  It's too complicated.  There were a bunch of different
versions of this protocol.

		The board commented that they would like to see some -- that it was
hard to judge the abilities of the IIRB.  They would like to see some
evidence of member training accreditation.  No additional information
about the board member qualifications or accreditation has been
provided.

		There were three points made about the informed consent in the HSRB
report in April.  It said it mischaracterized the test materials as
containing alcohol.  That was already fixed.  It was structured so that
it doesn't apply to untreated controls.  There were changes made in the
December 29 version.  

		And the comment was made that it should say up to 48 subjects would
participate whereas it already said up to about 40.  The actual number
of subjects was somewhere between 37 and 41 depending on who those
unidentified controls were.  

		The change wasn't made but understand that there were not two controls
per arm of this study.  There were two controls per day in the field so
the arithmetic doesn't add up to 48.

		On the other study we asked that the data collection forms be changed
to refer to subjects only by coded number.  That was done.  We said they
needed to better characterize how experienced subjects would be
recruited.  This was done in the June amendments.

		We called for description of recruiting in Florida in comparable
detail to the description concerning California.  The amendments deleted
all references to testing in Florida making that point moot.

		CHAIR FISHER:  John, I'm really concerned about time because we are
supposed to go to 6:00 something today and I do think that most of what
you are saying, first of all, your first presentation covered a lot and
overlapped with this.  Also a lot of this was in the beautiful report
that you wrote for us.  I know it's difficult and I'm going to ask
everybody the same thing.

		MR. CARLEY:  I will do my best.

		CHAIR FISHER:  If you could just highlight, that would be great.

		MR. CARLEY:  The only major point on this slide is that the informed
consent was split in the June 14th amendment so it went two ways.  There
were two completely different forms.  Thereby, I should add, necessitating
preidentification of controls, a topic that came up from the consultants
earlier today.

		The investigator was generally responsive to our suggestions.  There
were a couple of exceptions.  The applicable standards are the usual
ones.  I'll skip over this and explain further to Drs. Parkin and
Johnson if they have questions about it later.

		My findings.  The completeness standard for documenting ethical
conduct was met for both studies.  All subjects in both studies were at
least 18.  None were pregnant or nursing.  There is not an issue with
26.1703.

		FIFRA Section 12(a)(2)(P) has been in the law for 35 years and it says
that you can't use people in a test with pesticides without their being
fully informed and without their participation being fully voluntary.

		There is some question about whether they were fully informed because
of the failure to acknowledge the change in test materials and change in
the description of materials in the consent document for SCI-001.

		In the case of the other study, there is this issue about the date
sequence for all of the consents, which technically may fall into a
problem with Section 12(a)(2)(P).

		Then with the tricky question about whether it was in substantive
compliance with the rules in Subparts A through L of which K and L are
the only ones that apply to third-party studies.  We thought about this
problem before.  

		The rule requires that the IIRB have procedures that will ensure that
changes in approved research aren't initiated without IIRB review and
approval.  The rule does not forbid making the changes or implementing
them without IIRB approval.  It says that the IIRB has to have
procedures that will ensure this doesn't happen.  

		The procedures from the IIRB, which we do have under a claim of
confidentiality, were in this case not effective in ensuring that
amendments to approved research, changes in approved research, were not
initiated without IIRB review and approval.

		DR. PHILPOTT:  I'm sorry.  I hate to interrupt but can you elaborate
by what you mean because we haven't seen these.  When you say not
sufficient, I mean, do they say you have to submit?

		MR. CARLEY:  No.  The relevant part of the rule is right there on the
wall.

		DR. PHILPOTT:  I'm talking about the IIRB's rules.  You have seen
their procedures.  We have not so you are stating that their procedures
were not effective in ensuring that amendment.  Do you mean that the
IIRB screwed up here or did Dr. Carroll violate their procedures?

		CHAIR FISHER:  Are the procedures you're talking about written in
their document or not so they don't even have those procedures or did
Dr. Carroll not follow the procedures that are written in the document?

		MR. CARLEY:  There's another question to which I'm not certain of the
answer which is whether the board has -- to what degree has the IIRB
shared their procedures with Dr. Carroll, what did they tell him in the
approval.  We have seen the approval letters.  This is kind of an
unclear area.  I am simply pointing out --

		CHAIR FISHER:  It didn't work.

		MR. CARLEY:  -- that this protocol -- that whatever this was it didn't
work because --

		CHAIR FISHER:  It didn't work and let's put it there.

		MR. CARLEY:  -- we know that it was amended without oversight.

		CHAIR FISHER:  Yes.

		DR. MENIKOFF:  Do the IIRB procedures allow amendments to the protocol
to be effective without the IIRB being told about those amendments?

		MR. CARLEY:  Not except for this exception that is in the rule where
it's necessary to eliminate immediate hazards.

		CHAIR FISHER:  Okay.

		MR. CARLEY:  That is the only exception in the IIRB rules.

		DR. MENIKOFF:  That clarifies it.

		CHAIR FISHER:  So the IIRB has a policy and the policy wasn't
followed.

		MR. CARLEY:  The IIRB policy, such as it is, is consistent with the
requirements of the rules which are spelled out here.

		CHAIR FISHER:  The IIRB policy in order to be consistent shouldn't
just be written but also the IIRB should carry out its policy and
whether or not it informed Dr. Carroll, etc., etc., is a question.  I'm
sure the rule was not intended to say just have this written but ignore
it.

		MR. CARLEY:  I'm sure that was not the intent of the rule but the
trick of the rule is that it doesn't direct --

		CHAIR FISHER:  Let's move on because there are issues.  Informed
consent wasn't taken so I think we can move on because there are other
issues here.

		MR. CARLEY:  Okay.  Next point has to do with the consent.  I won't
repeat all of this stuff but I do want to explain a little bit.  The
limb measurements that began for WPC a week before the stated initiation
date, in general that is not a very big problem because the same
subjects and the same limb measurements were used for both studies and
with one exception everybody whose limbs were measured before the
initiation of the second study was also part of the other study and had
presumably done a consent before they started in there.  

		There is only one anomaly and this was the subject 29.  Then the case
with subject 60 as mentioned by Dr. Carroll in his supplemental
materials.  If that is not clear, he can explain it better than I.  And
the untreated control signed their consent form after completion of
field testing but the last generation of changes had to do solely with
pregnancy risks and all of these subjects were males so there was no
substantive effect of that change or that delay.

		CHAIR FISHER:  All right.  I think we really have to move on.  Is
there anything you haven't said before?

		MR. CARLEY:  Yes.

		CHAIR FISHER:  Okay.

		MR. CARLEY:  There is at least technical noncompliance summarizing all
of this.  The question is how far does it rise on the scheme of the
criteria.  I want to say a little bit more about the change in the role
of the experienced subjects to serve as assistants to the investigators.

		That was not reflected in the protocol or the consent forms.  It was
not considered by the IIRB and that strikes me as a fairly significant
topic that deserves some careful attention.  

		The explanation by Dr. Carroll of his understanding of his discretion
to determine what amendments have to be reviewed seems to me to be in
direct conflict with the plain language of the rule.  If the IIRB
decides that they are not interested, they can say so but it should be
their call and not his.

		The summary of concerns, there are two of them here.  Just take a
quick look and I won't read them aloud and waste your time.  They are
listed in the study.  Next slide.

		CHAIR FISHER:  Excellent and we appreciate that.

		MR. CARLEY:  And so there were plainly shortcomings in the conduct of
the studies.  In my judgment they didn't put the subjects at greater
risk for different reasons depending on a specific exception.  

		Taken all together with the other deficiencies noted they may have
compromised the studies a little bit.  They are at least technically out
of compliance with Subpart K.  I defer to the board about whether they
rise to the level of substantial noncompliance.

		CHAIR FISHER:  Okay.  Only questions if it's a technical question. 
We'll save our discussion for after.  

		Yes, Jerry and then Sean.

		DR. MENIKOFF:  Two points.  Do we know did the IIRB get informed of
the various changes like changing the test compound?  If so, what action
has it taken?

		MR. CARLEY:  I don't know the answer to that.  Dr. Carroll probably
does.

		DR. MENIKOFF:  Secondly, I take it it was probably intentional on your
part.  You don't characterize the change in the test compound as a
protocol deviation.  Was that because he amended the protocol?

		MR. CARLEY:  Yes.  I'm not sure.  That is an important distinction.  I
think the important point is that the protocol was changed without IIRB
oversight and without reflection of that change in the consent
documents.

		DR. MENIKOFF:  It might reflect in terms of how you then characterize
this.  I would assume normally protocol deviation would be related to
the approved protocol.  There was a deviation from the IIRB approved
protocol.

		MR. CARLEY:  I wasn't making that distinction.  I was just saying yes,
he amended the protocol, perhaps improperly.  If he was consistent with
the amended protocol, it wasn't a deviation but it was a different kind
--

		CHAIR FISHER:  Sean and then Sue.  Sue.

		DR. FISH:  One quick question, John.  Did you find any evidence that
any of these protocol deviations had been reported to the IIRB?

		MR. CARLEY:  I did not.

		DR. KIM:  Short question.  Has the informed consent form indicated
that subjects may be tested with more than one product?

		MR. CARLEY:  The consent form for the SCI-001 study describes the
study as we know it.  This is going to be a test of several different
repellents and you are going to get one or another of them.  

		Your odds of getting any one of them on any given test day are such
and such and so forth.  The consent document for the WPC study makes no
reference to any other materials.  I should add that neither consent
document makes any mention of LipoDEET 3434.

		CHAIR FISHER:  Kannan.

		DR. KRISHNAN:  One of you presented some of the MOEs based on
comparison of the LD50s.  Since there were some repeated applications
on the same individual, was any attempt made at all to compare NOAELs
from any repeated administration studies?

		MR. CARLEY:  We did one calculation.  I assume you have seen Dr.
Carroll's response to that of the multiple washings and all.  The
chances that there was some interference from day to day of significance
there are pretty slim.

		CHAIR FISHER:  All right.  In an effort to catch up, and we're not too
bad but we are going to break for lunch.  I remind the board we have an
administrative meeting so everybody get right into the room and then
we'll come back at 1:45.  It's a little less than an hour but we'll come
back at 1:45.

		(Whereupon, at 12:53 p.m. off the record for lunch to reconvene at
1:52 p.m.)

	A-F-T-E-R-N-O-O-N  S-E-S-S-I-O-N

	1:52 p.m.

		CHAIR FISHER:  Okay.  We're going to get started.  

		Dr. Carroll, I believe we're ready for public speakers.

		Okay, Dr. Carroll.  For the record, please introduce yourself, your
affiliation.  You have five minutes.  We did all get your response
letter, and we've read it.  

		DR. CARROLL:  Thank you.

		This is Scott Carroll from Carroll-Loye Biological Research.  And I'm
here today to make comments on the Ethics and Science Reviews of our
insect repellent efficacy studies of SCI-001 and WPC-001.

		By way of context, I'm sure you remember this at some level, but I'd
like to say that these protocols and the conduct of these studies in
terms of human protection greatly exceed, as far as I know, anything
that's previously been conducted in the field of insect repellent
testing.

		The science of these studies is basically very sound.  And I'll give
you a quick overview of that in a minute.  I'm just going to show a few
slides to highlight, I think, some of the main points that Mr. Carley
brought up.  It's a bit like playing three-dimensional chess with a
computer, but I'll just skip over these things as best I can.

		To give you context for why these studies were conducted together, and
I think this is very relevant to the Board: each spring what I do now
is to map the spread of mosquito numbers from south to north in
California, and of West Nile Virus and other potential pathogens from
south to north.  And in contrast to all the previous years since we've
had West Nile Virus in California, as the mosquitos moved north to where
we conduct our studies and began to appear in sufficient numbers to
conduct one of these high intensity studies that we do, West Nile Virus
was very close on its heels.

		We've had previously a window of several weeks to conduct such
studies.  But in analyzing the data through June, it became apparent
that we might have as little as a week to conduct all of our 2007
testing.  At that point I made the judgments that we would need to
complete both studies as quickly as possible. Hence, we ended up with a
lot of temporal overlap in their conduct.

		One of the first things that came up was the question of spacing. 
Mr. Carley also brought it up because he was concerned about misleading
statements in our reports saying that when the spacing protocol was
violated, all subjects were testing the same active ingredient.  

		And this just gives an idea of what it looks like.  Here on the left
we have pairs that are too proximate.  This happened on the 12th.  We
had quality assurance personnel present and they engaged me and a couple
of technicians simultaneously, and that meant we had less monitoring
going on briefly.

		And these subjects clumping, the ones farther away but still too
close, are testing the OLE.  All the people in the foreground are
testing DEET.  There are also people in back of the photographer.  So
this was ephemeral.  The kind of distribution we prefer -- a picture of
how these tests are conducted and what a proper distribution looks like,
at least in my eyes -- is on the right.

		And in this case what you see there is, those are DEET subjects, all
these subjects are in this case on the right in back of the
photographer. So we kept them spatially segregated.  And the untreated
controls worked in the appropriate proximity then to both treatment
groups.

		The questions about how we -- well, let me go to the amendment.

		I'm sorry. I advanced my slide with that button.

		The most serious error, other than that, that I made -- and this is
very interesting to me in terms of how I tried to learn about proper
consenting, et cetera, how to run one of these studies -- had to do with
not seeking approval for the two amendments I made, one to include the
virus screening and the other to substitute the very similar DEET
formula, and that's in SCI-001.  I did not understand that all amendments
require IRB approval.  Obviously, that's very basic to this process and
once that was made clear to me, many things fell into place.

		I have been conducting studies with IRB oversight since 1996.  No one
ever mentioned that specifically.  It might have been assumed that I
would know that.  I have taken the online CITI courses on research
conduct.  And in conducting these studies it's not mentioned in any of
the materials I've studied for that.

		And in approaching IRBs -- not just the current one, the IIRB -- and
state and federal agencies, I've heard time and time again we only need
to know about things that influence subject safety.  I hear that all the
time.  And the reason I hear that a lot is because I frequently volunteer
information or make inquiries about, do you know this or that.  I'm not
trying to ask them about trivial things.  And so I've always felt I had
surprising latitude to make these judgments.  

		And it's not uncommon for there to be confusion among the less
experienced between deviations and amendments.  And I think that was
probably not clear in discussions I've had with people about this in the
past.  And so those were frank errors.  They've been reported to the
IRB, along with what I regard as minor consenting errors on a couple of
subjects that I don't have time to give you the details on right now. 
And those are in process at the IRB currently.

		So again, the last thing is simply to talk about the distribution of
treatments across days and across sites.  Obviously we have multiple
sites to improve generalizability.   Having multiple days within the
sites probably does not decrease generalizability.

		The results of these studies are relatively unambiguous.  Even though
there are site effects, the major effects are the differences between
treated and untreated subjects.

		Here the treatment effect is the third one down.  And it's lower here
because we're testing DEET products again.  DEET products principally.

		If we're interested in the effect of previous treatment, previous
treatment falls out as highly nonsignificant.  This sample is
structured in a way that makes it very easy to determine that.  For
those of you who didn't notice in my written comments, the exclusion of
previously treated subjects was originally introduced, I mean long ago
when I was trying to keep people from uncontrolled backgrounds with
respect to participating in the studies.  At this point we know that
anyone previously treated has gone to failure.  They've washed two to
three times between exposures, between different treatments.  And these
data corroborate the fact that there wasn't an impact.  So my intention
was not to exclude this kind of study conduct based upon that exclusion
criterion.

		And next slide.

		Likewise we see no effect of previous treatment in the WPC study.

		That's all.  Thank you.

		CHAIR FISHER:  Thank you.

		Now are there any technical questions for Dr. Carroll?  Just technical
questions?  Jan?

		DR. CHAMBERS:  Dr. Carroll, a couple of questions.  One is what do you
know about the ability of soap and alcohol to wash either the DEET
preparations or the OLE preparation off the skin?

		DR. CARROLL:  We don't have quantitative data about that.  You can
certainly tell that the smell disappears and the skin sensation
disappears, especially with the -- well soap and water does a very good
job.  Alcohol, I think, enhances that.

		I do know in terms of the response of caged mosquitoes to recently
washed arms, I think there's a little reduction in avidity immediately
after washing and the alcohol stripping.  Maybe just the attraction cues
that are normally present on the skin are reduced.  Certainly the
effectiveness of the product disappears with that cleaning.

		DR. CHAMBERS:  And the other question I have is what do you know about
the distance that the smell, or whatever the insect would be repelled by
that's volatilized, how far away from the person does that aura go?

		DR. CARROLL:  It varies by treatment.  It's been quantified in a few
cases, but I don't know of any studies that quantified it in any way
that's very generalizable.  You know, people know how far they can smell
repellents on one another; mosquitoes, you know, they experience a
similar universe.

		DR. CHAMBERS:  And then one further question.  In light of one of the
comments that one of our consultants made this morning, I guess there
are repellents that are contact repellents and there are others that are
volatile repellents. These two are?

		DR. CARROLL:  These would be volatile repellents.  Yes.

		CHAIR FISHER:  Kannan?

		DR. KRISHNAN:  When the participants were treated more than once, I
mean it could literally on subsequent days and so forth, were those done
on the same limb?

		DR. CARROLL:  Yes, those were on the same limb.

		DR. PHILPOTT:  I have a series of rather quick questions.

		The first is I do not or at least cannot find in the thousand pages of
documents that we have an MSDS for LipoDEET 3434.  Was one available and
was it available to study participants?

		DR. CARROLL:  Yes, it was.

		DR. PHILPOTT:  Okay.  My second question is in your letter you
mentioned that different personnel managed the subjects for each study
during simultaneous efficacy testing.  Now it's a little unclear to me
exactly how many study personnel were on hand, and particularly for a
day like July 15th in Butte County where you have 26 study participants
including 14 that were involved in the SCI-001, 10 in the WPC-001 plus
two controls.  How were these personnel divided up and how did they
manage and oversee the participants?

		DR. CARROLL:  There were two people per untreated control.  And then
there was a group leader for each of the two studies.  And then three
additional personnel that provided support, assisted in keeping the
groups segregated, and provided food, water and shelter.

		DR. PHILPOTT:  So about nine?

		DR. CARROLL:  Seven total.

		DR. PHILPOTT:  Seven total?

		DR. CARROLL:  Eight.  Eight.

		DR. PHILPOTT:  Let me just go through my list here.  Informed consent
documents have specific study titles on them.  The controls, the
untreated controls crossed studies on three days.  Did they sign two
consent forms?

		DR. CARROLL:  Yes.

		DR. FISH:  Dr. Carroll, were any of the protocol deviations that were
listed in Appendix five, I think, were any of those submitted to IIRB?

		DR. CARROLL:  I can't recall at the moment.  I think they may not have
been.

		CHAIR FISHER:  Any other?  Yes?

		DR. JOHNSON:  Could you put up the previous screen with all the tables
that you had on the previous screen?  Thank you.

		The treatment included the controlled or uncontrolled --

		DR. CARROLL:  No, it does not include the untreated control.

		DR. JOHNSON:  The active ingredients?

		DR. CARROLL:  Yes.

		DR. JOHNSON:  And period within site, was that the comparing days?

		DR. CARROLL:  Yes, it is.  Yes.

		DR. JOHNSON:  And how come we have six degrees of freedom for that?

		DR. CARROLL:  Because of the number of sites and days.

		DR. JOHNSON:  There's three sites.

		DR. CARROLL:  Two sites.

		DR. JOHNSON:  Two sites, three days so there should be two degrees of
freedom within one site and two degrees of freedom within the other
site, making four degrees of freedom.

		DR. CARROLL:  I'll have to review that.
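[Editor's note: As an aside for readers following this exchange, the arithmetic behind Dr. Johnson's point can be sketched in a few lines. This is a hypothetical illustration; the actual site and day counts are whatever the study records show.]

```python
def nested_df(periods_per_site):
    """Degrees of freedom for a 'period within site' term:
    the sum over sites of (number of periods at that site minus one)."""
    return sum(n - 1 for n in periods_per_site)

# Two sites with three test days each, as discussed in the exchange:
print(nested_df([3, 3]))   # 2 + 2 = 4 degrees of freedom, not 6

# Six degrees of freedom would instead require, e.g., four days per site:
print(nested_df([4, 4]))   # 3 + 3 = 6
```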

		CHAIR FISHER:  Any other questions?  Yes.  Did you want to clarify
something, John?  Okay. So John and then KyungMann.

		MR. CARLEY:  In response to the earlier question, you said that the
subjects did get an MSDS for LipoDEET 3434?

		DR. CARROLL:  Yes.

		MR. CARLEY:  The one that was included in your response to our request
for additional information which you sent to us on September 24th --

		DR. CARROLL:  Yes.

		MR. CARLEY:  -- was dated September 24th.  Was there a different one
that the subjects saw when they signed consent?

		DR. CARROLL:  I don't think there's a different one.  I can double
check that.  Dated by the --

		MR. CARLEY:  The MSDS itself, the one that you sent to us on September
24th, was dated September 24th.

		CHAIR FISHER:  KyungMann and then Susan and then Kannan.  

		DR. KIM:  I had this question for Mr. Carley earlier.  But the study
involved four treatments.  And I believe the protocol said randomization
of subjects among the four treatments.  What method of randomization was
used?

		DR. CARROLL:  What we've normally used in the past was a random number
generation system, and assignments based upon -- well, I'll tell you
what we did in this one because I have, as you might imagine, newer
technicians in my laboratory.  When we were ready to assign, the
intended randomization table was not available.  And so what I simply
did was I had a technician that morning put subject numbers in a hat,
okay.  And then I actually filled one test material at a time with the
lots needed, drawn sequentially from that hat.  As simple as that.

		It wasn't actually a hat, but you know what I mean.

		DR. KIM:  Yes.  But that doesn't seem to make any sense knowing that
on the day July 8th of 2007 there's a total of 13 subjects tested.

		DR. CARROLL:  Yes.  Yes.

		DR. KIM:  And if you want a randomization, you probably like to do it
close to three tags, say, three DEET, L302, 30L3434, 30--

		DR. CARROLL:  Yes, that would have been a better way to do it.

		DR. KIM:  Right.

		DR. CARROLL:  I didn't do it that way.

		DR. KIM:  Then how did you do it; that's what I'm asking?  I mean,
because you had two, three, five, three and that raises the entire
question about whether the randomization was appropriate.

		DR. CARROLL:  You saying the number of subjects, two, three, five and
three?

		DR. KIM:  Yes.

		DR. CARROLL:  Yes.  Well, I structured that because my goal was to
have on any -- throughout the -- over the five exposure days to have to
the extent possible  the same number of subjects within a given
treatment compared to Ultrathon across all those days.

		DR. KIM:  Right. So that --

		DR. CARROLL:  So it's very close to a symmetrical design based upon
that goal.

		DR. KIM:  Well, your explanation doesn't seem to jibe with the numbers
you had.  The natural thing would be to do equal randomization each time
you --

		DR. CARROLL:  Randomized within a given treatment.  But I already
decided how many of each treatment would be present on a given day given
the number of subjects I had available to work with.

		DR. KIM:  Well, I don't quite understand when you say "within a given
treatment."  Randomization is done --

		DR. CARROLL:  Which subjects were assigned to a treatment.

		DR. KIM:  Right.  So that's across treatment, right?

		DR. CARROLL:  I'm talking about within a treatment.  Oh, yes, well who
is going to end up in which treatment, yes.

		DR. KIM:  Right.  So if you want to balance the allocation evenly each
day, the natural thing would be to do equal randomization.

		DR. CARROLL:  To have the same number for each day?

		DR. KIM:  Right.

		DR. CARROLL:  Yes.

		DR. KIM:  And so your explanation and the numbers that we see just
don't seem to jibe.

		DR. CARROLL:  Yes. Given the number of -- given the ways in which I
had to distribute subjects among these two tests that I had not
originally planned on running simultaneously, I was not able to for each
day use the same number of subjects across treatments.

		DR. KIM:  But I believe your protocol said that you were going to test
four treatments, randomizing subjects to these four treatments?

		DR. CARROLL:  Yes.

		CHAIR FISHER:  Well, it sounds like he didn't do that and he knows
that that was in the protocol, right?

		DR. CARROLL:  Yes.

		DR. KIM:  Your explanation doesn't seem to jibe with the protocol
there.

		CHAIR FISHER:  I'm not sure. I mean, I think he's answered that he did
not do it.  And we have the protocol to know if it was in there or not.
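[Editor's note: To make the statistical point in this exchange concrete, here is a hedged sketch of the two approaches being contrasted: ad hoc assignment by drawing subject numbers from a "hat" and filling one treatment group at a time with predecided counts, versus the equal (permuted-block) randomization Dr. Kim describes. Treatment labels, subject counts, and group sizes below are invented for illustration, not taken from the study.]

```python
import random
from collections import Counter

def hat_randomization(subjects, treatments, counts, seed=None):
    """Shuffle the subject pool (the 'hat') and fill one treatment
    group at a time with a predecided number of subjects.
    Group sizes can be arbitrarily unbalanced."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    assignment = {}
    for treatment, n in zip(treatments, counts):
        for _ in range(n):
            assignment[pool.pop()] = treatment
    return assignment

def block_randomization(subjects, treatments, seed=None):
    """Permuted-block randomization: within each block of
    len(treatments) subjects, every treatment appears exactly once,
    so allocations stay balanced across treatments each day."""
    rng = random.Random(seed)
    assignment = {}
    size = len(treatments)
    for start in range(0, len(subjects), size):
        block = list(treatments)
        rng.shuffle(block)
        for subj, trt in zip(subjects[start:start + size], block):
            assignment[subj] = trt
    return assignment

subjects = [f"S{i:02d}" for i in range(1, 13)]   # 12 hypothetical subjects
trts = ["DEET-A", "DEET-B", "OLE"]               # hypothetical labels

uneven = hat_randomization(subjects, trts, counts=[2, 3, 7], seed=1)
even = block_randomization(subjects, trts, seed=1)
print(Counter(uneven.values()))   # unbalanced: 2 / 3 / 7 per treatment
print(Counter(even.values()))     # balanced: 4 / 4 / 4 per treatment
```

The hat method randomizes which subject lands in which group, but the unequal group sizes are fixed in advance by the investigator, which is the asymmetry Dr. Kim is pointing at.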

		DR. KIM:  All right.  The next question is earlier this morning Dr.
Gupta mentioned that a typical elimination of the substance takes
between 30 and 48 days.  And the problem with the multiple treatments
given to the same subject consecutively is the whole issue of a washout.
 And with a biological elimination of about 30 to 48 days sort of --

		DR. CARROLL:  Hours.

		DR. KIM:  Hours.  Thirty to 48 hours.  Okay.  All right.  I withdraw
that question then.

		DR. CARROLL:  It's still relevant, but I don't think it influenced the
results of this study.

		DR. KIM:  So there is an issue of whether there is a sufficient
washout period to ensure that previous treatment doesn't affect the next
day's outcome?

		DR. CARROLL:  This data set can be used to analyze that.

		DR. KIM:  But as was pointed out, I think there's some fundamental
issues usually in the analysis --

		CHAIR FISHER:  I think these questions are -- I'm not an expert in
this, but I don't think we should be discussing with Dr. Carroll
methodological issues in terms of whether or not we agree with the
methodology or think it's appropriate.  I think it's fine if there are
specific questions that we need to inform decisions.  So let's keep it
to whatever the specific question is, not whether he kept to this or
that.

		DR. KIM:  So I'm sort of getting at the sort of validity of Dr.
Carroll's argument that the period didn't affect the outcome.

		CHAIR FISHER:  Okay.

		DR. KIM:  Because I have a question about that analysis.

		For example, can you tell me what kind of error model was used in
these -- there's four types of errors one can generate.

		DR. CARROLL:  Well, Type III sums of squares, and subject was random.
But I can't tell you beyond that.
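[Editor's note: For readers who want to see how a data set like this can be interrogated for carryover, that is, Dr. Carroll's earlier claim that previous treatment "falls out as highly nonsignificant," one simple, assumption-light approach is a two-sample permutation test on the previous-treatment flag. This is a sketch, not the analysis actually performed; the protection times below are invented for illustration.]

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Returns the fraction of label shuffles whose mean difference is at
    least as large as the observed one (the permutation p-value)."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical complete protection times (hours): subjects treated the
# previous day versus repellent-naive subjects.
prev_treated = [9.1, 8.8, 9.5, 9.0, 9.3]
naive = [9.2, 8.9, 9.4, 9.1, 9.0]
p = permutation_test(prev_treated, naive)
# A large p-value here would be consistent with no carryover effect,
# though with samples this small the test has little power.
```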

		CHAIR FISHER:  Okay.  Suzanne, Kannan and Lois.

		DR. FITZPATRICK:  I had a quick question.  I was going to ask you if a
person was in the experimental group and then was in the control group
the next day.

		DR. CARROLL:  No.

		DR. FITZPATRICK:  But I see that it didn't happen.  So you must have
picked the controls out.  Randomly, you might expect that some of those
would have become a control the next day when they were an experimental
person before.  So did you specifically pick the controls out to make
sure they hadn't been in a treatment group the day before?  Do you see
what I mean?

		DR. CARROLL:  Yes.  None of the control subjects were ever treated.

		DR. FITZPATRICK:  Right.

		DR. CARROLL:  But that was really for other reasons.

		DR. FITZPATRICK:  Okay.  Subject 13?

		DR. CARROLL:  Not that I recall.  But I don't anticipate that would
have an effect anyway.

		CHAIR FISHER:  Remember if it's in the data -- if it's in the
materials, you know, if it's problematic that's something for us to
discuss.  But not the --

		DR. CARROLL:  Things without correction, yes.  But the control
subjects were chosen in advance as a segregated body, which is relevant
to the conversation earlier this morning.

		CHAIR FISHER:  Kannan?

		DR. KRISHNAN:  I see that the same number of subjects were involved in
both protocols.  But is that also true between locations?

		DR. CARROLL:  Twenty subjects per -- total of ten per treatment per
site.

		DR. KRISHNAN:  So in those two locations was the same set of subjects?

		DR. CARROLL:  No, no.

		DR. KRISHNAN:  Okay.  

		DR. CARROLL:  There was some overlap, not complete overlap in subject
constituency between the sites.

		DR. KRISHNAN:  And one of the exclusion criteria was that the subject
did not use the repellent within 24 hours?

		DR. CARROLL:  Yes.

		DR. KRISHNAN:  But when that was not respected, was it communicated to
the IRB, was there any communication there?

		DR. CARROLL:  No, that wasn't.  That was simply the fact that I was
not bearing that exclusion criterion in mind; I hadn't initially
intended it to apply to this kind of case.  But that is how it reads.

		CHAIR FISHER:  Okay.  Lois and then Richard.

		DR. LEHMAN-McKEEMAN:  Very detailed question.

		DR. CARROLL:  Yes.

		DR. LEHMAN-McKEEMAN:  What was the time of day that these studies were
started?  And what I'm asking is when studies were run on successive
days and a subject was treated with one agent on one day and a different
agent on the other day, what was the actual duration of time between
those actual treatment periods?

		DR. CARROLL:  Twenty-four hours, you know give or take probably an
hour maximally.

		DR. LEHMAN-McKEEMAN:  Give or take an hour?  So they would start in
the morning?

		DR. CARROLL:  Yes, 7:00 a.m.  Yes.

		DR. LEHMAN-McKEEMAN:  Okay. And how long were they in the field?

		DR. CARROLL:  Usually until early evening.  So, you know, 7:00 p.m.

		DR. LEHMAN-McKEEMAN:  Okay. And I noticed, again to Kannan's question,
part of the informed consents does say you haven't used repellents
within a 24 hours period.

		DR. CARROLL:  Exactly.  Yes.

		DR. LEHMAN-McKEEMAN:  What was the basis on which that parameter --
what was the rationale for having that stipulated?

		DR. CARROLL:  When I first included that criterion in my protocols,
which must have been quite a while ago, it simply seemed like a good
idea to exclude people who had been using repellents where I didn't know
what they would have been using, or you know might not have been
familiar with the product or how it was applied, or exactly when they
had applied it. I wasn't thinking in terms of a situation at all where a
repellent would have been monitored to failure and then washed two or
perhaps three times subsequently.  So a failed repellent removed and
then a reapplication the next day.

		DR. LEHMAN-McKEEMAN:  Okay.

		DR. CARROLL:  Okay.

		CHAIR FISHER:  Richard?

		DR. SHARP:  I think we already had this, but we had a hard time
hearing Dr. Krishnan's question.  So the change with regard to the
protocol allowing individuals that had received some form of treatment
within a 24 hour period to be eligible was that deviation actually
reported to the IRB or not?

		DR. CARROLL:  No.  Was not.

		DR. SHARP:  Okay.  And a question about the control subjects. You
mentioned that there were two members of the research team who were
overseeing the two controls.  Were members of the research team actually
serving as control subjects?

		DR. CARROLL:  No.

		DR. SHARP:  Okay.

		DR. LEBOWITZ:  One quick question.  You talk about one minute periods
of exposure in 15 minute intervals.  Did that period of exposure occur
at the beginning in all subjects during all such 15 minute periods?

		DR. CARROLL:  I'm not sure what you mean by "at the beginning?"

		DR. LEBOWITZ:  Well, you have a one minute period of exposure during
15 minute intervals.  So in a 15 minute interval it could occur at the
beginning or anytime in the middle or at the end?

		DR. CARROLL:  Well, the interval between all exposures was at or very
close to 15 minutes.  Does that help?

		DR. LEBOWITZ:  Okay. So they were exposed to one minute --

		DR. CARROLL:  And then 15 minutes later they were exposed another
minute.

		DR. LEBOWITZ:  Okay.  Thank you.

		DR. CARROLL:  Or 14½ minutes or --

		CHAIR FISHER:  Anybody else?

		Okay.  Thank you very much, Dr. Carroll.

		DR. CARROLL:  Thank you.  

		CHAIR FISHER:  Any other public comments?

		Okay.  So now we will go to Board discussion.

		John, I just wanted to make sure; I don't remember if you presented us
with the charge questions?

		MR. CARLEY:  No, I did not.

		CHAIR FISHER:  Okay.  So I guess should we do them one-by-one because
it seems like we divided people one-by-one?

		MR. CARLEY:  There are three pieces here.  One is this quick recap of
the split protocol specific issues that had come up. And then there's
the charge question for SCI-001 and then the charge question for
WPC-001.

		I think the Board needs to think about whether you want to talk about
those studies together for a while and then look at the specific charge
one at a time or what.  I don't know what the best way for you to think
about it is.

		At the end of the day you have to think about them separately but you
may want to start by discussing them together.

		CHAIR FISHER:  I think I'll leave that.

		Jan, what is your recommendation, because you're on many of them here?
 And then anybody else who has a recommendation.  How did you organize?

		DR. CHAMBERS:  I tried to write them up separately like you had the
questions.

		CHAIR FISHER:  Okay.  

		DR. CHAMBERS:  So I think I'd rather pursue it the way I'm going to
try to get through my write up here.

		CHAIR FISHER:  Okay.  

		DR. CHAMBERS:  Are we ready for that?

		CHAIR FISHER:  I think he has to read them.

		DR. CHAMBERS:  Okay.

		MR. CARLEY:  Back up one.

		The specific issues that we raised, which are pretty much embedded in
the charge questions, but just a quick recap:  

		Is sequential testing of different repellents by the same subject on
successive days likely to have affected the results?

		Can partial results testing the same repellent in the same habitat on
different days be pooled to comprise the designed sample size?  Is this
likely to have affected the results?

		If only one limb is treated, does it matter which one or how it is
selected?

		If a minimum intersubject distance isn't maintained, could this affect
the results?

		And then the ethics related question:  If one arm of a study is deemed
unacceptable by virtue of not having been in substantial compliance with
the rule, what are the implications for other arms of the same study?

		Bill reminds me that there's also the specific question of the
substitution of the material LipoDEET 3434, which I inadvertently
omitted from that list.

		The specific charge question for SCI-001, is that first on your list,
Dr. Chambers?  Is this study sufficiently sound from a scientific
perspective to be used to assess the repellent efficacy of the
formulations tested against mosquitoes?  Please comment specifically on:
 

		First, whether participation in field study testing by several
subjects on the day after they had been treated with a different test
repellent is likely to have affected the validity of the results for
those subjects on those days;

		Second, the effects of changes to the experimental design resulting in
evaluation of repellents using fewer than ten subjects per treatment per
day followed by pooling of results by site for a statistical analysis.

		Next.  Does the available information support a determination that
this study was conducted in substantial compliance with subparts K and L
of regulations at 40 CFR Part 26?  Please comment specifically on:

		The decision to use a different test formulation in place of one of
the test materials described in the protocol reviewed by the IRB, EPA
and the HSRB;

		And how to assess the ethical conduct of an insect repellency study
involving multiple test formulations when there is an ethical deficiency
in the conduct of the study with respect to one of the test
formulations.  If the ethical deficiency warrants not relying on the
results of the testing with regard to one test formulation, under what
circumstances, if any, does the ethical deficiency affect the
acceptability of the results from testing the other formulations?

		CHAIR FISHER:  Thank you.  

		Jan?

		DR. CHAMBERS:  I think this is the most complex issue you've presented
us with yet.  Testing our stamina, I think.

		I'd like to start out with one editorial comment.  And it is for me,
but I do believe others on the Board concur, that I'm wondering
seriously if this is not premature to be bringing to us.

		I read Dr. Carroll's responses on the plane, because we got them
really, really late. I don't know whether you guys have actually had a
chance to look at those and whether that has changed your analysis any
or not. But if it does,  I think it would be better to present that to
the Board in terms of a modified analyses, if that's the case. But I
don't really know at this point. So take that for what it's worth.

		My question was A1, and that was with respect to the consecutive days
of treatment.  So if I can find my notes here.

		This deviation -- this is what I wrote up before.  So let me pretty
much go over what I wrote up before and then I'll comment based on what
I read with Dr. Carroll's responses.

		The deviation of not allowing a day of non-repellent use before
testing may or may not have had an influence upon the results.  There's
insufficient information available to the Board to make a judgment about
the impact of this experimental deviation on the results.  Certainly
this testing strategy of not having days in between would have been more
efficient for getting all the products tested in a set period of time
with a suitable group of subjects.

		I've noticed that some of the other protocols we've gotten from other
people do not have this specific one day of non-repellent use as an
exclusion criterion, which would suggest that the one day probably is
really not necessary, at least it's not necessary in their protocols. 
So that implies that the effects of the repellents dissipate over the
course of the night following testing.

		However, on the flip side of that, these products were designed to
improve durability.  One of them was designed to improve durability and
another was designed to inhibit evaporation.  So that suggests that
there might be a more prolonged effect.  So there might have been
residual effects; it's really kind of hard to say.

		Let's see what makes sense here.

		If there is information available on the length of time that there is
still some residual efficacy, such as a laboratory test, this
information was not provided.  And I gather from the answer a little
while ago that's not known.  

		I'm going to be a little random here because I've had to change my
write up.

		I guess it appears that with these particular products the complete
protection time was at nine to ten hours, which suggests that if they're
starting them all at 7:00 in the morning, nine or ten hours later the
efficacy is gone.  Waiting the rest of the 24 hours to dissipate the
rest of it; they've taken a bath, they've washed twice with Neutrogena
soap, they've washed once with alcohol suggests to me that probably the
residual effects are gone.  And so my gut feeling, not based on any
science or anything, but my gut feeling is that probably the results on
consecutive days are valid because I think the residual effects of the
previous day's repellent would have been gone.  But, again, there's not
a lot of information to conclude that.

		Let's see.  A couple of the other things that you had as your
preliminary questions above that is:  One limb or the other.  I cannot
see as a biologist why that would make any difference. So I can't see
that testing on one limb is invalid in any way.

		The minimum distance between the pairs of people?  Again, if the aura
of volatilization doesn't extend very far, then it shouldn't make any
difference. But, again, we don't have very much information. So my gut
feeling is that that probably didn't make a whole lot of difference.
But, again, there's no information.

		I am sympathetic -- well, let's see, let me just say this first.  One
of our consultants, I think if I got it right a little while ago this
morning, suggested that testing over several days was probably more
desirable than testing all on a single day.  And so this protocol that
was supposed to be done all on one day that ended up being over several
days may have actually provided more information.

		Again, I got the impression from our consultants that these data,
regardless of how you run them in the field, are not going to be
terribly accurate anyway.  So you know, I'm really not sure that those
little deviations make a whole lot of difference.

		Also, I have to question -- I am sympathetic to the reasons that Dr.
Carroll gave for compressing all of this, in that the West Nile Virus
was moving up the state and they really didn't have much choice in
trying to get it done quickly.  And you don't want to wait and put the
subjects in
danger of infection. So how much flexibility does anyone at EPA allow in
the protocols like that to allow for changing environmental conditions? 
Again, this is not a white rat in a cage study where you can control
those sorts of things.  You have to depend upon the environment and
adapt to it.  So is it reasonable to go ahead and allow deviations like
that in light of the changing environmental conditions?  That's a
question I can't answer.  You all have to answer that.

		So I think I was a little more random than I usually am here.  But it
seemed like it was just too much information here at the end.

		I guess bottom line in my opinion is that I think the consecutive
days, in my mind, are probably not impacting the results.  And the one
limb, in my mind, doesn't impact the results.  And the minimum distance
that was mostly maintained correctly, probably did not impact the
results.

		CHAIR FISHER:  Thank you, Jan.  And I think you've said that clearly,
as usual.

		Who?  Michael?

		DR. LEBOWITZ:  Now I know why we sat next to each other.

		Mine's going to be a little bit disjointed, although I didn't think
Jan's was.

		There were definitely some strengths to the study, including
predetermining dosimetry in the lab and reaching a predetermined dose
therefrom to use in the field.

		The statement that this differed from the former industry standard was
probably correct.  And the fact that the repellent was applied in the
field by techs was a good idea.

		Now there's an argument in my mind whether the differences in
temperature, relative humidity and light intensity within a day were
pertinent or not to the investigation.  The fact that they were similar
among days and between sites, which appears to be the case, is probably
a strength.  And if they had been accounted for within days, given the
wide range each day, then it would have been a good attribute for
repellency testing.  The same is true of the question about extending
the starting time so that dusk was included in one phase.  But I didn't
find any indication as to what impact that had on any of the results;
the days in which dusk was included.  So in the end, I guess, I had
questions about the variability, and the impact of the variability, in
temperature, relative humidity and light intensity.

		I was pleased that no mosquito pools or sentinel chicken flocks in
either of the counties in which testing occurred were positive for West
Nile Virus.

		I did like the fact that the collected mosquitoes were identified and
tested by RT-PCR for both West Nile Virus and equine encephalitis types
and at QCD, which I think was good.

		It was interesting that Culex tarsalis, one of the primary viral
carriers, did not land on any of the treated sites, so that was nice.

		The number of samples actually collected from treated limbs and used
in pooled specimens for testing was good.  And the fact that they
weren't positive was good.

		I thought exclusion criteria were appropriate.

		And let's see.  Okay.  The fact that the untreated control subjects
experienced a minimum of one landing per exposure period in 448 of the
450 periods was good.  I didn't actually check whether there were,
indeed, 450 periods.  But certainly if random over the number of days
and number of sites it would indicate an adequate amount of mosquito
landings.  I guess it's called mosquito pressure, or what I call
density; in each site at each time it was sufficient.  And I was pleased
that the Kaplan-Meier test was performed and gave similar results.
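[Editor's note: For reference, the Kaplan-Meier estimator mentioned here is the standard tool for complete-protection-time data, where some observations are censored (the subject was still protected when observation ended). A minimal sketch follows; the times and events are invented for illustration, not the study's data.]

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: observation times; events: 1 if failure observed, 0 if censored.
    Returns a list of (time, survival probability) at each failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]   # everyone observed at time t
        failures = sum(at_t)
        if failures:
            survival *= 1 - failures / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(at_t)   # failures and censorings leave the risk set
        i += len(at_t)
    return curve

# Hypothetical complete protection times in hours
# (1 = repellent failed, 0 = still protected at end of observation).
times = [8.0, 8.5, 9.0, 9.0, 10.0, 10.0]
events = [1, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(f"{t:5.1f} h  S(t) = {s:.3f}")
```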

		Going on to some of the other criticisms.  It still bothers me that
different test materials were applied on the same day in each test site,
ending up with an insufficient number tested for each repellent during
each day in each site to determine day-to-day effects.  And I don't
think the ANOVA was calculated correctly or was powered sufficiently to
look at all of that.

		And I think Mr. Carley has gone over that in great detail; I don't
want to go over it at any further length.

		Applying the repellent 150 to 210 minutes before first exposure and
the effects of travel I don't think are fully known.  And I'm a little
bit concerned about that as well.

		Okay.  And comparisons between all of the three test materials may not
be appropriate if environmental conditions differ for one of them.
That's actually an expansion of one of my other comments.

		I still would have to see confirmation.  We had some discussions from
consultants, but I'd have to see more confirmation for the
investigator's statement that the LIVE is identical to that of an FCB
classically used to measure repellency by insects.  Okay.  So that's
still a question in my mind.

		And then in response to the charge, I'd like to leave that for last, although I've come to some tentative conclusions based on the fact that protocol criteria were not followed very well in terms of randomization, attempts at pooling, and testing on subsequent days given the potential for effects that might not have been visible given the numbers of subjects, et cetera, tested repeatedly on different days.

		And we're going to talk about pooling, et cetera, next.  So I'm going
to  leave that for now before I decide whether one can conclude or not
that the study was sufficiently sound from a scientific perspective.

		CHAIR FISHER:  Thank you, Mike.

		Steve?

		DR. SCHOFIELD:  I think Jan almost exactly summarized my point of view on this.  I mean, I sense a certain tentativeness in her conclusion.  So I guess for me the key question was the potential for residual repellent on the limb 24 hours later to influence the response of mosquitoes.  And like Jan, I think it's probable that it didn't, because these things largely lose their repellent efficacy, or have substantially lost it, within eight hours.  There was an overnight rest period and there was frequent washing.  So I think that it's probable that there's no accumulation.

		And I think I captured from Dr. Carroll's presentation that the ANOVA, although it was probably not adequately powered, failed to pick up any signs of cross interactions.  In other words, it found some differences among the treatments which would have been blurred if there had been a great deal of carryover back and forth.  There would have been much more powerful internal evidence of independence if, instead of using preselected controls, there were untreated controls on one day who became treated subjects on other days, so we were actually able to see whether someone who was a control on day two but had been treated on day one responded differently from a control who had never been treated.  But at the end of the day I still think it's likely that the influence would have been below the limit of detection of the relatively inaccurate methods; even if you had perfect precision in this matter, you would still have a great deal of imprecision in your final conclusion.  So, actually, I'm just lending my support to Jan Chambers' view on the scientific aspects here.

		CHAIR FISHER:  Dallas?

		DR. JOHNSON:  Thank you.  

		I don't think I have too much to add.  Many of you may be aware that I'm a co-author of a three-volume set of statistics books called Analysis of Messy Data.  And this experiment would qualify.  But there are some experiments that are too messy to know how to analyze, and this experiment may actually fall into that category as well.

		But I think it does give you some general ideas that are probably --
the good thing about statistics is if it looks pretty good and it looks
obvious, it's probably okay even though you may not be analyzing it
exactly the right way.

		I did do a back-of-the-airplane calculation on the way out to try to get at the question of whether having a treatment on a previous day would have any effect on the results on the following day.  And so what I did was I tried to go through subject by subject and see whether the time to first bite, or whatever criterion they were using, changed very much depending on whether they had gotten a treatment on the day before.

		So, for example, subject A in Butte was a 7.5 on day one and a 9.75 on
day two. And so there was a change there of 2.25 hours in terms of that
particular thing.

		And so I did that for every subject, assuming that when there were four days in between the testing there was sufficient time not to worry about whether a treatment had been received on an earlier day.

		And I sort of said in my mind that if the change was no more than half an hour, I would call it a tie.  If the change was an increase of more than half an hour, I'd say I got an increase.  And if it went the other way by more than half an hour, I would say I got a decrease.

		And so in doing that, there were 19 data points in which the subject got a treatment on the previous day.  Twelve of those had an increased time the following day, seven had a decreased time, and four sort of tied.  And so there was a little bit of evidence there, 12 versus 7.

		I tried to get on the web here a while ago to see if I could get the
sign test and get the P value for that, but I couldn't.  And so I kind
of did another back of the wall normal approximation, knowing I don't
satisfy all the requirements. And it doesn't appear that it's large
enough to be statistically significant.  But there is a bit of evidence
that maybe there is an effect.

		I think any other comments that I have will wait until the next charge
question.

		CHAIR FISHER:  Thank you, Dallas.

		I think it sounds like we should go on to the next charge question about the pooling, right?  Because they're all interrelated.  Okay.

		So KyungMann, I believe.  So the charge question:  the effects of changes to the experimental design resulting in evaluation of repellents using fewer than 10 subjects per treatment per day, followed by pooling.

		DR. KIM:  When I first saw that question, I mean, the thought that came to my mind immediately is that generally, if you have more observations available for your analysis, that's better.  But that requires certain assumptions to be satisfied.  And so I started to go over the data points, how they are distributed in terms of treatments, sites and dates, much in the way Mr. Carley presented.

		I came to the realization that, excluding the negative control observations, we had only 33 unique subjects representing 80 data points.  And then I started to look more deeply into the distribution of the subjects across dates, sites and treatments.  Then I started to realize that perhaps the subjects may not have been properly allocated using randomization mechanisms -- again, putting some limitations on what kind of analysis one may be able to perform.

		So in order to combine results from different sites, for example, you would require statistical independence of the observations.  And if you go down each of the tables of the list of data points, you see a fair amount of overlap between sites.  And so, I mean, it's something that you can do.  But in the way that it is presented, it completely ignores the dependency of overlapping subjects.  So the error estimate would be wrong, for example.
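
		The point about overlapping subjects can be illustrated with a toy simulation.  The counts, 33 subjects contributing 80 data points, come from the discussion above; the variance components are invented for illustration.  When the same subjects supply several of the pooled observations, a naive standard error that assumes independence understates the true variability of the pooled mean:

```python
import random

def pooled_mean_sd(n_subjects=33, n_obs=80, subj_sd=1.0, noise_sd=0.5,
                   reps=2000, seed=0):
    """Empirical SD of a pooled mean when observation i carries the
    random effect of subject i % n_subjects, so some subjects appear
    two or three times among the n_obs data points."""
    rng = random.Random(seed)
    means = []
    for _ in range(reps):
        effects = [rng.gauss(0, subj_sd) for _ in range(n_subjects)]
        obs = [effects[i % n_subjects] + rng.gauss(0, noise_sd)
               for i in range(n_obs)]
        means.append(sum(obs) / n_obs)
    m = sum(means) / reps
    return (sum((x - m) ** 2 for x in means) / (reps - 1)) ** 0.5

sd = pooled_mean_sd()
# Naive SE under an (incorrect) independence assumption
naive_se = (1.0 ** 2 + 0.5 ** 2) ** 0.5 / 80 ** 0.5
```

		In this toy setup the empirical SD comes out roughly half again as large as the naive figure, which is the sense in which the error estimate would be wrong.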

		And in terms of there being fewer than the originally specified number of subjects per day and having the experiment spread over five days -- again, when I first saw that I was kind of delighted.  Because by having the experiment conducted over multiple days you would be able to account for the potential confounding by the differences in the wind and the temperature on the different dates.  But then I realized that with so much overlap of subjects from day to day, a standard analysis combining the results across dates will not work.  So that sort of created a problem for me.

		Another point that I would like to make is that the study, at least according to the protocol, was going to randomize three test materials along with the positive control treatment, Ultrathon.  So when you randomize subjects into four treatments, the appropriate analysis would be to look at the four treatments together.  And after that you may want to go into what we call pairwise analysis -- I mean, looking at the positive control against each of the three test materials.  But that was not done in an appropriate way.  And I found that somewhat troubling.

		And, I mean, we had a lengthy discussion about the first confirmed LIVE versus just the first LIVE.  I don't want to get into that.  We have said enough on that issue.

		The other point that I would like to make is that after I saw the data set -- in this case there are 80 data points from the treated subjects.  And for those 80 data points there was not a single censoring noted.  So every data point had an event.

		So I went back to the earlier data from the EMD studies.  I have the numbers to quote.  In EMD-004.1 there was 40 percent censoring.  In EMD-004.2 there was 10 percent censoring in site one, 30 percent censoring in site two.  And then in EMD-004.3, which we reviewed in January, there was 100 percent censoring in one site and 90 percent censoring in the other site.

		Now we come to SCI-001.  From two sites and four treatments there is zero censoring.  And that was so troubling to me.  Because I understand that there is a natural variability of the mosquito pressures and differences in the temperature and the site selections.  But precisely for that reason the investigator confirmed the mosquito pressure at the beginning of each day, and I'm wondering, how could we have such a stark variability from experiment to experiment?

		And let me stop with that.

		CHAIR FISHER:  Jan?

		DR. CHAMBERS:  If I'm recalling what the censored data was, and correct me if I'm wrong here, but isn't that people that went to the time when darkness hit and they had to quit?  And they got no complete protection time because it lasted longer than the experiment could run?  So I don't--

		MR. CARLEY:  Yes, that's what happened.  And the difference in this case is a function of the pretreatment and then just planning for a longer test day.  Because they started off thinking that these were going to be potentially 12-hour repellents.  So they pretreated before they traveled, then they had the delayed start, and then they did it at high summer.  They designed it to keep going to breakdown, and they got there for everybody.  EMD was a shorter period; I think the field time was eight hours.

		DR. KIM:  The EMD study had a field time of more than ten hours each.

		MR. CARLEY:  Okay.  But they didn't do the pretreatment and they
hadn't planned for as big a window.  And if I'm not mistaken, those were
done in the mid-fall when day length was much shorter. And so it was
harder to keep running. It got dark sooner.

		DR. BRIMIJOIN:  That's my recollection, too.  It was a false --

		DR. LEBOWITZ:  Well, I'd like to be corrected.  But my recollection is that some of the censoring occurred because some of the participants got tired and quit.  And that's not just because it got dusk.

		MR. CARLEY:  What the censoring instances had in common was that a
particular subject did not go to the point of repellent failure.  And
the reason why that stopped doesn't change the fact that the time that
they used was the time that they went off test rather than the time of
repellent failure.

		DR. KIM:  Not all of them.  There were subjects who actually dropped
out before reaching the day end.

		MR. CARLEY:  Yes.  That's right.  There were some that they didn't
analyze because they dropped out too soon.

		CHAIR FISHER:  This study or the past study?  The past study?  Okay.  

		Where are we here?  Dallas?

		DR. JOHNSON:  Yes.  I'm still here.

		I want to talk a little bit about these Kaplan-Meier things.  I
haven't done the Kaplan-Meier analyses before, but I've heard of them. 
I have no doubt that they're appropriate things to do.
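
		For reference, the Kaplan-Meier estimate can be sketched in a few lines.  The times here are hypothetical; an event marks an observed repellent failure, and a censored subject is one who went off test before failing:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.  times: hours on test; events: True
    if repellent failure (first confirmed bite) was observed, False if
    the subject was censored.  Processing tied event times one at a
    time yields the same product as the usual (r - d) / r factor."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, failed in sorted(zip(times, events)):
        if failed:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical data: one subject censored at 8.0 hours
curve = kaplan_meier([6.25, 7.5, 8.0, 9.75], [True, True, False, True])
```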

		But the analyses that are done here, I assume, are assuming that there were different subjects in each group.  And the analysis of variance that we saw a little while ago sort of took into -- I think, although I have some doubts about the analysis, it apparently takes into account the fact that some subjects were used more than once.  And so it didn't treat them all as being independent, since subject was a random effect in the model.

		And the conclusion in these three studies here, if I'm reading it right, seemed to be that there's no difference in these treatments.  And yet the ANOVA said there was a difference in the treatments.  So I'm a little bit confused about that.

		The other thing, the other comment that I wanted to make is in
statistics it's always a lot easier to prove things are different than
it is to prove things are not different.  And so it takes a lot bigger
sample sizes to show things are equivalent than it does to show things
might be different.  Because once you can show they're different, it
doesn't matter what sample size you have. But in trying to say that
they're the same, then that often takes a much bigger sample size.

		And I don't know whether it's possible or not, but I guess I would
like to see some -- if we're going to claim these are the same, I'd like
to see some confidence interval for the difference between the two means
and to see how wide and how big that confidence interval is.  If that
confidence interval ends up being very wide, then they're really saying
we don't know anything at all.  If the confidence interval ends up being
very narrow, then we can probably say we know a lot.
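
		The interval Dr. Johnson asks for can be sketched as follows.  The protection times are hypothetical, and a normal critical value of 1.96 stands in for the exact t quantile:

```python
from math import sqrt
from statistics import mean, stdev

def diff_ci(a, b, z=1.96):
    """Approximate 95 percent CI for the difference of two means,
    using a Welch-style standard error and a normal critical value.
    A wide interval means the data cannot distinguish the treatments;
    a narrow interval near zero would support equivalence."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    d = mean(a) - mean(b)
    return d - z * se, d + z * se

# Hypothetical complete protection times, in hours
lo, hi = diff_ci([7.5, 8.0, 9.75, 6.25, 8.5], [7.0, 7.25, 8.0, 6.5, 7.75])
```

		Here the interval spans zero but is about two and a half hours wide, which on Dr. Johnson's reading would mean the data say rather little about whether the two treatments are equivalent.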

		And so I guess right now that's my comments for this particular --

		CHAIR FISHER:  Can I ask a question, and then Jan wants to ask a
question?

		The ANOVA, that was on the treatment groups and the control?

		DR. JOHNSON:  No. I asked that question, he said it did not include
the control.

		CHAIR FISHER:  Okay.  

		DR. JOHNSON:  So it was only the four active --

		CHAIR FISHER:  But why do we care if they were different?  Was the
point of the study not to compare to each other, but to compare them to
the control?   No?  It was to compare the efficacy of each to one
another?  No?

		DR. JOHNSON:  Well, in the reports that we have here, the three
reports are comparing each of the others to Ultrathon.

		CHAIR FISHER:  Right, to the Ultrathon.  Not to one another?

		DR. JOHNSON:  Yes, they're comparing each --

		MR. CARLEY:  Each test material was compared to the positive control.  None of them was compared statistically to the untreated control.

		DR. JOHNSON:  Right.  But the ANOVA, as I understood it, compared the
four active treatments to one another.

		CHAIR FISHER:  Well, that's what I didn't understand:  why does it matter whether they're alike or different?  I mean, I understood your analysis --

		DR. JOHNSON:  Except in their conclusion.

		CHAIR FISHER:  Well, yes.

		DR. JOHNSON:  Here it seems to be that they're all the same and the
ANOVA said that they were different.

		CHAIR FISHER:  But not for the outcome.  The purpose of the study is
not to compare those, but to compare those to the controls, supposedly,
the positive and negative controls?

		DR. JOHNSON:  Well, I didn't see the protocol so I'm not sure that I
know the purpose.

		CHAIR FISHER:  Okay.  I'm just confused as to why it matters, other
than something interesting about whether there's something there in
day-to-day. But I'm just not sure.

		Okay.  

		DR. CHAMBERS:  Yes, just continuing that same point, Celia, I think our perspective is not whether you can make a valid comparison amongst the various products or not.  That's for somebody else to be interested in.  But it is, can you trust the complete protection times that were calculated from this particular protocol for each one of the compounds independently?  And that was the point that I got out of it.

		CHAIR FISHER:  And can I just ask, so the smallest -- the shortest interval in which there was a bite was what time period?  I guess my question is -- I'm trying to get at it.  I think it was Dr. Strickman who was talking about there's always that odd mosquito that nothing bothers.  And I'm wondering why wasn't that odd mosquito around.  I mean, I don't think we saw the odd mosquito.  Yes.  Yes.  And so is that realistic, not to see that?

		I mean, what you were saying just changed everything, and it really questions, you know, complete protection time.  Because really what you're saying is there should never be complete protection time if you're counting every single one.  Because there is always some oddball mosquito around that doesn't care about repellency or something.  So did we see that oddball mosquito in the data?

		DR. CHAMBERS:  Well, there were some unconfirmed landings that occurred earlier in the study that were not confirmed within half an hour.  So those would have been some of the earlier ones.

		DR. LEBOWITZ:  But that's also site specific. And the example he was
using was an extreme site.  So the oddball may only occur in the extreme
sites.

		But let me go on, since Paul asked me, because Alicia wasn't here, to also review it.  My review is very quick:  I agree entirely with what Dr. Kim said.

		CHAIR FISHER:  Okay.  I like that kind of review.

		DR. JOHNSON:  To answer your one question, it looks like the shortest
time was 6.25 hours.

		CHAIR FISHER:  But that's for the confirmed.

		DR. JOHNSON:  Right.

		DR. LEBOWITZ:  There was an unconfirmed bite at -- 

		DR. JOHNSON:  That's the date --

		DR. LEBOWITZ:  I'm sorry.  I think it was five hours.  The first confirmed bite was about 6.25, 6.45 depending on --

		DR. JOHNSON:  That was confirmed, yes.

		DR. LEBOWITZ:  -- which stack you look at.

		CHAIR FISHER:  Thank you.  Because I was thinking something different
after Dr. Strickman's talk.  So thank you.

		COL. GUPTA:  No, that's not out of the ordinary.  I think Dr. Strickman and I both emphasized that when you're looking for the complete protection time, there are some mosquitoes on either end of the normal distribution that can bite anytime.  So it's farther out.  So it's expected all the time.

		DR. SCHOFIELD:  It's a probability game.  I mean, we say it's 100 percent protection.  Maybe it's 99.999 percent protection and we just don't encounter that one.

		So I think it's fairly representative and we see it all the time.  And
it's not unexpected.

		CHAIR FISHER:  No.  I guess my question was is it unexpected not to
find that?  To have the outside be five hours until a confirmed bite if
that's a --

		DR. SCHOFIELD:  It's a question of the size of the population that
you're exposed to.

		CHAIR FISHER:  Okay.  So it's not out of the ordinary?

		DR. SCHOFIELD:  It's not -- absolutely no.

		CHAIR FISHER:  Okay.  Dan?

		DR. STRICKMAN:  Yes. I think five hours is actually pretty impressive.
That's half the protection time.

		CHAIR FISHER:  Thank you.  So that helps me a lot.

		Okay.  I'm a little confused here because we still have -- I'm truly
confused. Then we do ethics and then we would be finished with 001 and
then we'd go to -- right, then we do --

		DR. LEBOWITZ:  Celia, we haven't summarized the scientific.

		CHAIR FISHER:  No. That I know.  That I know.

		DR. LEBOWITZ:  Okay.  

		CHAIR FISHER:  I just didn't know if there was more science to
summarize.

		DR. JOHNSON:  I guess I'd be interested in whether the consultants
have any input with respect to these charge questions.

		CHAIR FISHER:  Well, I think what we might want to do is maybe if
there are questions that we have for them, I don't know if Kannan wanted
to say something.  But at least what I'm hearing up to now is that the
study was messy, but there's no evidence -- and this is I guess maybe
the feedback from the consultants.  I'm just trying to summarize what
I've heard.

		There's no evidence that there were residual effects.  So for the close spacing of the time periods, there's no evidence it would matter; they also had washings, and there's an eight- or 12-hour loss of effectiveness, something like that, that would have occurred anyway.

		There's no evidence that one limb versus another limb would have made
a difference.

		There's no evidence that the proximity would or would not have made a
difference.  Differences in the temperature, humidity may or may not
have been a factor, but because it was on two Mondays or something like
that, there's a little more confidence in I guess the temperature or
weather control.

		There's something about the ANOVA and something about whether or not the test materials were different from one another.  But that wasn't the primary issue of the experiment, nor was there enough power to really have that ANOVA mean something.

		In terms of pooling, here I'm not sure what we've concluded or what
people have been saying.  The subjects were not properly randomized. 
Right.  

		No, I know. He can correct me.

		The subjects were not properly randomized.

		You didn't have the statistical independence and the error estimates
were wrong.

		All experimental materials should have been statistically tested against the control together.  But that wasn't done before the pairwise testing was accomplished.

		So maybe, Dr. Kim, if you could say how -- I wasn't clear how strong
that was in terms of questioning the results?

		DR. KIM:  Well, as it was presented, I mean, I cannot take stock in the results that were presented, because it completely ignores the dependence within the summary.  I mean, when you combine site one and site two there are up to three subjects overlapping.  That has not been accounted for.

		CHAIR FISHER:  So what you're saying is that the overlap and lack of
independence makes the results uninterpretable?

		DR. KIM:  Right.

		CHAIR FISHER:  Okay.  

		DR. LEBOWITZ:  Yes.  Actually I'm going to amend that and say that because of the attempted pooling and the number of subjects per day, and the limitations, the failure to follow the protocol and the lack of appropriate randomization, et cetera, et cetera, one goes beyond -- one has to reach the question of whether we could conclude that it was scientifically sound.  And a number of us -- I mean, at least I have reached a decision, because of the pooling et cetera, that it was not scientifically sound.

		CHAIR FISHER:  So the first question in terms of the residual time and
all that kind of stuff, there's no evidence that it wasn't
scientifically sound?  But the pooling questions the scientific
interpretability and validity of the results.  That's what I'm hearing
so far.

		Richard?

		DR. BRIMIJOIN:  I want to interject here.  So this is -- so we're
virtually on the brink of essentially trashing the whole study on the
basis of the statistical -- flawed statistical analysis.

		So my question is granted that the design is not optimal and it was
not analyzed or it was analyzed in ways that would require assumptions
which cannot be confirmed.   

		Are you also saying that going back to the raw data there is no way to
extract any meaningful scientific information from this study?  Because
that's what you seem to be saying, and I want you to say that flat out
if that's your opinion.  Or is there some way that this data set could
be reopened and examined, messy though it may be, and to draw some
reasonable conclusions?

		DR. KIM:  In answer to your question, if the experimental controls were in place in terms of the distribution over dates and control of sites, this is something that we do all the time with proper controls.  But what I see in the presented data is that there was no control of these critical experimental conditions, such as allocation of subjects to sites, allocation of subjects to treatments and allocation of subjects over dates.

		I mean, you can analyze the data, I mean, however you like.  But can you trust that analysis?  Because all the assumptions that we need are not met here.

		CHAIR FISHER:  So I think that he's answering your question as no, it
can't be analyzed based on its design, at least from what you're saying,
right?

		DR. KIM:  Right.

		CHAIR FISHER:  Okay.  Mike?

		DR. LEBOWITZ:  Yes.  Let's back off the pooling for a second.  I mean, I think Mr. Carley did a wonderful job of going over the number of subjects per day in each treatment arm, et cetera, et cetera.  Even forgetting whether they randomized or not, just getting to the number of subjects per day in each treatment arm, and any confounding that may have occurred day-to-day, et cetera, or within a treatment arm, from treatment arm to treatment arm, is basically the reason why I feel that I cannot say it's scientifically sound.  Not the analysis per se.

		CHAIR FISHER:  Thank you.  That's helped.  Because I think I did not
phrase it appropriately.  And I'm glad you asked the question, Steve.

		Richard, is yours a different or is it related to this?

		CHAIR FISHER:  Okay.  We'll stay with the statistics and then we'll go
to Richard. And I also see Dr. Gupta and Jan.

		DR. JOHNSON:  Well, I think that what the consultants told us this morning is that there's a lot of variability from subject to subject.  And I also heard from Dr. Gupta that he tends to use -- it sounded to me like he tends to use Latin square type treatment structures, which also fit into what could be called the crossover designs that Susan mentioned.

		And part of the reason to do that, I think, is to control for that difference among subjects.  And so -- oh, it's not so bad that they have a subject experiencing both treatments or three treatments or all four treatments; it's just that it's not balanced in any way.  And if it were balanced in some way, then the fact that we used the subjects over and over wouldn't be a problem.  We know how to take care of that in the statistical analysis.  But it's the imbalance here that really gives us a lot of trouble in terms of whether the analysis means anything or not.
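
		The balanced structure Dr. Johnson has in mind can be sketched with a cyclic Latin square; the treatment labels here are hypothetical stand-ins for the four products:

```python
def latin_square(treatments):
    """Cyclic Latin square: row r (subject), column c (day) gets
    treatment (r + c) mod n, so every treatment appears exactly once
    per subject and once per day.  That balance is what lets subjects
    be reused without confounding subject, day and treatment effects."""
    n = len(treatments)
    return [[treatments[(r + c) % n] for c in range(n)] for r in range(n)]

design = latin_square(["A", "B", "C", "D"])  # 4 subjects x 4 days
```

		In practice the rows and columns would themselves be randomized, but the balance is the point:  with it, the repeated use of subjects is handled routinely in the analysis.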

		DR. CHAMBERS:  I think Steve and I -- Steve Brimijoin and I -- are, I think, on the same page.  It's that this is a field study.  And, you know, it's not ideal; it's not a white-rat-in-a-cage study where you can really control things.  But if it were redone, would it come out any different if it were designed properly?

		I'm not at all convinced that it would.  Our consultants have pretty much said that things are going to vary day-to-day, site-to-site, time of day to time of day and everything.  So what you look at here is you've got four products, and one of them came out a little bit lower than the other three.  You've got the other study we haven't even gotten to yet, and it's two or three hours lower than this one.

		So the study design, even with its flaws, was able to make some discrimination amongst the various products.  So I have confidence that it's about as accurate as you're going to get for a field study on mosquito repellents.  And to say that it's flawed because it wasn't set up statistically quite right -- I mean, that is true; I will agree to that.  But whether that makes it unusable and requires that it be redone, I don't think so.

		CHAIR FISHER:  Dr. Gupta? 

		COL. GUPTA:  It's a very interesting discussion I'm hearing.

		What happens is in the -- your point is well taken.  When you're in the field there are multiple factors, variables, which are out of control.  If a study is done, as was mentioned, with the location of the treatments and all of that done appropriately, the analysis can help.  Like in the study where we only had partial data:  you can use statistical methods to calculate the missing points.

		See, I have personally done a lot of studies where we could not get all the observations in the experiment, but you can use statistical methods if the study design is sound.  You can use the statistical analysis to predict or calculate the missing points, and thereby validate your studies.  So maybe in this case what is needed is to go back and do a different statistical analysis to account for the variables which we are having this discussion about.
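
		A minimal illustration of the kind of fill-in Col. Gupta describes, here a simple per-treatment mean imputation; the numbers are invented, and a real analysis would use a model-based method under a sound design:

```python
from statistics import mean

def impute_row_means(table):
    """Fill missing cells (None) in each treatment's row of protection
    times with that treatment's observed mean -- a simple stand-in for
    model-based imputation of missing points."""
    filled = []
    for row in table:
        m = mean(x for x in row if x is not None)
        filled.append([m if x is None else x for x in row])
    return filled

# Hypothetical protection times (hours), one missing value per treatment
filled = impute_row_means([[7.5, None, 8.5], [6.25, 7.0, None]])
```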

		CHAIR FISHER:  So I guess the question raised by Jan and also others, and maybe Steve and Dan would comment on it, is that there's a problem with the design in that they didn't use a balancing randomization, a Latin square; appropriate ways of controlling for those kinds of things were not put in.

		So there are two points going on.

		One is Dr. Gupta has suggested, well, is there a way to extrapolate for that error, which is different from missing numbers.  I mean, a pooling problem is different from empty cells.  But is there a statistical way to account for that?  Jan's question is, does it matter?  Is there enough macro evidence that we might have gotten the same -- that one might feel confident you'd get the same thing if you balanced the design?  Something like that.  Okay.

		DR. SCHOFIELD:  First off, I'm a big bar little bar kind of guy, so I
think I need to state that caveat.

		I think it's a very valid point.  I think my expectation would be that we would not necessarily see a substantial difference in the estimates, even were we to have a more robust statistical algorithm.  However, that's just an expert opinion in terms of what we typically see.

		My concerns would be more focused on whether or not the environmental parameters were different.  Whether or not, for example, some of the discussion we had earlier in terms of intermittent sampling would have a more profound impact on things.  In all likelihood the relative performance of these products is probably what you would expect to see in a more robust design.

		DR. STRICKMAN:  It comes down to one of the comments that was made:  if you're trying to make a comparison between the products, which was the way it was designed, apparently, then it's not robust enough to do that for the reasons that the statisticians are saying.  But if you're just trying to get an estimate of duration for each individual product, then you have something to talk about.  But the part of the design that worries me the most is the way that it was carried out throughout the day.

		And these mosquitoes in general, although they bite all day, bite more vigorously toward the evening.  And so if anything, there is probably a bias to underestimate the duration.  And, you know, that's a real bias there.

		So, I don't know if that helps.

		CHAIR FISHER:  I don't know why if they -- I'm sorry, I'm just
confused. But they started out in the beginning of the day and
mosquitoes bite more at night, why would that be an underestimation of
how long?

		DR. STRICKMAN:  Because the population is more avid at the time that
the repellents are starting to degrade.  So if they were avid --

		CHAIR FISHER:  I see. I see. Thank you.  

		DR. STRICKMAN:  -- then it might go on a little longer.  Your average
might move out there.

		CHAIR FISHER:  Okay.  Thank you.  

		Richard and then Jan and then -- wait a minute.  Kannan. I missed him.
 Kannan. I also missed Rich.  So I'm totally confused.  But Kannan,
Rich, Jan and then KyungMann.

		DR. KRISHNAN:  Mine is going to be somewhat general; I'll keep the specific ones to myself unless we -- because I don't want to talk about some of the issues that we already touched upon.

		CHAIR FISHER:  Well, wait a minute.  Are we done with this?  Was anybody going to address this particular issue?  Okay.  So let's stay with that.

		So Richard, Jan and KyungMann.

		DR. SHARP:  And this is a question directed primarily at Dr. Kim, actually.  And it's, I think there's agreement that the data came up a bit short.  But what I'm confused about is why you think the data came up short.  Because this was a study that the Board actually reviewed just a short time ago.  And so I think from my point of view there are three explanations as to why things might have gone wrong.

		First is that we looked at the study design and we got it wrong; the recommendations we made fell sort of short.

		Another explanation is that perhaps Dr. Carroll fielded this in a way that didn't make a lot of sense; he sort of operationally got it wrong.

		The other possibility I think is just that it was bad luck.

		And certainly there are other possibilities.  But I'd be interested in
hearing from you whether you think the Board is acting consistently in
saying now that this data came up short given the fact that we did
endorse the project earlier?

		DR. KIM:  In fact, at the January meeting Dr. Alicia Carriquiry, who was a reviewer for this study, had a lengthy discussion about the proper design of the experiment.  None of her recommendations were taken up by the sponsor.

		And then there were the operational issues of not being able to
recruit, essentially, in my view, enough volunteers to be able to
conduct the study as originally planned.  So what happened at the end of
the day is not by design; the spread of the subjects over days and sites
occurred in a haphazard way.  At the end of the day you cannot do the
proper analysis.

		Getting to Dr. Gupta's comments, when you have a properly designed
experiment -- and, as Dr. Chambers pointed out, with a field study -- I
come from a human clinical trial sort of field -- there's all kinds of
deviation.  That's a given.  But if you have the proper experimental
design and if there are logistical issues and some data points do not
get filled, then you can use imputations of the kind Dr. Gupta makes and
carry out the proper analysis.  But if you don't have that plan in place
and you just follow along haphazardly, you have no statistical basis to
do analysis.

		Another point I want to make is about perhaps the estimation itself. 
If you are looking at the bias, probably these results give the proper
-- I agree with Dr. Chambers that if you do carry out the proper
performance study in the same setting, probably your predicted
protection time will come out the same.  The question is you cannot
estimate the errors associated with your mean because of the way the
experiment is done.  You cannot estimate the variance.  The mean is
okay.

		CHAIR FISHER:  Right.  I have a couple of questions, but one follows
what Dr. Strickman was saying.  When you're saying you can't rely on the
mean, are you talking about the comparisons among the products or -- he
was saying there it's too small and you can't do that.  But what about
the estimation of duration?

		DR. KIM:  That's what I'm talking about.

		CHAIR FISHER:  That's what you're talking about.  Okay.  

		DR. KIM:   I think probably this will give you the proper data.  But
the problem is you cannot estimate the variability of the estimated
mean.

		CHAIR FISHER:  He's agreeing with you?

		DR. STRICKMAN:  Yes.  I think that's very clarifying.

		DR. JOHNSON:  Another way I think to say what Dr. Kim is trying to
say, is you get an estimate plus or minus numbers.  He says it's the
plus or minus stuff that you can't estimate right.

		DR. CHAMBERS:  But I thought the duration was what this was all about,
right?  That's what you're trying to get; complete protection time.  And
the variability isn't really part of the label, right?  

		And just to go back, I guess, to Dr. Strickman's point a few minutes
ago.  If this ends up being a conservative estimate of complete
protection time because of the greater evening activity and everything,
then your number will be more protective of people, if that's what your
estimate turns out to be.

		CHAIR FISHER:  Okay.  Yes, you may.

		DR. BRIMIJOIN:  Just my own take on this again.  I'm starting to think
of this, and the one that follows, as a bunch of disconnected studies. 
Each one is a separate study on a separate compound.  They had nothing
to do with each other.  We're not going to compare one with the other. 
It's just by accident that these studies happen to involve in some cases
the same subjects or overlapping subject populations.  

		And it's also an unfortunate fact that each of these products was
examined in two sites.  And some of them were examined more in one site
than the other site.

		And from each of these studies, then, we've got some estimate, we've
got a report back about how long it lasted.  That's data.  It's field
data.  And it has this variability in it.  Same kind of variability we
might have if we did an agricultural study and extended it over days. 
And some of the things were done on -- you know, nothing is ever quite
the same.  So there's an uncontrolled variable in there, which we know
might influence things, and probably did a little bit.

		So we have these numbers. And so EPA knows. So my bottom line is that
these numbers have to be taken with a grain of salt, but they're not
worthless numbers.  

		And it's true.  It's true that we just do not know how precise they
are, and there's no way of extracting that information.

		CHAIR FISHER:  Before Kannan speaks, I did want to clarify, in
response to Richard's question, what it sounded like you were saying,
Dr. Kim: that the pooling issue was brought up by Dr. Carriquiry last
time.  What wasn't done was -- it was not that Dr. Carroll was following
what we told him to do.  It was that some of what we recommended was not
incorporated and some of what was in the protocol was not conducted, is
that what you were saying?  Okay.  So it's not that we're inconsistent. 
So I'm glad you asked the question and clarified that.

		Yes?

		MR. CARLEY:  I don't have the report from the January meeting in front
of me.  But my memory is that several of the issues, the specific issues
that have arisen in review of these completed studies were not addressed
by the Board in the January meeting.  They were not among the points in
Dr. Carriquiry's list.   One specifically that wasn't mentioned, for
example, was this pooling across days.  I'd have to go back to that--

		DR. SCHOFIELD:  That's my recollection also.

		CHAIR FISHER:  Right.

		DR. BRIMIJOIN:  What she really harped on was the sample size.

		CHAIR FISHER:  But I thought the pooling was not in the design to
begin with.  So there wasn't a plan. When we reviewed it, was there a
plan to pool?  No.

		So the issue really here is whether or not we're giving conflicting
advice.  And it doesn't sound like we are.  That the issue of pooling
was not raised because that was not in the protocol.  And so to suggest
that Dr. Carroll shouldn't pool was really not presented to us because
we didn't assume he was pooling.

		So Paul had asked me, and I just wanted to make sure that we clarified
that.

		DR. LEBOWITZ:  But I thought we had told him -- I mean, we had
accepted that he would use ten subjects per treatment per day, basically
in order to test it.  And then -- okay.  

		DR. PHILPOTT:  I actually do have it on my computer.  And I mean our
scientific considerations explicitly say that we raised several comments
in terms of the statistical design.  And that if the recommendations
provided by the EPA and those suggested by us were followed, then the
protocol would be likely to generate scientifically valid data.

		So I think it's both what Dr. Kim has said and what Mr. Carley has
said: we're not being inconsistent.  We did make a number of statistical
recommendations; however, a lot of these were issues that we expected to
come up.

		CHAIR FISHER:  Yes.

		MR. CARLEY:  But I think we can also say that if we looked at another
protocol that had several substances, some specific questions about
pooling, about Latin square designs and so forth, would be on our
agenda.  We would have to think about those another time.

		CHAIR FISHER:  Absolutely.  Absolutely.

		DR. KRISHNAN:  One thing that this underlines is that there is no
feedback loop here.  I mean, we do make these recommendations.  The
protocols don't come back to us.  That's not how it's intended.  But if
some level of exchange happened between EPA and the investigators on
this -- maybe if they just updated their protocol or something and
provided it to the EPA -- that probably would be useful.  Because there
is no communication occurring after we make the recommendations.

		CHAIR FISHER:  Well, they're not supposed to -- right. 

		DR. KRISHNAN:  So maybe that's a part of the problem.

		CHAIR FISHER:  Okay.  We have got to finish.  We've got to go home
tomorrow.

		DR. KRISHNAN:  Yes.  

		CHAIR FISHER:  Yes?

		DR. KIM:  I'm looking at Dr. Carriquiry's comment about the
experimental design.  She used an italicized expression indicating that
the attached studies be replicated in two locations using different
subjects.  It's right there.

		CHAIR FISHER:  Okay.  I think it's clarified.  We are not
inconsistent.

		I'm going to rush through, maybe, to where I think we might be
agreeing.  So, Kannan, are you going to destroy that?

		DR. KRISHNAN:  I'll speak fast.

		I just had a couple of things to say.

		Okay.  (1)  It's only that some of the protocol deviations are
attributed to -- you know, unanticipated environmental conditions, it
says here, the West Nile Virus issue -- whereas the other deviations
clearly are not related to that.

		A general suggestion or question that I have is that shouldn't there
be a place for a contingency plan, particularly for field studies such
that certain deviations could occur but still be ethical?

		CHAIR FISHER:  Let's save that for the ethical.  The ethics
discussion, which is going to be very different.

		DR. KRISHNAN:  Yes. I mean still be scientifically valid and ethical.

		CHAIR FISHER:  Yes.

		DR. KRISHNAN:  And finally, back to the protocol deviations on the
scientific part.  EPA, you know, listed a few questions on specific
issues, five or six of them.  When I look at those, if those issues had
existed during our review -- if that's how the study was going to be
done -- I think there would have been hesitation, or a request for
additional information, before letting it roll.

		CHAIR FISHER:  Okay.  So now we have to -- Michael, don't leave. 
Don't leave yet. We're just trying to conclude on the basis of the
science.  Okay.  

		So the study was messy.  And we know all the problems with it.  Some
of the messiness -- there is no evidence that the messiness itself
changed the data with respect to legs and time and how close people were
to one another. But the overlap and pooling in the design makes any
comparison of products not legitimate. There's no way to legitimately
compare products.  

		The data might be useful for an estimation of duration, but the
standard deviations -- right, I think that's what you were saying,
Dallas -- one can't really be secure about the standard deviations. 
There might be a macro -- it might be useful to EPA as a macro estimate
of hours that, in fact, as Dr. Strickman was saying, going into the
night, ends up being conservative, because it becomes potentially more
dense.  I don't know if I'm using the right language.

		So I guess if we were going to have an answer to this question, for
the first question, we have no evidence that all of these factors would
have changed the results one way or the other.  And that's the only way
that we can answer them, right?  Jan, that was your -- right.  Okay.  

		The second one is that pooling did present problems.  The major
problems are in comparison of the -- we don't recommend using this data
to compare the products to one another.  We only recommend the data for
some kind of global estimate of the number of hours, but not individual
differences vis-a-vis the standard deviations.  And we're not happy
with studies that aren't performed as planned.

		So is that about our consensus here?  Okay.  All right.

		Let's go to ethics.   

		DR. PHILPOTT:  Okay.  Do we have the first question up?

		Yes.  So I'm actually going to start by answering a specific comment
first in the context of the studies that we're looking at here.  It is
my opinion -- and my colleagues may agree or may differ, and I'll leave
that up to them to comment -- that because of the way that these studies
were conducted, and because the different compounds were tested
simultaneously, we've essentially got three or four substudies here that
are integrally related.  If we find that one of them was conducted
unethically and it is our recommendation that you can't use the data,
that applies to the whole data set.  You can't cherry-pick data on the
basis of -- well, let's not talk about WPC-001 just yet.  Let's focus on
SCI-001.  And if we find that these protocol deviations reach a level of
leading us to recommend that you not use the data, I would personally
say you have to throw out the whole data set.  You can't cherry-pick.

		Now, let me just start by saying Mr. Carley did an incredible job of
summarizing and bringing together what was a huge amount of very
disparate data spread across multiple studies.  My only regret was that
I did what I always do, which was I read the protocols first and started
trying to draw my conclusions before I read his review.  So I ended up
wasting a lot of my time trying to trace who did what when he so nicely
did that all for me.  And, actually, did a better job than I did.  But
thank you very much. I mean that was an incredible amount of work and I
think you deserve a lot of kudos for your review.

		In general I agree 100 percent with your conclusions.  And for the
purposes of time, I am going to only focus on I think what is the big
issue for the ethicists with regard to this particular set of studies,
and that is the serious protocol deviations.

		In particular, I'm going to focus on three of them.  That is the
conduct of the serological or molecular analyses of mosquitoes, the
change of the exclusion criteria and then the substitution of 3434 for
Insect-Guard II without notifying the IRB of these changes.

		And I'm going to start with the serological testing of the mosquitoes
-- no, it was molecular testing, the PCR.

		Personally, should that protocol change have been submitted to the IRB
and reviewed?  Absolutely.  I don't consider it a big deal, though.  And
my rationale for it is this.

		It increased, in all likelihood, protections for the subjects.  And
consider if those analyses had been done separately and independently;
if Dr. Carroll had collected the mosquitoes and then a few years later
had decided to look back at whether or not they were infected with any
arthropod-borne pathogens, it wouldn't have been a big deal for us.  We
wouldn't have even expected it to go to an IRB.  The fact that it was
conducted sort of in conjunction, and those data would have been used to
inform participants of potential exposure, does require IRB review and
approval.  But I don't consider that to be that serious of a deviation
to warrant any other mention other than don't do it again.

		Now, the second is the change of the exclusion criteria. That is a
bigger deal.

		The third is the substitution of a different test compound.  That is a
huge deal.  I would have liked to have known what the IRB's response was
when Dr. Carroll informed them of these protocol violations.  And I have
not seen that information.

		The question that we have to really ask ourselves is does the serious
nature of these violations in any way either put the subjects at greater
risk or did it somehow compromise the informed consent process to the
point that the study is so deficient that we cannot recommend the data
be used.

		I am going to use my prerogative as the lead discussant not to answer
the question yet myself, but to wait and see what my colleagues say.  I
have an opinion, but I'm doing five of these studies.  So I'm taking a
little -- I do have to say, though, that I am horribly disappointed that
this level of protocol violation occurred, particularly since this is
not the first time that we've had a serious protocol violation involving
the failure to inform an IRB of changes prior to their initiation.

		And once again, it really raises questions about whether or not the
study investigator in this case is cognizant of exactly what is required
with respect to human subjects regulations and protections.  And so I
would strongly recommend that, before a single additional subject is
enrolled in any trial, those requirements once again be reviewed.

		CHAIR FISHER:  Thank you.  Richard, thank you.

		DR. SHARP:  Someone once said to me I know who I am.  I later married
her.

		I also want to say that I'm quite disappointed with what's going on
here, too.  And my disappointment comes from a very different source,
though, and that's that there were a number of subjects that volunteered
their bodies in the service of science here.  They believed that they
were doing something that was going to actually yield results that would
serve some higher purpose.  And I think that the way in which the study
was operationalized here compromised the ability of the study to have
any value.  So in that sense I think that this is a serious ethics
issue, because it raises broader issues of public trust and the ways in
which we may have inappropriately asked these individuals to make some
level of sacrifice in a manner which simply wasn't warranted here.

		But the two issues I want to come back to are the two that Sean has
already alluded to.  For me I think the two important protocol
deviations are two of the three that he highlighted.  And, actually,
they're not quite the same ones that Mr. Carley highlighted.  So let me
just reemphasize that point.

		The first issue is an IRB issue.  Right?  There was this departure
from the stated exclusion criteria for the study.  I really strongly say
again, I think that that is something that ought to have been reported
to the IRB.  Dr. Carroll reported to us this afternoon that that had
still not been reported to the IRB.  That is a substantial deviation
that needs to be reported.

		My own sort of personal assessment on that level is that this may not
actually be a matter that changes the risk to benefit assessment. 
Right?  There may not be any additional risk incurred to subjects
because of that change or the fact that they may be enrolled in sort of
the same study in multiple arms and so forth coming back again and
again.  But that's not my decision to make. That is simply the IRB's
decision to make. Pure and simple.

		It's very clear what their charge is.  It's not our charge. It's not
Dr. Carroll's charge. It's something that the IRB should have been
informed about and they should have made that determination as to the
implications for what needed to be relayed to subjects and so forth.  So
that's a major issue here.

		The second big deviation here has to do with the change in formulation
of product.  And here I think that this is really a complete no-brainer
when it comes to the implications for the HSRB.

		The regulations clearly state that before studies that involve these
intentional exposures can take place, subsequent to the issuing of the
rule, right, the rules that we operate under here require that the study
be reviewed by this Board and some approval issued in advance.  A
product that we did not hear about was deliberately given to these
subjects.  For me it seems absolutely obvious that none of that data
having to do with LipoDEET 3434 can be used by the EPA.

		And so those are the two major issues.  Whether or not that by
extension then implies that all the other data points can't be used from
this study as well, I'm less convinced of that.  But surely, for the
data that has to do with that particular formulation of product, the way
in which it was obtained is not consistent with the rule.

		CHAIR FISHER:  Thank you, Richard.

		Jerry?

		DR. MENIKOFF:  Yes.  I basically think my colleagues expressed it very
well.  In terms of those three points, I'm actually less troubled about
the serological analysis and the exclusion criteria.  But as has
been noted, the change in the test compound is huge from an ethical
viewpoint.  The only reason that this Committee is in existence is
because Congress passed a law and EPA imposed regulations that are about
intentional exposure. It's not about mosquitoes biting, it's not about
all these other risks we're often reviewing.  But being exposed to
particular types of compounds.

		And I must admit, my personal sense of many of these compounds is that
actually they're pretty benign.  I spend my day job usually dealing with
drugs, which seem a lot, lot riskier.  But nonetheless, Congress chose,
and EPA backed it up, in terms that there have to be certain types of
reviews of intentional exposures to these types of compounds.  This
study was changed in a way that it changed a compound that the person
was being exposed to. It did not get the reviews, as Richard has noted,
that are required.

		I mean, it is huge that the IRB did not review this.  I mean, think
what would happen if an IRB in another context was reviewing drug
studies and somebody changed a drug -- changed from one antibiotic to
another, unapproved antibiotic; I can't imagine how people would respond
to that.  And that's basically what happened here.

		So we talk about substantial compliance.  There's no way in my opinion
you could conclude that this is in substantial compliance with the
appropriate regulations.

		I think there are obviously concerns in terms of a huge amount of the
data, just in terms of the group that was exposed to this compound.  And
as Mr. Carley has noted, that's like two-thirds of the data points.  But
even beyond that, I think you're going to have issues, and I guess we'll
get to it in a moment, in terms of even the other data points.  Because
to the extent that people were randomized between this compound and
others, I think you have serious problems with any of those data points.

		CHAIR FISHER:  Sure.

		MR. CARLEY:  In my presentation I said that this affected 66 of the
100 data points across the two studies, but that was assuming that all
data on a subject who received LipoDEET 3434 was dropped.  If you
dropped only the data points for that compound, it would affect only 20
data points, the 20 treatment days for that compound.

		CHAIR FISHER:  Okay.  Sue?

		DR. FISH:  I would just like to point out for the record that in the
IRB application to IIRB the investigator signs an investigator
acknowledgement which includes a statement that says "I agree not to
make any changes in the research without IRB approval."  And so I think
that states that this should not have been done, and that just supports
everything that my three colleagues have said about how important this
issue is.

		CHAIR FISHER:  Okay.  Are you going to give us your opinion now, Sean?
 We've been waiting.

		DR. PHILPOTT:  I am.  And it's exactly what Rich and Jerry have
already articulated.  While this may not seem to outsiders such a big
deal -- because this is a compound, as Dr. Carroll argues, with only a
slight change in formulation, similar to other formulations that are
already out there -- the fact of the matter is that we are in serious
violation of both the letter and the intent of the regulations here. 
And because of that it is my opinion that ethically you cannot use these
data.

		MR. CARLEY:  Could you please clarify the extent of the data that
you're saying we can't use?

		DR. PHILPOTT:  I've clarified my opinion of the extent of the data. 
Because, as has been pointed out both in our science discussions and
everything else, because of the randomization and the integration of all
of these compounds into a single protocol, that to me means that you
cannot use the data.  Any of the data.  

		And, yes, I'm not referring to WPC-001.

		MR. CARLEY:  Any of the data in SCI, in any of the three subreports
for SCI-001?

		DR. MENIKOFF:  We haven't had the panel addressing the second of the
ethics questions, right?  You have a separate panel for the second
question.

		CHAIR FISHER:  Yes.

		DR. MENIKOFF:  Which is what I think you're asking about.

		MR. CARLEY:  All right.  We started with the second question.

		DR. PHILPOTT:  We have that charge question.

		DR. MENIKOFF:  But Sue, isn't Sue in charge of that?

		DR. PHILPOTT:  No.  So do you want to go back to that question, Jerry?
 I had sort of touched upon it in beginning to give my opinion that the
--

		CHAIR FISHER:  Wait.  Before you do this, let's be clear about what
study are we talking about and whether in fact we should be talking
about both studies.  Are they the same issues?  Are they different
issues?  

		Jan, do you have a question?

		DR. CHAMBERS:  Well, they're written up as three independent
compounds, plus the Ultrathon and all.

		DR. PHILPOTT:  It was a single protocol that we approved and reviewed
that had all three compounds plus Ultrathon, and individuals were
randomized as to which compound they received.

		DR. CHAMBERS:  Yes.

		CHAIR FISHER:  The ethical issue, Jan, is the randomization.  In other
words, it wasn't as if one group's informed consent said you're going
to receive this fourth chemical that I've added, or whatever.  It was
that the randomization itself that we approved was to one of these
three compounds or the control.  So the randomization from an ethical
perspective and an informed consent perspective, and what the subject
was prepared to face as risk or inconvenience, is the issue from an
ethical perspective.

		But can you answer my question now or everybody's question, are we
talking about both protocols or one protocol?

		DR. MENIKOFF:  Yes.  I'm just talking about I guess whatever this is
as --

		CHAIR FISHER:  Right.  But is that the case --

		DR. MENIKOFF:  From my viewpoint -- and I think I'm saying the same
thing as Sean -- any of the subjects who basically had the chance of
being randomized to that unapproved compound, that compound that was not
part of the protocol --

		CHAIR FISHER:  Okay.  

		DR. MENIKOFF:  -- none of the data relating to their exposures to any
of the four compounds should be able to be used, in my viewpoint. 
Because again, the wrong was that that protocol was not an approved
protocol.  One of the arms they were being assigned to was not approved,
and therefore it was improper for them to be exposed to these--

		MR. CARLEY:  So this would then apply to any data from any subject who
was consented into SCI-001?

		DR. MENIKOFF:  Yes.

		CHAIR FISHER:  Right.  Okay.  And some of those ended up being in the
other one.  Got it.

		Okay.  Okay.  So any comments on this, the three -- 

		DR. BRIMIJOIN:  A clarification, please.

		CHAIR FISHER:  Yes.

		DR. BRIMIJOIN:  So basically I guess we're trashing SCI-001 totally. 
Since there's overlap of subjects with the other protocol, which
involved a separate consent form, some of those people signed the
contaminated consent form, the compromised consent form.  So are we
encroaching on, making inroads on, the data in the second study by that
standard?  The same subjects, but a totally different protocol for which
so far -- we haven't heard the ethics review, but so far -- no
complaints about whether that's an ethical study?

		DR. PHILPOTT:  Correct.  But keep in mind that I did ask a very
specific question of Dr. Carroll regarding signing of consent forms.

		CHAIR FISHER:  Which was?

		DR. PHILPOTT:  Which was whether the people that participated in both
studies did sign two different consent forms.

		DR. BRIMIJOIN:  And he specifically answered yes.

		DR. PHILPOTT:  Yes.

		CHAIR FISHER:  So then we may make a conclusion about this one study? 
Okay.  

		So the ethicists are unanimous in their recommendation that I don't
know which federal regulation this -- you'll quote it.  Both?  Both? 
No, no. I knew it was K and L, I just didn't know which part of it.
Okay.  

		Are we in consensus?  Are there any other comments?

		Okay.  So now we move on to the science of the W-1.  But we're going
to take a break.  Okay.  We're taking a break.

		It's ten to 4:00.  And so let's come back at five after 4:00.

		(Whereupon, at 3:50 p.m. a recess until 4:05 p.m.)

		CHAIR FISHER:  Did you want to read us the charge?

		MR. CARLEY:  WPC-001 charge questions.

		A:  Is the research conducted under WPC-001 sufficiently sound, from a
scientific perspective, to be used to assess the repellent efficacy of
the formulation tested against mosquitoes?  Please comment specifically
on whether participation in field testing by several subjects on the day
after they had been treated with a different test repellent is likely to
have affected the validity of the results for those subjects on those
days.

		And question B:  Does available information support a determination
that the research covered by WPC-001 was conducted in substantial
compliance with subparts K and L of EPA regulations at 40 CFR Part 26? 
If the conduct of any part of SCI-001 is deemed not to substantially
comply with the requirements of subparts K and L, please comment
specifically on how to assess the ethical conduct of research conducted
under WPC-001 in light of the fact that it was conducted at the same
times and at the same places as the research covered under protocol
SCI-001.

		CHAIR FISHER:  All right.  Jan?

		DR. CHAMBERS:  All of the arguments and opinions that I would have on
this I already expressed earlier this afternoon.

		The only reservation that I really had on any of this stuff was
whether there were residual effects from one day to the next in the
subjects that were repeated the next day.

		For this particular one, the complete protection time came out at four
to six hours.  It was shorter, and that also suggests that there are
even fewer residual effects.  So I have even less concern with this one
than any of the others.  But the same bottom line on this is that I
think it's useable.

		CHAIR FISHER:  Okay.  Mike?

		DR. LEBOWITZ:  Agreed.

		CHAIR FISHER:  Okay.  Lois?

		DR. LEHMAN-McKEEMAN:  Ditto.

		CHAIR FISHER:  Okay.  Discussion?  And so the consensus is the data is
useable for the purposes it was intended for?  Lois?

		DR. LEHMAN-McKEEMAN:  I was trying to be brief, although not quite
that brief.  But I just wanted to point out that I really appreciate the
organization of the data that John Carley did for us.  Because it
facilitated my review tremendously.

		And the one point that I would make, just to add to Jan's perspective
and I'll step back for just a moment because I think that it's not
intellectually satisfying to say that these people were treated on
successive days and it raises some issues.  But I think what we rely on
is our common sense and practical instinct more so than empirical data
to the contrary.  So that's my first point of view.

		And the second is that, based on the organization of subject treatment
for this particular study from John's slide, which is number 13, it
appears that in the second arm of this study all ten subjects were
actually treated on the same day.

		MR. CARLEY:  In Glenn -- in one site but not the other?

		DR. LEHMAN-McKEEMAN:  Yes.  Site number two.  And what's interesting
is that that site -- and I realize there are lots of factors that can
contribute to that -- that site has the shortest CPT of any of the
days' trials.  So, again, I think when I put that together my sense is
that there's really not an experimental bias that's built in in light of
the compendium and composite of data and how this particular compound
looked when it was tested.

		CHAIR FISHER:  Thank you.

		Any other comments?  Okay.  So I think we have consensus on that.

		Let's go to the ethics discussion. Sean?

		DR. PHILPOTT:  It would help to have a mike.  I'm sorry.  I have to
bring my brain back to the issue here.

		So even though these studies were conducted on at least three days in
the same location, and we have a lot of subjects who participated in
both the study that we just reviewed -- and have recommended not be used
on the basis of a serious ethical violation -- and this study, I think
it's useful to try to separate the two in our minds and come back to the
issue covered in the charge question: how, now that we've concluded from
an ethics perspective that the research conducted under SCI-001 cannot
be used, to deal with the research here.  So I'm going to focus solely
on the ethical issues associated with WPC-001 first, and then we'll come
back to the question of, if we then conclude that the data can be used,
how to deal with the issue of the unethical conduct of the concurrent
study.

		And once again, I think that with respect to all of Mr. Carley's
comments in his nicely detailed review to quote Lois, ditto.

		And now I think that we can focus on just a couple of key ones.

		There were a number of consenting issues involving questions about
consent forms that were dated unusually.  And for the vast majority of
these, to me they're sort of one of the things that happens when you're
conducting these studies.  I don't think that in any review of research
that I have done that any investigator, no matter how ethical and no
matter how organized, has not made some little tiny mistake where they
didn't realize the subject walked off with both copies of the consent
form, or they performed one aspect of the investigation prior to having
them sign a consent.   And I think it's important to sort of ask
yourself is there a consistent pattern, and also what effect do those
violations have with respect to the rest of the participants, and also
the informed consent process itself.

		And so to me the only real serious one here, most of the other ones I
think have been nicely explained by Dr. Carroll, is simply the limb
measurements of the one participant several days before he signed the
consent document.

		Once again to quote myself, don't do it again.  But I don't think that
this rises to the level of such a significant deficiency to preclude the
use of these data.

		The real big issue here I think is the protocol deviation involving
the change in the exclusion criteria.  And we've touched upon lightly in
our discussion of SCI-001 the fact that this should have been reported
to the IRB before it was initiated for their review and approval.  But
also in response to questions that we had for Dr. Carroll this
afternoon, I'm very disappointed that this protocol deviation has not
been discussed with them since it has happened.  And I would have liked
to have seen what the IRB's response to it would have been.

		The question then becomes whether or not this serious protocol
deviation rises to the level where it either compromised the informed
consent process or where it put the subjects at risk.

		And I tend to think in this case that it probably did not increase the
risk to the subjects, so the real issue here is whether or not the
informed consent process, particularly because the informed consent
document still includes the clear exclusion criteria of no repellent use
in the 24 hours prior to field testing, whether or not we have a serious
compromise of the informed consent process.

		DR. SHARP:  Again, I would echo a lot of that.  I do think it's
important that we distinguish the two protocols, the one that we just
finished discussing and the one that's here.  Because in terms of the
major deviation having to do with the change of formulation of product,
that does not exist in this second study.  So that, obviously, is the
single most important difference between the two studies in question.

		The departure from the stated exclusion criteria, I think I agree
with Dr. Philpott that that is a major issue, and it's a major
unresolved issue.  And I think until Dr. Carroll submits that deviation
to the IRB and we get some indication from the IRB as to how they
envision this and what they see as necessary steps, if any, in terms of
remediation I'm not sure that we can really fully evaluate this
particular protocol.

		So for me I would like, again, at least to be cautious with
regard to making a strong judgment on this protocol in the absence of
having heard from that IRB.  Because, obviously, in terms of the system
of regulations that exist in the United States the authority with regard
to some of these judgments it really is at the local level, at the level
of the particular IRB that's charged with reviewing the study.

		So I don't know quite what to do with that in the absence of having
heard back from that IRB.

		CHAIR FISHER:  Sue?

		DR. FISH:  To quote some of my colleagues, ditto.  

		And I was just looking in the consent form, the approved form for this
study to see if I could tell whether or not subjects were told that they
would refrain for 24 hours before.  And I'm not finding it.  So I don't
think it compromised the informed consent process.  But the issues that
Rich raised and that Sean raised I completely agree with.

		CHAIR FISHER:  So it sounds like at least Richard and Sue -- I think
Richard -- it sounds like everybody's in agreement in the sense that if
in fact 24 hours was not in the informed consent --

		DR. PHILPOTT:  I did see it.

		CHAIR FISHER:  Okay.  

		DR. PHILPOTT:  It's there.

		CHAIR FISHER:  Oh, it is there. It is there.  Okay.  So that's
problematic because they were asked to do something once they were
already in the study that they had not consented to.  And I assume -- I
don't think we need the IRB to make a judgment about that.

		And then Richard raises an interesting point, and I guess maybe we
should just discuss this.  Is there's -- I'm not sure. It's an
interesting question.  It was not submitted to the IRB, it still hasn't
been submitted to the IRB even though the investigator has been well
aware for a couple of weeks that this was a violation based on feedback
from OPP.  And so on that basis it seems to me that we can make a
judgment about that.  So I'm not sure that in terms of what our
responsibility is that we have to wait to see what the IRB says.  I'm
just not sure of that.  Because I think we can make a determination that
it should have been sent to the IRB.

		Richard?

		DR. SHARP:  I guess the question I would have is if we made a
determination that it ought to have been sent there and that that's
sufficiently troubling for us to lead us to say that this is
substantially out of step with the normal ethical practices in this
context, does that sort of prevent us from revisiting that issue at
another meeting where perhaps after Dr. Carroll does go back to the IRB,
explores that issue with them and receives from them an indication that
they believe that this was not an ethically troubling event, can we then
revisit that or does that now preclude us from doing so?

		CHAIR FISHER:  Well, I guess the question is an IRB is important but
it doesn't dictate our judgments.  We have been in disagreement with
IRBs before.  So I think that it's important for us and also in terms of
sponsors and everybody else to know where we stand on issues in terms of
regulation.

		And so I would hesitate to make our judgments always dependent upon an
IRB.  Because I think, as many of us have noted, there have been
inefficient IRB evaluations.  We see IRBs that are approving informed
consents that have nothing to do with the study.  So, you know, I don't
want to go down that road, I guess.  So, you know, that was just my
general concern.

		So on that basis then so we have two things.  And I guess the
question, as everybody has raised, that we need to discuss is whether or
not this rises to the level of substantial noncompliance: that the
informed consent told them there'd be a 24 hour wait period and then
there was not.  They were asked to be in something the next day without
a renewed informed consent.  And the other was that given this change in
exclusion, not only was it not reported to the IRB before it was done,
but it still hasn't been reported to the IRB.

		So discussion?

		DR. PHILPOTT:  And I think if we as a Board are going to make a
recommendation about the deficiency, we need to ask ourselves
essentially what we have defined as criteria since the first meeting in
April 2006, which is risk and the informed consent process.

		I mean, it may be a little surprising given what we just talked about
maybe 20 minutes ago.  But I guess my feeling on this one is this is a
major violation.  However, do I think that changing that exclusion
criteria, and the fact that it wasn't noted in the document but we don't
know what was said to the participants, did that affect their
understanding of the risks and the benefits to their participation in the study?

		CHAIR FISHER:  Let me just ask it a different way to get your feeling
about this.

		DR. PHILPOTT:  Yes.

		CHAIR FISHER:  I think we're in agreement about the level of risk.

		DR. PHILPOTT:  Yes.

		CHAIR FISHER:  We have no evidence that it's more risky 24 hours
versus not 24 hours.  However if people were told they were going to
have a day in between --

		DR. PHILPOTT:  Yes.

		CHAIR FISHER:  -- and then told -- and I don't know what the pay
schedule was or whatever, but to me that's the issue.  I agree that it's
not risk in the sense of harm.  But I do think they agreed to have at
least a day in between.  And so they did not.

		MR. CARLEY:  No, I don't think so.

		CHAIR FISHER:  Oh, okay.

		MR. CARLEY:  The way this was presented in the informed consent was to
be eligible to participate in the study you mustn't have used a
repellent during the day before testing.

		CHAIR FISHER:  I understand that.  But if you weren't going to be
repeatedly tested?  I thought you couldn't be tested the next day.

		MR. CARLEY:  I think there is a plausible interpretation of that
language that says before you start working with us on this testing
under our supervision and control we don't want you to have been using
repellents outside our supervision and control.

		CHAIR FISHER:  I see.  But it didn't say that every other day you're
going to be in the study.  Okay.  

		MR. CARLEY:  Right.  It said nothing about days off between days in
the field.

		CHAIR FISHER:  Okay.  I got confused when you said 24.

		MR. CARLEY:  That plausible interpretation seems to me to be
consistent with what Scott Carroll told us he was thinking about when he
originally put this into his protocols.

		DR. PHILPOTT:  And that was sort of the interpretation that I was
working with when I thought about this.

		So my general feeling is that I don't think that this violation
affected the informed consent process to such a level that we cannot
recommend that the EPA use the data if it's scientifically valid.

		CHAIR FISHER:  Discussion?

		DR. SHARP:  I'd like to separate out two questions.  There's the issue
of his compliance with the IRB approved protocol, which I think we're
all in agreement that he was out of compliance with that. And then
there's the question of his compliance with 40 CFR 26 which requires
that he submit a protocol to the IRB and have it approved, and then
there's presumably other language in there.  And I think that in the
absence of looking at that specific language at that section of the
regulations, it would be difficult for me to make a hard call as to
whether he's in compliance with that regulation.  Again, as distinct
from his compliance with the terms of his protocol.

		DR. PHILPOTT:  Well, and to add another layer of complexity, the
regulations say "substantially compliant," if I recall.  So good enough
for government work.

		MR. CARLEY:  Let me clarify that a little bit further.  The
regulations address the idea of changes to approved research only in the
context that I put up on the wall during my presentation in saying that
the IRB has to have procedures to ensure that it doesn't happen; that
changes without approval don't happen.  

		And then as Dr. Fish pointed out, this IRB gets a signature from the
investigator that says I understand that I can't change anything without
your approval.

		That's kind of the points of leverage to decide what to do, how big a
deal this is.

		CHAIR FISHER:  I'm discomforted.  So I'll just say that.  It's just
taking two together, I don't -- you know, I conduct so many studies. 
And, by the way, I've never tested a subject without giving them
informed consent.  I have never had an assistant who has done that ever
in 28 years.  So I'm just saying maybe in other kinds of studies or
field studies it's harder or something like that.  But I have never ever
done that. I just wanted to point that out.

		But I am just discomforted by the combination, I guess, of these two
things.  And I'm discomforted by the fact that we have discussed this
issue with this particular investigator numerous times.  That we even
recommended an ethics course.  That I'm concerned:  What do we say next
time?  I mean, it just seems like we're constantly saying "Well, okay,
that was all right.  You know, you didn't know."  But this would be
something that would not be approvable in an IRB or, you know if it came
to us beforehand.

		So I have a certain level of discomfort.  And I also look to balancing
the data.  I know the regulations speak to, and I agree with it, that if
there are some ethical violations but the data is so important that it's
going to protect the public, I think that there's a weight to be given
to that.  But I don't see that here.  I don't see this data as
critically important.

		So it's just I'm not saying where we should go with it, but I am very
uncomfortable.

		Yes?

		DR. PHILPOTT:  We originally started this conversation and I guess I
was still thinking of it in terms of considering WPC-001 in
isolation.

		CHAIR FISHER:  Yes.

		DR. PHILPOTT:  And so now, you know, we have to address the issue of
okay now the interrelation with SCI-001.  And I share your discomfort
once we bring them back together.

		If I were to look at WPC-001 in isolation, that would be my
recommendation.  Now we've got some broader issues to talk about.

		CHAIR FISHER:  I just want to point out too, because I do this all the
time, anytime I make a change I e-mail the IRB immediately. Immediately
before I implement the change.  So I'm saying this is not a big deal to
inform your IRB, especially when it's expedited and it's minimal risk.

		Suzanne?

		DR. FITZPATRICK:  Well, I don't share the discomfort because I think
Dr. Carroll's interpretation of what that exclusion criteria meant was
use of a foreign repellent.  But the fact that once he realized that
other people interpreted it differently -- that yes, it could have been
interpreted in a different way, and he'd done this -- he didn't inform
his IRB, that's the part that bothers me.

		But we are auditing our studies and we do find problems with consents.
A lot of problems.  Troubling, it's troubling to us but I think it's
common.

		DR. PHILPOTT:  Yes, and that's what I was trying to imply.  No one's
perfect.

		CHAIR FISHER:  Except for me.  But then, again, I have a super charged
conscience because of this field.  

		Yes?

		DR. LEBOWITZ:  I wonder if we can just treat this one on its own merit
with the concerns and with the advice to EPA maybe that before they go
ahead with approval they have to get assurance from the IRB, et cetera. 
And come back to the issue of instruction to Dr. Carroll after we were
done with Dr. Carroll's studies, including the proposed studies.  So as
not to take the time much further in WPC-001 and be able to proceed, but
not to ignore the concern we have.  I mean, I think part of our mission
is also to advise, you know, to give better advice to the investigators.
But I think we still can finish this issue now and then come back to it
again when we deal with the proposed studies.

		MR. CARLEY:  We really do need clarification with respect to the
overlap across the two studies.  And I've asked Leshawna to put my slide
13 up on the wall, which is the one where you can see the spread. 
She'll have it there in a minute.

		What it comes down to is that there were only four subjects who participated
exclusively in WPC-001.  And there are only two data points at each site
that are not touched by the concerns about SCI-001.  And two data points
at each site isn't a whole lot to go on.

		So we've got to figure out how broadly that stain spreads.

		DR. PHILPOTT:  Right. But one of the key issues here, of course, is
that participants were individually consented for the two studies. So
that does, in a sense, create a wall between them in that regard.  So
even though only four participants were in WPC-001 that weren't also in
SCI-001, I think that we can consider their participation in the two
different studies separately, even though there is overlap as to when
they were conducted.  Because there have been many times when I've known
people who were involved in multiple research protocols that were being
conducted concurrently.

		MR. CARLEY:  With shared data?  Remember that all the limb
measurements or all but one of the limb measurements for WPC preceded
the first consent form for that study.

		CHAIR FISHER:  Sue?

		DR. FISH:  Going back to this in this current protocol about what the
consent form says about the 24 hours prior, the only thing I can find is
that in the consent form it says you must not have used repellents
within a day prior to the start of the study. And if the start of the
study is dosimetry, then the fact that they were dosed day after day
after day from a subject's point of view is completely consistent with
what they were told.

		I understand the enrollment criteria -- I mean the protocol.  But from
the consent form, as far as I can tell -- I'm still on the compromised consent process.

		DR. PHILPOTT:  Not everyone participated in the dosimetry, though. But
some--

		DR. FISH:  But even day one, I mean even if you just participated in
the field part of the study, if you didn't use before day one, then if
you were dosed on day one and day two and day three as a subject reading
this, I think it would be fine.  I would have assumed it were fine.

		CHAIR FISHER:  And let me just summarize, because I don't want to beat
a dead horse if everybody's on the same track.

		So it sounds like it's not clear that there was a violation of
consent.  It's just not clear.  Because given it could be interpreted
and Dr. Carroll thought it was interpreted in that way, I don't see that
it's a clear consent violation based on what Sue was describing.

		In terms of the overlap between -- the constant issue of the overlap
between 001 and the WP, that doesn't seem to be problematic on the basis
that they were independently consented for this study, whether or not
they were in that other study or not.  I believe that's the conclusion
of the ethics panel.

		And with respect to the exclusion criteria, I think as Suzanne pointed
out, Dr. Carroll did not think that that was increasing risk which was
how he interpreted informing the IRB.  So that aspect of the initial
misjudgment was not substantial.  The fact that he didn't report it once
he knew it, I don't know why that is.  But it doesn't seem to rise in
people's mind to the level of a substantial noncompliance.  

		So is that where we are, or am I mistaken? 

		DR. PARKIN:  I just have one other question for the ethicists to help
me remember.  John raised one thing on one of his slides that hasn't
been discussed yet. And I wondered if it was a concern or we have just
forgotten about it, or it was so minor that you'd not worry about it. 
But the point was stated that there was a change in role of the
experienced subjects to include service as assistants to the
investigators. And it was listed only under WPC-001, not SCI-001.  And I
can't recall exactly what that change was, and I wonder if somebody
could help me remember and we could decide whether that was significant
or not.

		MR. CARLEY:  The first thing to remember is that there was this
special class of pre-identified experienced subjects who were used as
candidates for the untreated controls.  From within that group, and I
think this applied equally to both studies.  I'm not absolutely certain
of that.  But from within that group of experienced subjects the
controls were asked to help with dosing the other subjects so that more
of them could be treated equally all at once kind of thing.  The
rationale that was offered was that this allowed more consistent
timing of the dosing.

		The concern that I had about the change in role is partly based on the
fact that the dosing by technicians, by skilled, trained technicians was
identified as a risk reduction, risk minimization step in earlier
discussions.  And while we're told that these people are experienced, for
all I know they participated in scores of these studies and they may be
very, very familiar with them and very highly skilled at applying
repellent, I noted that there was nothing in the consent document about
their helping treat other subjects.

		They had a special consent document for WPC, but there's nothing in
there about treating other subjects.  And this is a significant change
in role, to my way of thinking anyway.

		Is that clear?

		DR. PARKIN:  That was very helpful.  And I would then say, obviously,
there was nothing in the consent form telling anyone that they might
have a role of dosing one of the other subjects in the study. To me
having a subject dose another subject, to me that's a real serious
concern.  Even if they've been through other studies.

		CHAIR FISHER:  I don't see it as much, only because I've heard about
these studies for quite a while.  But it's very interesting because are
they still participants when they're dosing?  I mean, it raises a lot of
very kind of interesting and complex issues.  But the question of
whether-- I assume they're paid no matter how long they're there. And
the issue really rests on voluntariness and things like that.

		You're absolutely right. It should have been included in the informed
consent form.  But the issue becomes, as Sean put it, did it increase
risk and did it violate their consent.

		DR. PARKIN:  As somebody being dosed by another subject, I would have
had concerns that my risks may have been increased.

		DR. FISH:  Were they control subjects and working in the dosing at the
same time, John?

		MR. CARLEY:  That's my understanding.  But I can't be absolutely sure
of that.

		DR. FISH:  So might it have been possible that the control subjects
might have been exposed themselves to the repellent and there might be a
scientific compromise here?

		MR. CARLEY:  I assume they did not apply the repellent with their
lower legs.  I believe they were probably wearing gloves and using a
gloved finger to apply it, and that it would not have confounded the
control testing of their untreated leg.

		DR. FISH:  Contaminate?  Okay.  Okay.  Thank you.  

		And just to add, since we've been told previously they're all
people working in this area.  They might have been skilled technicians
that happened to also be in the study but weren't employees of Dr.
Carroll, employees of somebody?

		CHAIR FISHER:  Jan?

		DR. CHAMBERS:  This is not an invasive form of dosing.  Rebecca, this
is not an invasive form of dosing.  It's just applying lotion on the
skin so it seems pretty trivial.

		DR. PARKIN:  Yes.  I'm just looking at it from the viewpoint of
another participant.  If I was not informed in the informed
consent that somebody other than the staff or the PI was going to dose
me, and then I find out that my partner in the field is going to dose
me, I might make a different decision about whether I was willing to
participate or not.

		CHAIR FISHER:  Okay.  Well, I think that in principle is a very valid
concern.  I think at least for this context, the fact that it's rubbing
a little cream on the foot, or the leg, you know I think many of us feel
it doesn't rise to that at least.  But it gets back to the fact that we
would like to see these much -- you know, it doesn't rise to substantial
noncompliance, but it does raise all the problems that we've spoken
about.

		Have we answered all your questions, John?

		MR. CARLEY:  I'm unclear about the answer to this one.  Are you saying
that to have some subjects treat other subjects without saying that in
the consent documents for either the treated or untreated subjects is a
change that's within the discretion of the investigator and doesn't need
to be reported?

		CHAIR FISHER:  No. Within the context of this particular study where
it's rubbing cream on a leg.

		MR. CARLEY:  Isn't that the same argument that Dr. Carroll made that
said the switch to LipoDEET 3434 wasn't a big deal because it doesn't change
the risk profile?

		DR. PHILPOTT:  I think it's more along the lines of it is a violation.
The question is what level of seriousness does that deficiency rise to.
And I don't think it rises to the criteria that we've established
previously about risk and informed consent.

		MR. CARLEY:  The risk and informed consent criteria, which I agree the
Board did set early on, my memory is that those were in the context of
pre-rule studies where the significance of deficiencies had to be
assessed by the standard of clear and convincing evidence of a
significant deficiency. And here we have a different standard. We have
the substantial compliance standard with all of the provisions of K and
L, which it's a very different context for applying that standard.

		My view is that this change in role is at least as significant a
change in the terms of consent as the point about the violation of the
exclusion factor.  So I'm puzzled.

		DR. PARKIN:  And I would think that it's also in the same ballpark of
needing to be reported to the IRB.

		CHAIR FISHER:  William.  Bill and then Rich.

		MR. JORDAN:   At the risk of wading into waters over my head, what I
see as common to the change in the role of the untreated controls and
the exclusion criteria is that the protocol has described one particular
process, the IRB has reviewed it and blessed it, EPA and HSRB have
reviewed it and said go ahead.  The subjects have consented to it.  And
then the investigator says, yes, here's your consent process but I'd
like you to do something a little different.  And the dynamic of changing
that sort of agreed upon review process is what I find troubling.

		In the particulars it may not have altered the risk very much, but it
reflects a difference in the relationship between the participants and
the investigator that sort of departs from the IRB process. And the
review process is intended to ensure that the investigator doesn't do
something at odds with the best interests of the participants. And it's
undermined that relationship.

		CHAIR FISHER:  It gets harder and harder.  Thank you.

		Rich?

		DR. SHARP:  So I think your reasoning is right, Mr. Carley.  At least
I'm inclined to agree with that, that each of these things are, by and
large, on par with each other.  These are a series of relatively minor
compliance issues, all of which should have been reported to the IRB. 
But any one of those individually doesn't rise.

		I think where we are, sort of as a matter of consensus here, is that
none of them rise individually to a level where we would say that there
is substantial lack of compliance with the rule.

		Now the question that I think Dr. Fish was raising earlier, though,
was whether or not sort of collectively there are so many of these
things together that they sort of make us all feel a bit uncomfortable. 
But I think we're moving away from that point, unless I'm wrong here.

		CHAIR FISHER:  I'm seesawing.  I have to say, you know I don't see the
-- I would prefer -- you know we've been doing this for 18 months now or
two years?  How long have we been doing this?  April 2006.  We've sent
the same message over and over again, whether to this particular
respondent or all respondents who read what we say.  And in terms of the
integrity of the Board process, you know I think that the first six
months to a year we were sympathetic to the newness of this process and
saw ourselves in an educative mode.  But I don't know why we're
continuing to see ourselves in that mode.

		Jerry?

		DR. MENIKOFF:  I guess I would just say in terms of getting back to
what the specific deviations that we're talking about here, to what
extent are some of them perhaps just misunderstandings or confusion, for
example in terms of the 24 hour thing that it really was maybe just a
misunderstanding in terms of what they thought was going on.  In terms
of, for example, the technician applying the thing, I guess my viewpoint
which it sounds like similar to most of the other ethics people, is that
it's sort of a de minimis thing.  

		I mean, whoever applies some compound to your skin, it's not as if
that's a skilled technique that is going to dramatically alter things, particularly
dealing with the compounds we're dealing with here.  If it was somebody
doing a spinal tap on somebody else, that would be a big deal.  But it's
that plus the other thing -- now, I guess again you could add them
altogether.  But, by and large, I could see that there are extenuating
circumstances or whatever in terms of some of these things; there are
explanations for why they happen.

		I have a different take on the notion of changing the compound that is
being tested.  I have seen no explanation of why you wouldn't know,
particularly again in terms of the serious point of the history of this,
that you should let the IRB know what's going on here.  So that's why I
do see a difference there.

		CHAIR FISHER:  Gary?

		DR. CHADWICK:  Finally provoked.

		I mean basically what we're talking about here is science that's
equivalent to a high school project.  And, you know, we would anticipate
for things that are being submitted to the EPA for registration purposes
that they would be some higher standard.

		I think the swamp that we are wading into on this one is that it is
not within the investigator's call to determine what the effect on the
risk profile is when a change is made, regardless.

		I agree with absolutely everything that's said.  I mean, we're talking
about off woods DEET here, whatever it is.  I mean, it's no big deal.
People spray this stuff on, put it on their peanut butter sandwiches and
everything else.

		So, you know, I think there's an argument that we can make around this
table and should have been discussed at the IRB, but was not.  And I
think that's the problem here. And it does equate to the one that we
have talked about before.  It is the same ethical deficiency, is that a
change that was made, wasn't approved, wasn't sent to the IRB and it may
or may not have had a change in the risk profile.  But the IRB, who is
our ethical overseer on these in the United States, didn't get an
opportunity to exercise their appropriate oversight.

		So, you know, how small is small?  Well, I don't think that's our
issue to debate.

		DR. FITZPATRICK:  But in looking at the protocol and the consent form,
it doesn't say who will apply it to the person. So I don't--

		DR. CHADWICK:  Yes, I know.  But don't you think the IRB when they
approved this assumed that it wasn't being applied by "oh, you're
walking through the park, take my picture sort of thing."  That happens
all the time.  Okay.  

		DR. FITZPATRICK:  Well we don't know how technically skilled the
person was that applied the -- all I'm saying -- I don't see that as
really a violation of protocol if that protocol doesn't say who put it
on.

		CHAIR FISHER:  But it also places the subject into an investigative
role.

		So I think as Gary is framing the question, and I think as Bill and
John are concerned about, and I think as Jerry used the de minimis, I
mean you know whether you do this or that, I mean I think from a
practical perspective it's not that important in terms of risk.  But the
way Gary's talking about it, and I understand there's a consistency
here.  There's a lack of alerting the IRB to changes in the protocol.

		Here there are two changes.  One was the use of participants also as
researchers or whatever, and the second one was changing the exclusion
criteria.  And here there's a repetitive not informing the IRB, not even
informing them now when it's been I don't know how long since the study
has been conducted and the investigator has been aware.  And for
consistency, you know, we all agree that changing a compound is scarier.
But yet the consistency of the issue really is, as Gary has framed it,
does EPA want to communicate to the investigators or do we as a Board
want to communicate that it's within your purview and we'll decide when
it gets to us how dangerous it was, but is it in your purview not to
comply with the regulations in terms of informing your IRB when you make
a change.  And so I guess that's what we're grappling with, as I
understand Gary.

		Yes?

		DR. PARKIN:  I just wanted to explain a little bit on what Suzanne was
saying.

		You know, whenever there is a change in the research team, if you add
somebody onto the team or they go off the team, you have to report it. 
This IRB didn't get the opportunity to decide whether this particular
activity that these subjects were going to do to other subjects would
then constitute them being part of the research team.  So that's another
reason why it should have been reported.

		MR. CARLEY:  A point of clarification.  In section 9.3 of the protocol
entitled Study Material Administration the first sentence reads as
follows: Study materials will be administered to each subject by
Carroll-Loye technicians.

		DR. CHADWICK:  And if it wasn't in the protocol, the IRB forms
certainly should have asked who was performing the procedures.

		CHAIR FISHER:  Okay.  So the question before us, it seems to me so far
framed, is the major violation here is consistent with the violation
that we found in the other study which is a failure to report to the IRB
on not just one, but two changes, and whether or not that rises to
substantial noncompliance is I think the issue.  That we're not arguing
that putting cream on or not putting cream on is a high risk.  And we're
not even arguing, because we're not sure what the subjects actually
expected.  Although the informed consent says a technician is going to
be putting it on.  Because we also know there's dual roles.

		So it seems to me the singular issue that we know is true is that
there's a lack of compliance and we do not want investigators thinking
it's up to them to judge what they tell, especially since there's e-mail
and you could just e-mail somebody, "I'm doing this," and it's expedited
review.  It doesn't take forever.

		So, I'm shifting back and forth. But anyway, that's how I see where we
are right now.

		So is anybody in a different place than that right now or wants to
make a recommendation for where our consensus might lie?

		DR. PHILPOTT:  So that's what I'm asking, or was going to ask.  Are
you framing that as the question to the ethics reviewers now: given the
discussion, do we think we have reached the substantial noncompliance
bar?

		CHAIR FISHER:  Right. And also to look at consistency in our
judgments.

		Yes?

		DR. PHILPOTT:  I have been seesawing back and forth and back and
forth. I think we're all troubled by this.  And a lot of it is it's just
very hard to just separate this out from the previous discussion that we
had before the break.

		Framed that way and framed in the larger sense of the HSRB's role in
setting precedent, I am leaning towards saying no, don't use it.

		DR. MENIKOFF:  Okay.  I guess I haven't changed my mind on this.  I
mean, if the two things we're referring to are the change in who applies
it and the 24 hour rule, I think those are the two we're talking about,
right?

		On the 24 hour one my take on this, and maybe I'm just misinterpreting
it, is that there's either a miscommunication or whatever that it didn't
sound as if a change was taking place.  That the only thing that was
clear in the protocol was that they weren't applying it to --

		CHAIR FISHER:  Oh, Jerry, I don't think that's the one we're talking
about.  I think we realize, we agree with you.

		It's really lack of telling the IRB two things.  The IRB wasn't told
about the exclusion.

		DR. MENIKOFF:  The 24 hour?

		CHAIR FISHER:  Yes.  

		DR. MENIKOFF:  But that's what I'm saying, there wasn't anything to be
told.  I thought there was a notion that that wasn't changed.  That in
fact the investigator's understanding of the rule was that the only
people they were excluding were people who, before they enrolled in the
study, had applied the compound within 24 hours.  So that -- I could
see a legitimate argument here that he didn't understand that there was
anything he was changing that he was supposed to tell the IRB about. 
And given that take, which sounds pretty reasonable to me, I find it
hard to blame him for that.  That that isn't a real noncompliance issue.
It's a lack of communication or just a different viewpoint on two people
in terms of whether the study was changed on that point.

		It sounds perfectly reasonable that what we're talking about is the
only exclusion was applying the stuff during 24 hours before you
enrolled in the study. So I guess on that ground I don't think that was
an issue that he should be faulted for.  At least there's a plausible
interpretation that there was nothing for him to report on that because
he didn't make a change.

		So then we're left with one change in terms of my viewpoint.  I think
that's a relatively minor thing.  And in terms of consistency, if we're
saying there's substantial noncompliance every time something happens in
effectuating the study that differs from what was laid out in the
protocol, I suspect if we went back and reviewed every other study we
did before, we would be concluding none of the data from any of those
studies could be used.

		I draw a huge line between those sorts of things.  Again, if we're
back to just a technician who applies the cream or whatever, that's
minor, versus changing the actual test compound, which is the reason
this Committee exists and where we have spent the most time over the
past year and a half, or whatever it is; that is huge.

		So I do think the Committee has been making a consistent line between
these two things.  And I acknowledged you could fault him for, again,
not giving every change to an IRB.  But unless you're going to
individualize this and say it's a particular investigator who has a
history of doing X or Z, but as a specific matter if we're talking just
about the changes that were made in this study, I don't think we'd be
consistent to say that the particular change of having a fellow
participant apply the cream is such a huge deviation that that would
cause us to not allow the data to be used.  That we would be
inconsistent in applying a stricter standard than we have applied to
other protocols we reviewed.

		CHAIR FISHER:  Richard and then Steve and then Sean.

		But I also want to make a suggestion in terms of wording.  Because we
do need to move on.  And it seems like we're all kind of torn.  Some
people probably think, what are they talking about?  Why are they
spending so much time on this?  I understand that, too.  So I won't bore
you to death.

		So it seems to me perhaps what we might say in the report is that
there were, you know, violations of specific requirements but the Board
could not reach consensus regarding whether they went to the level of
substantial noncompliance, and leave it at that.  Because I think we
could talk about this all day and keep seesawing.

		So, you know, if that's where we can go, I think sometimes that's
where we have to go.  But let's try to -- rather than keep talking about
that, see whether we're comfortable with that language with articulating
exactly what we saw as wrong.

		I think Jerry -- If we do choose to go this way, that we could not
reach consensus on whether or not it was substantial noncompliance, I
think Jerry's framing of why we're being consistent is a good one. 
Because I think it is important that we're comfortable with consistency
in our judgment.

		So I'll ask Rich and Steve and whoever else it was. Oh, Sean.  For
feedback.

		DR. SHARP:  What's swaying me is the way in which you presented the
problem.  And I think this is helpful. To focus not on the changes that
were made, but on the failure to report the changes to the IRB,
following up on Gary's lead there.

		And for me I think the failure to report those series of deviations,
not one, not two, there's actually multiple deviations here.  That that
was an error of judgment on Dr. Carroll's part.  But if you were to look
closely at a lot of different protocols, you're going to see other errors
of judgment being made.  

		And so, again, just focusing narrowly on that failure to disclose the
change to the IRB for me doesn't rise to that level where I would want
to say that he's substantially not in compliance here.

		CHAIR FISHER:  Steve?

		DR. BRIMIJOIN:  Well, I just want to come down heavily on Jerry
Menikoff's side of the seesaw.  Yes.  And I mean I would like to argue
that we should try to seek consensus around some version of that idea
rather than abdicating.  

		And I don't feel that we are at risk of sending a mixed message, an
inconsistent one.  We've already told the investigator in the strongest
possible terms how we feel about substituting one substance for another,
however similar they might seem in his judgment. And here the issue is
whether the investigator even recognized these things as being
deviations that needed to be reported, rather than deliberately
withholding information from an IRB.

		CHAIR FISHER:  Okay.  So now if we're framing it as Steve and Jerry
have, is that what we're comfortable with in terms of the reasons why it
was not substantial noncompliance?

		DR. PHILPOTT:  I guess I'm going to be writing a minority report then.
 Because I'm still going to come down on the other side.  And part of it
is getting back to the exclusion criteria.  We have a response from Dr.
Carroll to Mr. Carley's review in which his interpretation is more the
interpretation that we have been running with, not with how you think
you could look at the protocol if you looked at the strict wording.

		And so it's fine splitting hairs, or fine splitting of hairs I should
more accurately say.  I guess I'm troubled by the historical pattern
here. And I'm very leery of setting a precedent where we're going to say
a certain level of doing this is okay.

		CHAIR FISHER:  Okay.  So I think we do -- we have a precedent about
this before when there's been a somewhat lack of consensus.  We have
said a majority thought this, however some thought that.  Okay.  And
that's how we will write it.  We will write it up in that manner.  And
using both Gary's and Jerry's kind of frame of reference, which is very
good.

		And we also have time.  Maybe we'll also be convinced about something
by the time we get to our teleconference.  Because that has happened
also where we've just withdrawn our disagreement.

		So that's where we are.  And we will go on to the next.  Good.  Okay. 


		MR. CARLEY:  WPC-001.

		CHAIR FISHER:  Right.  Kevin and John are going to speak so quickly.

		MR. CARLEY:  We're waiting to get the right presentation on the wall. 
There it is. That's the one.

		We're going to skip some of this.  Please go to slide 3.

		This protocol was submitted in July, the day after the consent forms
were signed by the untreated subjects for this last one.

		It proposes a field study of the mosquito repellent efficacy of three
materials, all of them containing picaridin as the sole active
ingredient.  Two of them are registered, one's a 7 percent pump spray,
one's a 15 percent pump spray.  And there's also an unregistered lotion
that's the subject of a pending application that is 15 percent
picaridin, but it's combined with a sunscreen.

		And this is an interesting case because the registrant has several
picaridin products.  And we told them that we needed data on their
picaridin products.  They came up with a proposal that said we'll test
the 7 and 15 percent pump sprays, and we want to extrapolate from that
data to our other products that have comparable concentrations and
similar formulations of ingredients; they're all alcohol-based spray
solutions or alcohol-based moist towelette wipe-on sort of things, and
we agreed to that scheme.

		So the two registered materials are surrogates, in a sense, for
several others.  I've forgotten exactly how many, eight or so I think.
Yes, there's at least six.  So that's the context in which this study is
being proposed.

		The lotion product, it's the only lotion.  It's the only one combined
with a sunscreen.  So we said no extrapolation for that one.  We want
you to test that one directly.

		The protocol is closely similar to the ones that we have just been
discussing.  And we think it's ripe for HSRB review.  And it needs to
come to the HSRB, and Kevin is now going to pick up with his science
review.

		MR. SWEENEY:  Slide 8.

		Okay.  The objectives are essentially the same as the studies we just
discussed.  I think John just articulated why the data are being asked
for.

		So go to the next slide.

		In terms of toxicities, the materials are pretty benign.  I think the
MOEs are quite high for the 7 and 15 percent sprays and the sunscreen
combination product as well.

		Next slide.

		The study design is quite similar to what we've discussed all day. 
Ten subjects per treatment.  

		Dosimetry phase, and it'll be shared with the next protocol as well. 
Aspiration training, et cetera.  And the study is not blind because
we've got two different types of formulations.

		Okay.  Sample size, the same thing.  Same criteria at this time.  And
I'll talk a little bit more about how it relates to what we just
discussed today when we get to the end here.

		Okay.  Next slide.

		Okay.  In terms of efficacy trials, it's exactly -- it's pretty much
the same thing.  I mean, you have untreated subjects monitoring mosquito
pressure, they're attended by various technicians that are aspirating
mosquitoes before they can probe or bite.  And everybody, both treated
and untreated subjects, is exposed one minute every 15 minutes.

		Again, we're going to determine a CPT value.

		The next slide.

		Field sites.  We don't know if they're the same yet, but we know that
they're either going to be in the Central Valley of California or in
Southern California.  It depends on seasonality.  Probably West Nile
Virus activity as well.

		Lively mosquito populations, again Aedes vexans, Aedes melanimon,
Aedes taeniorhynchus, Culex tarsalis and Culex pipiens.  And it could be
there could be other mosquito species in the habitat, and we've
mentioned some others today.

		Next slide. 

		End points are going to be exactly the same as what we discussed all
day long.  We're going to have a CPT value, a mean CPT value for each
site reported, probably a pooled one as well as far as the value itself.
And then also a Kaplan-Meier analysis to look at the survivorship.
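The Kaplan-Meier survivorship endpoint mentioned above can be sketched in a few lines.  This is a minimal illustration, not the study's analysis plan: the estimator function name and all CPT values and censoring flags below are hypothetical assumptions.

```python
# Minimal sketch of a Kaplan-Meier estimate of complete protection
# time (CPT).  All data below are hypothetical, not study results.

def kaplan_meier(times, events):
    """Return (time, survival) pairs.

    times:  minutes until first confirmed landing (or end of test)
    events: True if a landing was observed, False if censored
    """
    data = sorted(zip(times, events))
    n = len(data)
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        # count failures and censorings occurring at this time point
        failures = sum(1 for tt, e in data if tt == t and e)
        censored = sum(1 for tt, e in data if tt == t and not e)
        if failures:
            surv *= (at_risk - failures) / at_risk
            curve.append((t, surv))
        at_risk -= failures + censored
        while i < n and data[i][0] == t:
            i += 1
    return curve

# Hypothetical CPT data for ten treated subjects; three are censored
# (no confirmed landing before the session ended).
times  = [120, 150, 150, 180, 210, 240, 240, 270, 300, 300]
events = [True, True, True, True, True, True, False, True, False, False]

for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

The step function drops only at observed landings; censored subjects reduce the number at risk without dropping the curve, which is why censoring matters for pooled CPT summaries.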

		Next slide.

		The deficiency we've noted is that the lotion product, that is the
sunscreen 15 percent picaridin product, wasn't adequately characterized
in the protocol, although we have it fully characterized in the
application for registration pending with the agency at this time.

		Next slide.

		Okay.  In terms of the scientific standards, this is the appropriate
place, of course, to make a few comments.  In light of what we discussed
today, this protocol is written very similarly to what we had reviewed
previously last January.  And, of course, we had a study that was
reviewed today based on that protocol.  So I think some additional
comments that will probably be appropriate for us to make would be
related to randomization, pooling of data, et cetera, how we're going to
handle this data.  It's pretty much what we've talked about all day
long.  And what's most appropriate in terms of the experimental design,
how many days you could test, whether it should all be on the same day,
et cetera.  And those are things I think we'll carry into our discussion
with reference to this.

		So I think that's pretty much it.

		MR. CARLEY:  And you want to do clarifying questions, if any?

		CHAIR FISHER:  Yes.  Are there any clarifying questions in terms of
the science?

		Okay.  

		DR. CHAMBERS:  I have one.  The alcohol-based formulation, that's the
same in the pump spray and the towelette that they're planning to use?
Is that correct, the same?

		MR. SWEENEY:  Correct.

		DR. JOHNSON:  Yes. You said ten subjects.  Are they going to be the
same ten subjects on all three repellents, or it'll be ten new subjects
for each repellent?

		MR. CARLEY:  I'll say a few words in a moment about the subject
selection, the pool they come from.

		DR. JOHNSON:  Okay.  All right.

		MR. CARLEY:  I think I'll address that.

		The ethics assessment is going to be the whirlwind tour.  And for
those of you who are new to this exercise, I hope you can keep up.  If
you have a problem, just ask.

		I'm going to skip this one.

		Subject selection.  Subjects will be recruited from -- this is a
quotation from the protocol "subjects in previous Carroll-Loye repellent
efficacy tests have agreed and requested to be in our volunteer
database."

		Dr. Carroll has told us in the past about how many people are in that
database.  My memory is it's like 60 or 70, something like that.  Yes.
That's about the size of it.

		And depending on their availability at a particular time when he can
do this study, it may be the same ones that we were just talking about,
it may be different ones at the different sites.  It's hard to predict.
They will come from this small pool.

		Past studies have involved typically some overlap between sites.  Some
of the same subjects being done at both sites.  

		And as Kevin pointed out, this study is really more like SCI-001 than
WPC because it has the three materials.  So there's the potential for a
more complex design, but it's not addressed.  This is what it says about
the subject selection.

		Again, it has experienced subjects serving as untreated controls.  So
far as we can tell, no eligible subjects would come from populations
that would be vulnerable to coercion or undue influence to participate.

		The usual risks from repellent studies are present here.  There are
risks from the repellents themselves.  Risks from bites, potential bites
at least. And risks from arthropod-borne disease.  There are some other
risks associated with the mandatory pregnancy testing. And the protocol
is designed to minimize them effectively, but they're not addressed.
They're not discussed.  This is something I realized in reviewing this
protocol. It's also been true for the previous protocols from
Carroll-Loye, I just didn't notice it before.  But there it is.

		There are many actions taken to minimize risk.  Note that the protocol
calls for applying the repellent by skilled technicians. And this one
calls for post-test analysis of aspirated mosquitoes for pathogens and
promises to report any positive findings back to the subjects so that
they can be particularly alert to any symptoms of possible infection.

		The probability of harm is correctly characterized as extremely small.
We agree with that assessment.

		There are no direct benefits to the subjects.  And the primary direct
beneficiary would be the sponsor.  If the materials are proven effective
and remain on or enter the market, in the case of the new one, indirect
beneficiaries might include repellent users who like these better than
other repellents.

		There aren't any obvious opportunities to further reduce risk while
maintaining the scientific design.  The residual risks are very low and
almost certainly offset by expected social benefits.

		The ethics review, as for the two earlier studies, has been provided
by Independent Investigational Review Board, Incorporated, of Florida.
They reviewed and approved these protocols and consent materials in
mid-July.  They're independent of the sponsors, independent of the
investigators, and registered with OHRP.

		I'm sorry, Dr. Parkin, I didn't include their number, but at your
suggestion I will in the future.

		They are not currently accredited by the AAHRPP.

		They have told us in an e-mail communication that their procedures
haven't changed since they'd sent them to us before.  These procedures,
as you will recall, have been covered by a claim of confidentiality.

		No new information about the IRB qualifications, training,
certification, et cetera has been provided with this protocol either.

		Back one.

		The protocol describes two distinct processes for the pre-identified
untreated controls and for the plain old treated subjects.  There are
separate consent forms, as was the case for WPC-001.  The consent form,
like the protocol, says nothing about any risks of embarrassment or
other psychological risks related to the pregnancy testing.

		The respect for subjects is okay.  We saw nothing to indicate that
their privacy or confidentiality would be violated.

		Medical care, if needed, would be provided at no cost.

		The applicable standards here are in 40 CFR Part 26, subparts K and L.
There is still a glitch in it: one of the data collection forms is
missing.  There is no form attached to the protocol for recording the
primary efficacy data here.

		And approved product labels and the proposed label for the pending
product should be attached and used in the dosimetry phase as part of
the effort to get a typical consumer dose.  We ought to let people read
the label that's available to typical consumers, make sense of it, and
use that as a guide to how to apply it and how much to apply.

		The conclusions are that everything is okay except for those two
relatively minor points.  And that if further revised consistent with
those points, it will meet the applicable requirements.

		The charge questions are the familiar ones.  For the proposed research
described in protocol SPC-001 from Carroll-Loye Biological Research, if
it is revised as suggested in EPA's review, does the research appear
likely to generate scientifically reliable data useful for assessing the
efficacy of the test substances for repelling mosquitoes?  And does the
research appear to meet the applicable requirements of 40 CFR Part 26,
subparts K and L?

		CHAIR FISHER:  Thank you.

		Jan?  Oh, I'm sorry.  I should have asked -- 

		DR. PHILPOTT:  That's fine.  I just have a question, and I may be
thinking of a completely different IRB 8½ hours into a very long day.

		One of the independent IRBs -- not the independent IRB, but one of the
commercial IRBs that we have reviewed in the past -- there was concern
that we had not had the information to adequately review their
competence, essentially.  If I recall, there was an IRB that hadn't
submitted information about their membership and background and things
like that.

		CHAIR FISHER:  Is this a question about whether or not this is the
IRB?

		DR. PHILPOTT:  Well, and if this is the IRB because we don't see that
information, do we have it now?  Or was that EIRB?

		MR. CARLEY:  No. It was this IRB with respect to which you raised the
question in your comments on the WPC protocol.  In the ethics comments
about that protocol you said something like we don't -- we'd like to
know more about the qualifications and the accreditation status of IIRB.
And we haven't gotten any more information about those two points.

		I'm not certain at this point.  I, too, am suffering a certain amount
of smear in my brain.

		 I think the lists of members and their degrees and their role on the
Board and some of those things have been shared with you.  The question
that was raised in the earlier comment had specifically to do with their
training and the Board's accreditation.

		We have not gone back to the Board and asked them to please send us a
list of all the training that all of their members have had.  That would
be pretty extraordinary behavior.  We'd want to do it across the board. 
We haven't got there yet.

		CHAIR FISHER:  Jan?

		DR. CHAMBERS:  I don't really think --

		CHAIR FISHER:  Comments?  Okay. Jan?

		DR. CHAMBERS:  One more time.  I really don't think there's very much
for me to comment on this since it is so similar to the protocols we've
seen before.  

		I presume that all of the extended discussion earlier today will go
into EPA's comments to trim up that protocol.

		The only deficiency you all noticed was the information about the
formulation of the lotion product.  And I assume that that will be
fixed.  So that doesn't concern me at all.

		So I really have no concerns at all about this, except in the data
interpretation.  And this is kind of a new wrinkle in that three
products will be tested, and I do understand that.  But two of them will
be used to extrapolate to two others.

		And the 7 percent pump spray will be used to extrapolate to the 10
percent pump spray. That's conservative.  I think that's fine.  But it
will also be used to extrapolate to the 5.75 percent towelette, the same
sort of thing with the higher dose pump spray.

		And I'm not real comfortable with the interpretation that that's valid
if the towelette would not administer the same amount of the insect
repellent.  The dosimetry might show that you would apply more using a
towelette than you would with a pump spray.  So if the dosimetry shows
that the towelette formulation administers about the same amount as the
pump spray, then I am perfectly comfortable with the data being
extrapolated.  However, if the average person would apply less with the
towelette, then I think those dose response considerations, just like
the ones people talked about earlier with the DEET and everything, would
not allow you to extrapolate to the lower one.

		But that's using the data for the extrapolation a little bit later. 
As far as the way the protocol is laid out, I have no issues with that.
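Dr. Chambers's dose comparison reduces to simple arithmetic on the dosimetry results.  A minimal sketch, where the function names and all dosimetry numbers are hypothetical illustrations, not measured values from the protocol:

```python
# Hedged sketch: extrapolating the tested 7% pump spray's results to the
# untested 5.75% towelette is conservative only if the towelette delivers
# at least as much active ingredient per unit skin area.
# All numbers below are hypothetical.

def active_dose(amount_g_per_cm2, percent_ai):
    """Active-ingredient dose (g/cm2) from product amount and % a.i."""
    return amount_g_per_cm2 * percent_ai / 100.0

def extrapolation_conservative(surrogate_dose, target_dose):
    """True if the tested surrogate delivers no more active ingredient
    than the target product, so the surrogate's measured protection
    time does not overstate the target's."""
    return surrogate_dose <= target_dose

# Hypothetical dosimetry: grams of product applied per cm2 of skin.
spray_dose = active_dose(1.0e-3, 7.0)       # tested 7% pump spray
towelette_dose = active_dose(1.5e-3, 5.75)  # towelette, heavier application

print(extrapolation_conservative(spray_dose, towelette_dose))
```

With these illustrative numbers the heavier towelette application more than offsets its lower concentration; if dosimetry instead showed a lighter towelette application, the check would fail and, on Dr. Chambers's reasoning, the extrapolation would not be defensible.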

		CHAIR FISHER:  Thank you.  

		Kannan?

		DR. KRISHNAN:  This is very similar to the SCI-001 I think, pretty
much.

		I have nothing to add except to say that I think I look forward to
some of the comments and discussions about the design and results, what
it statistically shows.  Because it very much relates to those issues
that we alluded to in January.

		CHAIR FISHER:  Suzanne?

		DR. FITZPATRICK:  I don't have anything additional to add.  I'm just
wondering if EPA has the ability to request that before this lab starts
to do another study they all get trained.  I don't know if that can be
one of your recommendations or not.

		MR. CARLEY:  We could certainly recommend that.

		CHAIR FISHER:  Science and ethics?

		DR. FITZPATRICK:  Compliance with human subject regulations.

		CHAIR FISHER:  Didn't we ask for that before?

		MR. CARLEY:  Yes.  And as Dr. Carroll pointed out to you earlier
today, after that recommendation he took the training that you
recommended, but he did not find anything that directly addressed this
question of --

		CHAIR FISHER:  Or remembered, anyway.

		MR. CARLEY:  Yes.

		CHAIR FISHER:  Okay.  Dallas?

		DR. JOHNSON:  Yes.  Looking at page 14 of the protocol there's a
listing of subjects, and subjects one through ten are assigned the
lotion, 11 through 20 are assigned the 7 percent pump and 21 through 30
the 15 percent pump.  And then there's two subjects, I guess, who are
untreated.  So I take that to mean that there's going to be 32
completely different individuals involved in this study.

		And I currently serve as a consultant to the Army Research Office
Human Research and Engineering Directorate in which I review protocols. 
And I've reviewed, I don't know, 30 protocols a year over the last four
years.  And in the process of doing that I've reviewed them from a
statistical point of view and an experimental design point of view. And
in the process of doing that I've tried to increase the level of
sophistication and the level of statistical expertise among all the
researchers that I consult with, or work with, or provide advice to. 
And this would not be acceptable.  If I were reviewing this for them, I
would not accept this protocol.

		I would want a listing of what subjects were going to be given what
product on which days.  I would like to know exactly how the process is
going to go on. And if you do that, then you should avoid the kinds of
things that you saw in the other protocols that we just currently
reviewed.

		And so I think -- if I'm going to serve on this Board for a long time,
at least I think I've been appointed for a three year term until I get
kicked off.  But I think in the process of doing that we have to
increase the level of sophistication that goes into these protocols with
respect to the experimental design.
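The kind of pre-specified listing Dr. Johnson asks for -- which subject gets which product on which day -- can be written down before the study starts.  A minimal sketch of a rotation-balanced, Latin-square-style schedule; the product labels, subject IDs, and the three-day crossover structure are illustrative assumptions, not the protocol's actual design:

```python
# Hedged sketch: pre-specify the subject/product/day assignment so the
# IRB and reviewers see the full design up front.  Labels are assumed.

import random

PRODUCTS = ["lotion_15", "spray_7", "spray_15"]  # hypothetical labels

def latin_square_schedule(subjects, seed=0):
    """Give each subject all three products over three days, rotating a
    randomized base order so each product appears equally often per day."""
    rng = random.Random(seed)  # fixed seed makes the listing reproducible
    base = PRODUCTS[:]
    rng.shuffle(base)
    schedule = {}
    for i, subj in enumerate(subjects):
        k = i % len(base)              # rotate the starting position
        schedule[subj] = base[k:] + base[:k]
    return schedule

subjects = [f"S{n:02d}" for n in range(1, 31)]
sched = latin_square_schedule(subjects)
# sched["S01"] lists that subject's product order for days 1 through 3.
```

With 30 subjects and three rotations, each product lands on each test day exactly ten times, and deviations in the field can then be logged against a fixed plan rather than improvised, which is the structure Dr. Johnson describes in Dr. Gupta's field experiments.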

		CHAIR FISHER:  So let me ask you a question.  I'm sorry.  I just
wanted to ask a procedural question.  Because I think what you're saying
is that in some sense, without this detail, it may be premature for us
to make a judgment as to the extent to which this could be carried out
in a way that could be analyzed.

		In addition, I don't see our role as designing this study.

		So if we thought it was premature and wanted to see it again in terms
of a design, are the elements that we talked about for these other
protocols, is that sufficient information to know what needs to be in
there?

		DR. JOHNSON:  I've not seen the previous protocol, so I don't know
whether the level --

		CHAIR FISHER:  No. I mean what we talked about today.

		DR. JOHNSON:  I'm not sure I understand the question. I'm sorry.

		CHAIR FISHER:  Okay.  Sue, did you want to --

		DR. FISH:  Yes.  And I think this is what you're getting at.  And so
I'm directing my question to Dr. Johnson and Dr. Kim.  But from what I
heard this morning from our consultants and the tremendous variability
of field study situations, the tremendous number of factors, et cetera,
if we're thinking about improving the quality of the science, wouldn't a
crossover design deal with some of that variability in a --

		DR. JOHNSON:  Right.  And I --

		DR. FISH:  So is that what you're asking, Celia?  Did we learn this
morning things that would help improve the science?  Is that what your
question was?

		CHAIR FISHER:  I'm thinking that one option we have is to say it's
premature for us to look at this because we really don't have any idea
what they're doing with the subjects, we don't know if it's a cross over
or not.  And I guess my question is -- at the same time I don't think
we're here for you to design this particular study.  And so the question
is, is there any other resource other than what we were speaking about
today and the issues raised about pooling and about cross over, and
about Latin square design?  If anybody read the transcript, would that
be sufficiently informative as to the kind of things we expect to see in
addition to what we were told by the consultants?  Right?  That's what I
was asking.

		DR. JOHNSON:  Yes.  My point is that I don't care whether it's a cross
over design.  I mean, I would be glad to approve one that's a cross over
design.

		I don't care whether each subject is only evaluated once or you have a
parallel line study or a parallel treatment study.

		And I'm happy with either one of those, but it needs to be stated in
the protocol.

		CHAIR FISHER:  But we're in a -- not even going there.  I mean, are we
at a state where we're saying it's premature for us to see this because
there is no -- to be consistent if we approved it, we don't know what
we're approving.

		DR. JOHNSON:  If we approve, we could be in the same problem a year
from now.

		CHAIR FISHER:  The same problem.  I mean, if that's what you're
saying, it seems to me the conclusion is this is premature.  Something
has to come back with a much more sophisticated explanation of subjects,
design, how they're randomized, whether it's crossed, whether it's not
crossed.

		Jan?

		DR. CHAMBERS:  If this were a lab study, I think that would be
perfectly feasible.  But what concerns me about that is that this is a
field study, and don't they have to have a little flexibility for
changing environmental conditions and being able to make modifications? 
And if it's too detailed, then it's just going to be a whole series of
deviations that are going to have to be dealt with, I think.  Right?

		DR. JOHNSON:  Well, I don't think so.  I think if I listened to what
Dr. Gupta was saying about the kinds of experiments that he runs, it
seems like he does field experiments, and I think he does them in a well
designed way, from what I can tell.  And where he may have some deviations from that, may have some missing data that shows up from time to time, there's still that basic structure that underlies the process he's going to go through.  And if he's not able to complete all the issues that are involved with that, then you deal with that at the end.  But there's still this plan for what you're going to do.

		And I think I'm probably saying things that sound like Alicia might have said last time, and sort of in agreement with what I think she probably said.  And I just think -- I realize it's a field experiment, and most of the Army studies that we deal with are all field experiments.  And --

		CHAIR FISHER:  Let me place this into whether we're being consistent.

		As I understand how we evaluated the last study, we said that because
of all these design problems it could not be used in terms of efficacy
or comparison of products, right?  And at minimum it could be used in
terms of some kind of estimate of design.

		So it sounds like we're being consistent, not inconsistent in saying
-- because it sounds like the purpose of this study is actually to
compare the products.

		MR. CARLEY:  No.

		CHAIR FISHER:  It's not?  I'm sorry.

		MR. CARLEY:  The purpose of this study is to get a value for complete
protection time for each of the three substances.  There are no
comparisons whatsoever between products or between products and controls
proposed in his protocol.

		CHAIR FISHER:  Okay.  

		DR. PARKIN:  Celia?

		CHAIR FISHER:  Yes.

		DR. PARKIN:  To respond to the point that Jan brought up.  One way
that the changes in the field can be dealt with is there can be an
understanding up front between the investigator and the IRB as to what
the most significant factors are that may affect risk.  And have a plan
laid out that if X happens and you can't do what is your plan A, you've
got a plan B that's already approved by the IRB.  So you don't have to
go and keep asking for modifications the day of.  You have a laid out
plan that says here's the ideal of what we're going to do. If for some
reason that doesn't work, this is the next step we would take or this is
the next step we would take.

		So there are ways that you can deal with anticipating field changes
that would be significant enough that would change what you would have
to do in the field and get prior approval for that sequence of decision
making.

		CHAIR FISHER:  So I'd like from the science people to let me know
where we're at.  I don't want to be in a situation in which we approve
this protocol, for whatever reason, and it comes back to us and we say
can't be interpreted but because we approved it, it's going to go
forward. 

		So we've got to decide what it can actually demonstrate and what it
can't at this point given the information that we have so that we can be
absolutely clear.

		DR. JOHNSON:  Okay.  Well if the idea is to have three different
experiments and you're going to make an inference of which one, then in
some sense there could be three different protocols; one for each
product.

		And if you're not going to try to compare them, then why worry about
which subjects get randomized to which product?  In that case, you could
say --

		CHAIR FISHER:  That's where I got confused.

		DR. JOHNSON:  Yes. You can do experiment one on day one, do experiment
two on day two, do experiment three on day three.  And if that was
written in the protocol, I would have no problem with that.

		CHAIR FISHER:  That's  -- because I didn't understand why they
randomized.

		Mike --

		MR. CARLEY:  Could you do them all on one day?  Could you do part of
the ten for each substance on the same day?  Like, say, 15 people on one
day and another 15 people on the other day and make three tens out of
that?  Or is the point--

		DR. JOHNSON:  Well, you'd have to do that as long as it was written up
that that's--

		MR. CARLEY:  Yes.  Yes.  I think we can cope with the idea that the
randomization, the allocation of subjects to treatments needs to be
fully specified before the protocol goes forward.  And I think that's
pretty much what you're saying comes down to, isn't it, Dr. Johnson?

		CHAIR FISHER:  Hence we're not approving it.

		DR. JOHNSON:  Right.

		MR. CARLEY:  Well-- well -- okay.

		CHAIR FISHER:  I am not comfortable saying only -- the reason I'm not
comfortable is we'll be in the same situation we were just in.  We're
saying -- you know, we recommend that you do this.  Then we have no clue
as to what's being done. Then it comes back and it's a false premise and
then we're in the same boat.  So I'm -- you know, because what you
suggested is what you're suggesting.  We don't know what the sponsor
himself is going to decide to do. And we don't know if it's going to be
a repeated measures design.  We don't know if it's a between-subjects design, a crossover -- I mean, we don't know what it is.

		And I'm just reluctant if we're going to come back and say -- now if
in fact we say it doesn't matter, ten subjects in each cell is enough. 
We don't care if your randomizing or not, pooling or whatever.  For the
limited amount of information you're going to get from this in terms of
how many hours it takes for a mosquito to have a confirmed bite, or
whatever we're talking about here, you know then that's okay.  But
what's the outcome and how confident is everybody going to be and happy
when it comes back to us?

		DR. LEBOWITZ:  Yes.  We've spent the whole day talking basically about the design, the science and the ethics of these studies.  It is not obvious that we have that in the current proposed study or studies, or necessarily in the ethics.

		Under other circumstances I too would say, and the groups that I work
with, study sections, et cetera, would say come back to us in three
months with a real design and we will evaluate the design at that point.
We've given you a whole day of lessons.  You've heard all the problems, all the objections, all the points, all the critical issues that need to be addressed scientifically, so come back to us with what you propose to do so that we can evaluate it and make recommendations to EPA as to whether the proposed study might be sufficiently scientifically sound to do.  But based on what we've seen so far today and the problems we had, and all the discussion we had about those protocols prior to this proposal, I would say that we're not prepared to evaluate the protocol or to say that the proposed study would provide that or that it was scientifically sound.

		CHAIR FISHER:  Any other comments?  Okay.  That's our conclusion.

		Should we go into the ethics?  

		MR. CARLEY:  No.

		CHAIR FISHER:  No. Okay.  

		MR. CARLEY:  If there are ethical issues, raise them now, so it'll help next time --

		CHAIR FISHER:  Yes, that would be helpful.  IF there are ethical
issues, that would be helpful. Please.  Yes.

		DR. PHILPOTT:  Mr. Carley's evaluation is spot on.  

		In 30 seconds or less, three risks; the products themselves, the
bites, arthropod-borne illnesses, the products themselves, exclusion
criteria, already used product and clear stopping rules.  Bites
themselves, mosquito bites, training the people to aspirate the
mosquitoes.  The bites themselves can be treated with over-the-counter
steroidal cream.  And, you know, arthropod-borne illnesses.

		Right.  But the bites themselves are a risk to participants.  But
they're being trained to remove them before they bite, hopefully.  And
they're working in pairs to minimize that.  And if they do get bitten,
you got a steroidal cream and they're excluding people with a known
reaction, severe reaction to the bites.

		Arthropod-borne illnesses.  They've got clear criteria for
establishing using sentinel flocks and vector-borne surveillance to
choose the time and area they're doing.  And they're also doing the
molecular screening of the mosquitoes that they capture.  

		So I think in overall the risks have been minimized.  You have some
risks regarding confidentiality, which we've discussed previously.  And
we've got an alternate subject approach.  

		They're excluding children.  They're excluding pregnant women. 
Over-the-counter pregnancy tests. Mr. Carley, once again, is spot on in
that some of those risks should be mentioned.  But I think overall the
risks have been minimized as best they can.  And the benefits to society
in coming up with new formulations of insect repellents outweigh the
minimal risks.

		CHAIR FISHER:  Okay.  I didn't hear a word you said, Sean.  But I'm
sure it was critically important.

		DR. SHARP:  I'm listed as a secondary on here.  And I think it's
important since we'll be -- Mr. Carley, I should say, will be
communicating a lot of messages back to the investigator with regard to
the series of protocols here.  And that is, with regard to the ones we've
been reviewing and with regard to the two that we're looking at still
this afternoon, we really haven't been faulting the investigator for a
lot of the content of the protocol's design and the content of the
consent document.  The criticisms have been almost exclusively focused
on the communications between him and the IRB.

		And so, again, when we look at the content of this protocol before it's been done, I don't think that there are a lot of difficulties that I see here.  But just to, again, reinforce that point:  where the problems lay in our earlier discussions was in the communication that existed between the investigator and the IRB, particularly with deviations.

		CHAIR FISHER:  Thank you.

		And who is our third?  Sue.  Okay.

		DR. FISH:  The only other ethical message I think is that there is still some outstanding information, I believe, on IIRB that we've asked for that hasn't come forward in previous protocols.  And maybe we could ask for it once again.

		Thank you.  

		CHAIR FISHER:  Okay.  Okay.  All right.  On to the next.

		Actually, we're going to get done.  Onto the next protocol.  

		MR. CARLEY:  Is it still Thursday?

		CHAIR FISHER:  We have an hour.  We have an hour in our schedule, and
I think we'll be done.

		MR. CARLEY:  You all have been talking longer than I have.

		CHAIR FISHER:  No. Talk fast. Talk fast.

		MR. CARLEY:  I'm doing my best for you.

		Next one Leshawna is SPC-002.

		We have to shift gears here. We're now going to be talking about a
very different kind of study.  This is a laboratory test of tick
repellency for the same three materials that we just looked at.  So
what's constant across these two protocols is the test materials and
some modules of the protocol. But the overall design of what we're doing
is very, very different because of the very different methods
appropriate for testing ticks.

		This slide is in error.  This first bullet should say SPC-002 is
closely similar to other Carroll-Loye protocols for tick repellency
studies that you've seen before.  It's complete.  We didn't have to get
supplements.  We noted a few deficiencies, but we think it's ripe for
your review.

		And under the rule it requires HSRB review.  So I now turn it over to Kevin to talk about the scientific design.

		MR. SWEENEY:  Okay.  Again, the objective of course is an efficacy
test and this time it will be tick repellency in the laboratory.  

		Okay.  In terms of toxicity, the test materials are exactly the same. 
So you can go to slide nine.

		And just to follow up on the one question that was raised by Dr. Chambers.  In terms of the percentage of active ingredient that's in the product, I guess the little nuance here is when you have these towelette products, the weight of the towelette itself is incorporated into the calculation of the mass of the product, if you will.  So that's why the amount of picaridin looks like it's lower, but in reality the alcohol solution going on is pretty much exactly the same.

		Okay.  In terms of the study design, again we're sharing the dosimetry phase with SPC-001.  The standard dose is converted to subject-specific doses, and the subject-specific doses are applied by a technician.

		And just to make mention here, the mean dose in these six trials is calculated for each subject.

		Okay.  That'll be it.  Next slide.

		In terms of the ticks -- and this is where the difference is -- I'll go through this in a little more detail now.

		These are lab-reared and pathogen-free deer ticks, Ixodes scapularis, and the American dog tick, Dermacentor variabilis.  They're both going to be used in this study.  There are going to be two different species, and both are known to be disease vectors.

		Subjects are trained in the lab to handle and observe the ticks and to
remove them before they can bury or bite.  And I'll just say that it
takes -- the adult ticks or a lot of the ticks that are going to be used
in this study, they don't bite immediately like a mosquito.  It takes
some time for a tick to quest and bite. So you have a lot more lead time
here when you have ticks on the skin.

		After a single use, each tick is destroyed.  And before use with repellents in a trial, each tick must demonstrate normal questing behavior.  And I'll discuss that here in slide 11, qualifying ticks.

		Now each subject here serves as his or her own control, verifying attractiveness to each tick before using it in a repellency trial.  And so the subject places the hand to be treated on a laboratory bench and holds the arm upright.

		Every 15 minutes a fresh tick is placed on a mark near the subject
wrist.  Normal ticks will move upwards seeking a site to bury and bite. 
Okay.  So generally they have a behavior where they want to go up.

		Ticks that move at least three centimeters towards the elbow on a subject's untreated arm within three minutes qualify for repellency testing on the treated arm.  So we have one untreated arm that qualifies the ticks, and one treated arm that's actually being used in the test.

		Okay.  In terms of repellency testing.  Again, we have intervals every 15 minutes.  A newly qualified tick gets placed on the mark three centimeters below the treated area on the subject's wrist.  So you've got a line across the wrist; three centimeters below that is where the tick's going to be placed.  The arm is kept up.  And basically the evaluation of repellency is to see whether the tick crosses at least three centimeters into the treated area within three minutes; if it does, it's scored as a crossing.

		Again, tick behavior is different than mosquito behavior.  I mean,
ticks might cross to the edge of the treated zone, and in this case the
repellency is a lot more to do with contact.  It takes a little bit of
time before they actually are repelled.

		So a crossing followed by another crossing within either of the subsequent two exposure periods is considered a confirmed crossing.  And this is similar to what we've discussed before.

		Okay.  Variables and end points.  Again, the measured variables will be very similar to what we discussed before:  weighed test materials, delivered dose dosimetry, the subject's limb, lotion and gauze dosimetry.  The questing behavior of the tick is something different now.  And the response of each qualified tick to repellent, and the time to all crossings.

		And the duration of efficacy is going to be measured as complete protection time for each subject.  And it'll be the time from the treatment to the first confirmed crossing into the treated area.

		And the mean time to first confirmed crossing across all subjects, with standard deviation and 95 percent confidence interval, will be calculated for each test material.  And, again, we'll look at Kaplan-Meier survival analysis for the median time to first confirmed crossing.

		Next slide.

		In terms of the sample size -- I'm sorry. I'm on slide 14.  Okay.

		Ten subjects treated with each formulation will participate in the repellency trial.  Again, the sample size and recommendations are pretty much the same as we discussed before.

		In terms of deficiencies that were noted in my review.  The composition of the lotion product, which is the sunscreen-picaridin combo, wasn't correctly characterized, and as we said before, we want that fully described in the application.

		One other point I brought up is about the ticks themselves, and this really only applies to the American dog tick.  Dr. Carroll has sent us a response, but I haven't had a chance to look it over.  But essentially the only concern we had was what the source of the ticks was, whether they were all from one colony that was reared, or whether they were known to be pathogen-free for a sufficient amount of time.  Because some of the pathogens they can transmit will pass from one generation to the next.

		And in terms of risks to subjects, tick bites in general, they're very low.  There's the risk of tick-borne disease such as Rocky Mountain spotted fever should a subject be bitten.  And then what action would be taken in the unlikely event of a tick bite really isn't characterized.

		Compliance with scientific standards.  To sum it up, I think that
generally, and I think the Board's already reviewed at least one tick
study and this is quite similar.  Provided that Dr. Carroll were to
address those deficiencies we noted, I feel that this would yield
scientifically reliable data.

		MR. CARLEY:  Probably subject to some of the same reservations about
the adequacy of the description of allocation of subjects to treatments
that we've talked about before.

		And my ethics assessment, once again I'm going to skip the value to society.

		The subject section is going to be essentially the same as it was for the field tests.

		The same pool of candidates.

		The qualitative risks in this case are a little different.  There are
similarly risks from the repellents themselves.  There is a potential
risk of tick bites. There is a -- I'm not sure exactly what to call it.
Negligible -- I'm uncomfortable with zero.  But there's an incredibly
low, vanishingly low risk of disease from a tick bite.  And the risk
from the tick bite itself, as Kevin said, it takes several minutes for a
tick to pick a site and bury and bite.  So the chance of a bite is
extremely low as well.

		And as with the earlier protocol, there's no discussion about the
potential risk of embarrassment from the results of the pregnancy test. 


		And I would note in passing, Dr. Philpott, neither the last one nor
this is one that uses alternate subjects as a way of dealing with that. 
That was the other investigator.

		The risks have been minimized from the test materials by basically
watching things very carefully and applying the repellent by
technicians.  The risk from tick bites are minimized by excluding phobic
candidates and training them to handle the ticks and to remove them
before they can bury or bite.  And the risks of disease are minimized by
using lab raised ticks, which with the exception that Kevin mentioned,
are clearly pathogen-free.  

		There's also a mention of undefined measures to make sure that ticks
are removed before they have an opportunity to bury in the skin.  Those
measures are not themselves specified, nor is the assignment of
responsibility to take them.  And that is a point that could be done
better.

		The probability of harm is vanishingly small.

		Benefits are similar to other studies of this type.  No direct benefit to subjects, primary benefit to the sponsor, and potential benefit to repellent users.

		Similar summary assessment:  the risk-benefit balance is like the last one.  Same specifications with respect to the independent ethics review, same IRB, same status.

		The protocol description of the recruiting and consent processes is
complete and satisfactory.  The consent form is weak in its discussion
of risks.  It is one thing to say that there is a very low risk of tick
bites, it's another to say nothing at all about tick bites.  And in this
case this version of the consent document says nothing at all.  

		And as before, there is no mention of the possibility of embarrassment
or surprise in the face of the results of the pregnancy test.

		And the consent form from which I drew that earlier quotation about these measures to ensure ticks are removed before they can bury -- it's in there in the context of reassuring the subjects that they're really not going to get tick bites.  But it doesn't say who is going to take what measures, and I'd like to see that expanded.

		There is no distinction between treated and untreated subjects in this
protocol because every subject serves as their own control with that
pre-qualification step that Kevin described.

		No problems here with respect for subjects.  The usual standards
apply.  

		Here are the deficiencies that I noted.  Again, there is no data collection form provided to record the actual efficacy test data.

		Again, it would be helpful to have product labels to guide the
subjects when they are being expected to apply a typical consumer dose.

		The risks of tick bites and exposure to tick-borne disease, which are
mentioned in the protocol, should also be mentioned in the consent form
along with a statement that they're very, very low probability risks.

		And the measures, as I said a couple of times, to ensure that ticks don't bury and bite, and how they would be implemented, need to be explained.

		It meets the completeness standards, it appears to meet the various decision rules.  And if it's further revised to correct the remaining deficiencies, I think it will meet the applicable requirements of subparts K and L.

		CHAIR FISHER:  Sean?

		DR. PHILPOTT:  And actually a point of clarification on your point of
clarification.

		In 001 it's section 9.1.8 and in 002 it's section 9.1.7 about the enrollment of alternate subjects, three of them, in order to protect individual privacy.

		MR. CARLEY:  I beg your pardon.  I don't know what got into me, Mr.
Philpott.

		DR. FISH:  I'm not sure if this is a question for Mr. Sweeney or Mr.
Carley.  But the enrollment criteria, the upper end of the enrollment
age is 55.  And I know that in the mosquito studies it has been said
that West Nile Virus is a greater risk for people over 55 should it be contracted.  And that's the reason for the upper limit.

		In tick studies is there any -- especially in the laboratory, is there
any reason to have an upper limit?  It relates to the justice principle.

		MR. CARLEY:  Yes.  No.  In a lab study with captive, lab-bred, pathogen-free populations, that concern about heightened susceptibility to certain pathogens above a certain age doesn't come into play.  It's worth bearing in mind that as a practical matter, from what Dr. Carroll has told us about the age distribution in his pool, this isn't going to make any difference.

		CHAIR FISHER:  But these are pathogen-free ticks.

		DR. FISH:  Right.

		MR. CARLEY:  Yes. Yes. Lab studies with captive populations that are
pathogen-free don't raise a concern that would lead us to cap the age
range just in general. It doesn't matter whether they're mosquitoes or
stable flies or ticks, or anything.

		CHAIR FISHER:  Okay.  Jan?

		DR. CHAMBERS:  This is very similar, as you noted --

		CHAIR FISHER:  Wait.  I keep forgetting public comments.  Anything,
Dr. Carroll?

		DR. CARROLL:  No.

		CHAIR FISHER:  Okay.  Jan?

		DR. CHAMBERS:  One more time.  

		As noted, this is very, very similar to a couple of protocols we saw
earlier and so it really doesn't deviate terribly from those.

		I think the two deficiencies that were noted with respect to the
science I concur with, just more information about the lotion product
and assurance that the ticks are pathogen-free.

		Thank you, Kevin, for clarifying the percent on the products and everything.  However, I still have the same concern, because applying from a towelette and using a pump spray could yield a different amount of active ingredient on the skin.  And so this doesn't really alter this protocol at all, but it could alter the interpretation as to whether or not these data sets are applicable to the towelette formulation or not.  So that's really just kind of an after-the-fact caveat.  But I have no other concerns.

		I assume that since this is a lab study, it will be much more
controllable than the field studies we've been reviewing up until now. 
So certainly the design that the statisticians have been talking about
should be implemented much, much more easily with this study.

		CHAIR FISHER:  Jan, can I ask you a question?  In terms of the problem with extrapolation for the towelette, I understand that you're saying that the dosimetry doesn't show that it's the same and the extrapolation would be a problem.  How important is extrapolation to this study, or is it not an important part?  If it can't be extrapolated, what are the implications when we see it?

		DR. CHAMBERS:  None for us, just for EPA.  If the data comes in and
it's to be extrapolated to that other product that you don't actually
end up applying as much, then the dose response curve might come into
play and it may not be as efficacious.

		CHAIR FISHER:  So you would recommend that the data is not such?  I
guess what I'm trying to say is is there a recommendation that we're
giving?  Because it sounds like we're saying should he change -- if the
dosimetry shows that it's not the same,  at that point is there
something else to do?  I guess I was just --

		DR. CHAMBERS:  I would just say that the data are not going to be applicable to extrapolate to the towelette if the dosimetry on the towelette -- which may not actually even be proposed in here, I don't know -- shows that people apply a whole lot less of it.  So EPA may not be able to use that data on that product, that complete protection time.

		CHAIR FISHER:  Okay.  

		MR. CARLEY:  That's a helpful comment.  And the way we would pass that
on I think would be to say we suggest that you add the towelette to the
dosimetry phase and see whether it produces an equivalent effective
dose.  If it does, then just go forward as planned in the field phase.
But if not, you'd better test that one, too.

		CHAIR FISHER:  Good.  Okay.  That's where I wanted to go with that so
we can be helpful.

		Who is next?  Michael?

		DR. LEBOWITZ:  I'm glad Jan goes before me.  Well, yes.  I mean you
eventually get to talk.

		Yes. I don't want to talk about the number of subjects but the way in
which they're allocated needs to be specified better, as mentioned. And
whether each subject is a replicate or not in the application phase.

		What surprises me is there's no -- maybe there has been historically, but I didn't see any control, a positive control of the inert or matrix ingredient.  There's no blinding in the current design, but that could have been built in, as it occurs in other kinds of lab studies.

		There's no explicit control of temperature and relative humidity or
measurements of different levels of each to determine the relevance of
variability of each.

		There's something about the representativeness of the population, the
fact that it may represent the local population, but it doesn't
represent the U.S. population.  I mean, that might always be a problem.

		The investigator states that individual differences in repellent performance and attractiveness to ticks don't vary very much, and so deviations from the ideal sampling frame will not influence the results' representativeness or their generalizability.  That's a statement that's not clear-cut, not proven; no data are provided.  And I have questions, therefore, about the statement.  And there's, again, no use of the control data.

		I don't know if this is a standard protocol, and I don't want to go through another series of discussions with consultants about ticks as we did with mosquitoes, but some statement as to how standard this is or whether it meets -- how it meets EPA guidelines, et cetera, would be useful in the protocol.  But the bottom line basically is if it's revised sufficiently to take care of the deficiencies mentioned even in EPA review, let alone ours, then the research appears likely to generate scientifically reliable data, as is hoped, for assessing the efficacy of the test substances for repelling ticks.

		CHAIR FISHER:  Thanks, Mike.

		Suzanne?

		DR. FITZPATRICK:  I don't have any other comments.

		CHAIR FISHER:  KyungMann?

		DR. KIM:  Thank you.  

		I think we have a similar problem as was mentioned in the previous
protocol.  But because this is a laboratory tick study I think that it's
a lot easier problem to solve, as was mentioned by Dr. Chambers.

		In the protocol they say that the subjects will be partially randomized.  I have no idea what that means.

		And also in the protocol it says subjects will be assigned to treatment groups on the basis of randomly assigned subject numbers, with a table indicating that subject numbers one through ten will be assigned to lotion, 11 to 20 to the 7 percent pump, and 21 through 30 to the 15 percent pump.  But then in the very next section it says individual subjects may test more than one repellent on separate days.

		And these two statements are simply inconsistent because the former
statement that these 30 distinct unique subjects will be randomized, I
mean that's the implication, whereas the latter implies that there will
be an overlap which we saw early this morning that may create all kinds
of problems depending on how the experiment is carried out.

		So if there is going to be an overlap of subjects among test groups, there should be an appropriate experimental design employed to allow proper statistical inference.
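[Editor's note: if subjects do test more than one repellent on separate days, one conventional design of the kind being asked for here is a Latin square crossover, in which each product appears equally often in each test-day position so that day effects don't confound the products. A minimal sketch, assuming 30 subjects and the three products named in the protocol; the allocation code itself is hypothetical:]

```python
import random

TREATMENTS = ["lotion", "7% pump", "15% pump"]  # products from the protocol

def latin_square(treatments):
    """Cyclic Latin square: each treatment appears exactly once in
    each row (subject sequence) and each column (test day)."""
    n = len(treatments)
    return [[treatments[(r + c) % n] for c in range(n)] for r in range(n)]

def allocate(subjects, treatments, seed=0):
    """Assign each subject a full sequence (one row of the square),
    cycling through the rows so each sequence is used equally often,
    then shuffling which subject gets which sequence."""
    rng = random.Random(seed)
    square = latin_square(treatments)
    rows = [square[i % len(square)] for i in range(len(subjects))]
    rng.shuffle(rows)
    return dict(zip(subjects, rows))

# 30 subjects, 10 per sequence; every product is tested by every
# subject, and on each day each product is tested by 10 subjects.
plan = allocate([f"S{i:02d}" for i in range(1, 31)], TREATMENTS)
```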

		I think that's sort of the key issues as was pointed out with the
mosquito repellent study.

		There are other issues in terms of the specific language in the statistical analysis, such as saying the Kaplan-Meier estimates provide median estimates with a substantially reduced error estimate, which is simply incorrect because it can go either way.  And also one cannot make a direct comparison between the estimate obtained from the Kaplan-Meier method and the typical method that the investigator has been using, namely the mean and 95 percent confidence interval based on normal theory.  Because depending on the extent of censoring present in the data set, they could be widely different.

		And also another statement says that the median based on the Kaplan-Meier method is less sensitive to data censoring.  I think what that meant is that the Kaplan-Meier method can take care of data censoring whereas the normal theory method cannot.

		The other comment -- and there are some minor errors in the study-specific instructions for the protocols.  In the site questionnaires it refers to mosquitoes instead of ticks.  I believe this is a tick study, not a mosquito study, right?

		And there appears to be some discrepancy in the material safety data sheets in the document.  I wasn't so sure -- I mean, this one refers to the lotion, the 7 percent pump, the 15 percent pump.  But there is some overlap of the product item numbers given, and the EPA registration numbers appear to be different.  And so I think that needs to be sort of clarified.

		So I believe the quantification of the efficacy of the test material using the mean and confidence interval based on normal theory is inadequate if there is censoring in the time-to-efficacy-failure measurement.  And, again, this is a recurring theme, but on the discussion of statistical power:  because we're not making any comparison, there's no issue of power.  The only thing that you can talk about is the accuracy of the estimate of the protection times.  So that's sort of an irrelevant, sort of nonsensical statement in the protocol.

		And, again, I mean there is really no statistical justification of the sample size.

		I mean, when you start a line of investigation, at the beginning you
may not have a very good sense of what the variability is.  Now, I
understand that this laboratory has been doing these studies many times
over; they should have a pretty good sense of the variability and use
that information to properly justify their sample size.  And I don't see
that happening.  I mean, this is almost -- yes, more than a year, right?
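
The kind of sample-size justification being asked for can be sketched as follows. The standard deviation and precision target below are invented for illustration, and the z-value assumes a normal-theory 95 percent confidence interval on the mean complete protection time.

```python
# Sketch only: the SD and precision figures below are invented.
# If earlier studies supply a variability estimate, the sample size can
# be justified by the width of the normal-theory confidence interval on
# the mean complete protection time.
import math

def n_for_ci_halfwidth(sigma, halfwidth, z=1.96):
    """Smallest n such that z * sigma / sqrt(n) <= halfwidth."""
    return math.ceil((z * sigma / halfwidth) ** 2)

# e.g. suppose prior studies suggest an SD of about 1.5 hours, and the
# lab wants the mean protection time pinned down to within +/- 1 hour
n = n_for_ci_halfwidth(sigma=1.5, halfwidth=1.0)
```

Under these invented assumptions n comes out to 9; the point is not the particular number but that a protocol drawing on a laboratory's accumulated variability data could state such a calculation explicitly.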

		CHAIR FISHER:  Okay.  I think it's important that we be consistent. 
And so I want to understand, number one, where we're going in this study
versus where we went in the last study.  The last study I think you
mentioned something about -- there are a number of problems that can
easily -- we can say it has to be more justified in terms of temperature
and the relevance of all these variabilities.  I mean, I think there are
some items that you picked up.  But the notion of partial randomization
-- we're not sure what that means -- and the kind of inconsistency that
they're going to be randomized but then they may be tested in more than
one condition.

		So it doesn't seem from that that we know how this could be analyzed,
unless -- so we just need consistency.

		Is there something about this study because it's a lab study that we
can say this is what we expect, don't use repeated measure.  You know
what I mean?  Otherwise, we have to say it's premature again.  So we
should be consistent.

		So is there a difference in this study that would lead us to a
different conclusion than the other study?

		DR. KIM:  Well, I guess it is the sensitivity that we have gained
through the experience over a year and a half.

		When you read a statement saying that there will be randomization with
such-and-such rules, my tendency is not to question whether there's
going to be sort of an overlap.  But apparently they are planning to
have some overlaps.  Then what they state in one section of the protocol
is completely inconsistent with the other part of the section. And what
I'm saying is that if there is going to be overlap, they should plan it
properly so that they could interpret the data.

		CHAIR FISHER:  Yes, Jan.

		DR. CHAMBERS:  These are three independent products again.  The design
is not to compare them, correct?

		MR. CARLEY:  No.

		DR. CHAMBERS:  So if there is an overlap of subjects between products,
it doesn't really matter, does it, because they're not going to be
compared?

		DR. KIM:  Right.  But, again, the issue is that the way the experiment
is conducted -- your design -- dictates how the error is estimated.  And
with an overlap of the type that we have seen earlier, you can get into
a situation where you cannot estimate the standard errors.

		DR. CHAMBERS:  But again, I think the duration is the thing that
they're interested in for the label, right?

		DR. KIM:  Right.

		DR. CHAMBERS:  I mean, are you guys going to advise Dr. Carroll about
that?  But in my mind, if the only thing you're interested in is that
one time point, then it shouldn't matter.

		DR. KIM:  Well, I would like to add to Dr. Chambers' comment that the
duration is the quantity that EPA cares about.  When one does
statistical inference, the mean itself is never sufficient.  I mean, I
don't want this EPA regulation to be below the level of a USA Today;
even they provide the standard errors for every survey they conduct.
And this is supposed to be a review of the scientific criteria for the
study.  And the mean is sufficient?  I don't think so.  The mean itself
has no value.  You really have to understand the variability associated
with the estimate.

		CHAIR FISHER:  Let me ask a question.  How many subjects are in each
of these?  Ten.

		If in fact he used ten different subjects, would this be okay?

		DR. KIM:  Oh, yes.  Absolutely.

		CHAIR FISHER:  Okay.  So isn't that our recommendation?  Let's just --

		DR. KIM:  No.

		CHAIR FISHER:  No?  Why not?

		DR. JOHNSON:  He says he has ten subjects at each of two sites.  

		DR. KIM:  No, there's no sites.  Yes.

		DR. JOHNSON:  Well, there is in the protocol.

		DR. KIM:  In the huge trial with the mosquito case --

		DR. JOHNSON:  No, but I'm looking at the tick protocol.

		DR. KIM:  Then that's in error.

		DR. JOHNSON:  Okay.  So in the other ones who actually had 20
subjects.

		DR. KIM:  Right.

		DR. JOHNSON:  Ten at each location.  So now we're just going to have
ten at one location, is that right?

		DR. KIM:  In the lab.

		CHAIR FISHER:  Per condition?

		DR. JOHNSON:  Per condition.

		CHAIR FISHER:  Per condition.  And if it wasn't repeated measures --
if in fact there wasn't a partial randomization, or whatever he's
talking about -- if it was pure randomization of 30 subjects, each one
in one condition, or pure randomization of ten subjects, the order of
which is different because, as you said, they're not comparing the
products, is that sufficient to get the standard deviation that you're
talking about?

		DR. KIM:  Well, I mean it's really up to the investigator how they
carry out the experiment.  I'm not in a position to dictate what they
do.  

		The only point I'm trying to make is that if they say that they're
going to randomize 30 distinct, unique subjects, that will be perfectly
fine.  If they want to use some sort of a crossover, that will be
perfectly fine.  But it has to be planned.

		And we saw the danger of not planning and going into doing all these
--

		CHAIR FISHER:  Right.  But I'm just trying to understand the
difference between this study -- this study seems simpler to make a
recommendation for.  We didn't seem to have enough information from the
other study, which is why we said it was premature for us to be able to
look at it.

		It sounds to me like saying, look, use 30 different subjects and don't
pool your results, or whatever -- all the problems that were in the
others, which this looks like it will have, except for some of the
qualms about the toilet -- towelettes.  Oh, I'm so tired.

		You know, we just have to be clear.  But we need to be consistent. 
And so it sounds to me that this -- we can make a recommendation that if
you do this, if that's the case -- I know you don't want to make the
recommendation.  You know, I understand the discomfort with it, but I
think we have to be clear about what we will not find acceptable.

		We will not find acceptable data which statistically can only provide
means without standard deviations.  We said in the last study that we
reviewed, in which we had data, that that basically was insufficient
except for the most global measure, because we didn't have the standard
deviations.  

		And so I think we need to articulate what is acceptable and what is
not in this particular study.  And so, I would just like us to do that.

		DR. KIM:  Well, the simple answer to that question is: provide an
experimental design that will provide interpretable data.

		CHAIR FISHER:  Mike?

		DR. LEBOWITZ:  Yes.  I think we're giving guidance to EPA, as advice,
as to what they should insist on in terms of what they demand of the
investigator or the sponsor.  And it's up to them to say, well, we won't
let you go ahead until we see a revised protocol that meets the
guidance, the advice provided by the HSRB.  Because, you know darn well,
if you spend your time and effort again and money and so forth doing
this kind of study without doing that, then you're going to run the
risk, maybe a very big risk, of it being rejected.

		CHAIR FISHER:  Okay.  So I just want to clarify.  Are we saying that
this is also premature, or are we saying -- while letting EPA know that,
one way or the other, either it's a repeated measures design with
whatever needs to be there to be able to get standard deviations, or
it's a between-subjects design -- you know, we all have different
language -- but that it will not be acceptable if it's a partial
whatever, and the randomization has to be explicit?  And there has to be
a sufficient number of subjects to have the normal -- I'm not using the
right language -- but the normal estimation to allow for standard
deviations, because we don't want to see a mean for which we have no
idea what the real variability is.  Is that our recommendation?

		MR. CARLEY:  A suggestion for how you frame your recommendations.  I
hope you can avoid the word "premature," because if you're saying that
we shouldn't have brought it to you, it's not clear to us what that
means --

		CHAIR FISHER:  For this one I'm not saying that.

		MR. CARLEY:  -- if what you mean is -- well, for the other one as
well. If what you mean is we want to see this again before it is
executed, which I think is what you're trying to get to.

		CHAIR FISHER:  Yes. Yes.

		MR. CARLEY:  Please frame it that way rather than talking about
premature.

		CHAIR FISHER:  Okay.  So for the other study it was definitely that,
we want to see it again. But it just sounded from Mike that that's not
what we're saying for this study.  That we can be more specific in terms
of what we would see as acceptable criteria.

		So, Mike?

		DR. LEBOWITZ:  Yes.  I would like some type of confidence from Mr.
Carley and/or Mr. Sweeney that if we provide that kind of advice, that
in fact that's sufficient and useful for them to work with the
investigator or sponsor to make sure that --

		CHAIR FISHER:  I think that's an excellent question.  So you heard
what our advice would be: that we're not dictating a repeated measures
or between-subjects design, but we are saying that if whatever design it
is is not sufficient to provide standard deviations and an appropriate
statistical analysis, then when it came back for us to see it, it would
not be acceptable. 

		Is that helpful to you or is it not helpful to you?

		MR. SWEENEY:  That is helpful.  Yes.  And will be used because, yes, I
don't want to come back.  No.

		CHAIR FISHER:  But we want to be informative.  I mean we want to be
helpful.  That's what -- I'm glad you raised that, Mike.

		Okay.  So that's our recommendation.  I can't say it again.  Whatever
it was, I can't say it.  I can't say towelettes.  We all know what it
is. 
Okay.  

		So is that where we are with this one?  Okay.  

		Ethics review?

		DR. CHAMBERS:  Wait, wait, wait. I had a couple of things I wanted to
respond to about Mike. I don't know what you said about controls, but
there are no controls?  Is that --

		DR. LEBOWITZ:  That's a comment about weakness. But it's not going to
be included in the recommendation or advice we give as to how the study
should be done.

		I didn't say there weren't any controls. I said there was no analysis
in which the control data were used, as far as I could determine.  I
didn't see any analytic framework design.

		MR. CARLEY:  I agree with that observation.  But you did make another
comment where you criticized the absence of testing with a vehicle --
with a matrix --

		DR. LEBOWITZ:  Yes, but that's just a --

		MR. CARLEY:  -- which we do not require or want to see.

		DR. LEBOWITZ:  I understand that.  But that doesn't prevent me from
commenting on that.

		In the same way, let me say I said something about temperature and
relative humidity.  You know, way back when, when I was doing talks, we
knew the variability of response in different ranges of temperature and
relative humidity to the pollutants we were studying, and we were able
to then standardize it within a certain range of temperature and a
certain range of relative humidity so the experiment could go on, and
that was sort of accepted.  And I don't know if that's true in this
case.  But the issues have come up about variations that may be related
to temperature or relative humidity, light intensity and all that jazz. 
And I just make the comment in here.

		CHAIR FISHER:  Okay.  But we do understand there is no control and
there's no positive control.  And that's still -- our recommendation is
the same.

		DR. JOHNSON:  Well, they used a control arm to decide whether to use a
tick or not, is that right?  So if the tick doesn't act like it wants to
go to --

		CHAIR FISHER:  Oh, right.  Right.  Yes.  Exactly.

		DR. JOHNSON:  -- go to a host, then they throw that tick away.

		CHAIR FISHER:  Exactly.

		DR. JOHNSON:  Throw that tick away and pick a new tick.

		CHAIR FISHER:  That's right.  Thank you.  That's exactly right.

		Okay.  Let's go to the ethics.

		DR. PHILPOTT:  Okay.  And since I've been in this building now for 10
hours, thankfully I've written this one out pretty much already.  So I'm
going to read it in the hopes that I just get two dittos.

		Once the recommended changes outlined in Mr. Carley and Mr. Sweeney's
ethics and scientific review are incorporated into the protocol as
submitted, and I'm focusing on the protocol not on the potential conduct
and the issues related to interaction with the IRB, the proposed
research described should meet the applicable requirements of subparts K
and L. 

		Risks to study participants are minimal and justified by the likely
societal benefits.  The risks are essentially threefold: reaction to
the test materials themselves, exposure to biting arthropods, and
possible exposure to arthropod-borne illnesses.

		The active ingredient of these formulations is already commercially
available and present in similar concentrations in other registered
products.

		Volunteers with known allergic reactions to insect repellents and
common cosmetics are excluded from participating.  So enrolled
participants are unlikely to be at increased risk of experiencing
adverse side effects upon exposure.

		Clear stopping rules have been developed as have plans for the medical
management of any side effects or adverse events associated with product
exposure.

		Bites from the ticks are probably unlikely -- we won't say a zero
probability, in deference to Mr. Carley -- but negligible given what Mr.
Sweeney has told us about the seeking and biting behavior of these
ticks.

		Barring the EPA's concerns about Rocky Mountain Spotted Fever, which
can be transmitted transovarially, the ticks used for the study are bred
and raised in a laboratory environment and are considered to be
pathogen-free, which should minimize the risk of vector-borne disease.

		I do support their recommendations, though, to look into the issue of
the Spotted Fever.

		And finally, the protocol contains several mechanisms designed to
minimize coercion in recruitment and enrollment.  Compensation is not so
high as to unduly influence participation.  As per the final rule's
requirements, minors and pregnant or lactating women are specifically
excluded from volunteering.  And the use of so-called alternate subjects
allows volunteers to withdraw or be excluded without compromising their
confidentiality.

		I agree with Sue's comments that, unless there is an explicit reason
to exclude individuals above the age of 55, from a justice perspective
that should be changed.

		And I also agree with the recommendations that the psychosocial risks
associated with a positive pregnancy test, however unlikely or minimal
those may be, should be mentioned in the consent document.

		Done.

		CHAIR FISHER:  Richard?

		DR. SHARP:  Kudos.

		CHAIR FISHER:  Yay.  Sue?

		DR. FISH:  Sean, you included my one comment, so ditto.

		CHAIR FISHER:  Okay.  I have no idea what our recommendation is.  But
it sounds like what he said: it's okay.  As long as that's followed, it
goes.

		Okay.  So now I want you to know we ended 20 minutes early.  

		We're not done?

		MR. CARLEY:  We have one left?

		CHAIR FISHER:  We have one left?  Oh, God.

		MR. CARLEY:  I thought we still had one to go.

		CHAIR FISHER:  You are really tired.  But 20 minutes early.  Very
good.

		(Whereupon, at 6:22 p.m. the Public Meeting was adjourned.)

 

 

	NEAL R. GROSS

	COURT REPORTERS AND TRANSCRIBERS

	1323 RHODE ISLAND AVE., N.W.

(202) 234-4433	WASHINGTON, D.C.  20005-3701	www.nealrgross.com


