This document responds to the October 21, 2020 response from FHR regarding its April 21, 2020 request for an alternative means of emission limitation (AMEL) for the sensory and Method 21 leak detection and repair programs in the FHR Corpus Christi Mid-Crude and Meta-Xylene process units. The questions below are separated by topic: detection response framework (DRF), leak detection sensor network (LDSN), and equivalence demonstration. Questions that were answered fully have been moved to the end of the document for reference; topics that raised additional questions appear at the top of this document. 
--------------------------------------------------------------------------------

QUESTIONS

DRF Work Practice Specific Items:
DRF Process:
Remaining questions related to question 2 under DRF Process:
a. How are the Categories related to the DTU? It is not clear how, or whether, the Categories support the 18,000 ppm DTU stated for equivalence.

b. Can you explain the algorithm?

c. The response states, "Category 2 and Category 3 notifications are generated by the LDSN algorithm when positive detections are registered at least 5% of the time over a rolling 72-hr period." Are the positive detections absolute measured values, or are they evaluated against a baseline trend? Do all sensors contribute to the LDSN?

Remaining questions related to question 3 under DRF Process:
d. The repair timeline is too vague for Category 1 leaks. What do you mean by "appropriate immediate actions"?

Remaining questions related to question 4 under DRF Process:
e. More specificity is needed regarding the screening investigation to locate emission sources. While EPA understands that FHR would like discretion on which screening tool is used for the initial investigation, is it possible to add specificity for the minimum capabilities the screening instrument must meet to be used? This would be similar to capabilities listed for OGI or Method 21 instruments and EPA would work with FHR to write out those specifics. We need to ensure the screening instrument would be capable of identifying the emissions source, especially given that this investigation will only last 30 minutes according to the response. If this is not feasible for FHR's deployment of the program, please explain.


f. The following is in the first bullet of FHR's response to question 4:
Because the LDSN is a continuous monitoring system, if an unidentified leak source remains within that given area, the system will automatically issue a new PSL thus triggering a new investigation if a category threshold (as described in FHR's response to question #2 in the "DRF Process" section above) has been exceeded.

Please provide some additional clarity for this statement. How long does a PSL stay active? Is it until you have made some kind of repair? How will you know that a new one has been issued?


g. The following is in the second bullet of FHR's response to question 4:
If the source is determined to be associated with MSS activity or other authorized emissions, the information will be documented and the PSL can be closed. 
Please provide additional details related to MSS activity and authorized emissions. 
 What criteria will be used to determine if the emission source is associated with MSS activity or other authorized emissions?
 Will any further investigation of the PSL take place?
 Will FHR identify "authorized emissions" upfront?


h. The following is in the third bullet of FHR's response to question 4:
After 30 minutes of active searching without at least one leak source of > 3,000 ppm or at least 3 leak sources that are less than 3,000 ppm but above the leak definition for the given components having been identified, the initial search can be discontinued. Within 7 business days of completing an initial unsuccessful search, a second screening search will be conducted. This allows additional time for the system to take in more data for analysis and to refine the location probability. The second search will utilize the same parameters (one leak > 3,000 ppm, three leaks < 3,000 ppm, or 30 minutes active screening) as the initial investigation.
 No justification has been provided for the 30-minute initial screening investigation. Why is this timeframe appropriate?
 Why is 3,000 ppm used instead of the most stringent leak definition of 500 ppm? 
 What occurs for ongoing PSLs during the 7-business-day period after an unsuccessful search? Are those PSLs ignored?

i. The following is in the fourth bullet of FHR's response to question 4:
If the second investigation did not identify the leak source and the PSL detection level later increases to 2x the initial level, a PSL Update notification will be sent. A new DRF search will be started within 3 business days of the PSL Update notification. This step will be repeated each time the PSL detection level increases to 2x above the previous PSL detection level.
 Please explain how this translates to the rest of the system. 

 Does the 2x increase happen during the 7 days as well? Or would any increase during the 7 days be ignored until after the second investigation?


j. The following is in the fifth bullet of FHR's response to question 4:
If a leak source has not been identified and the PSL has not updated within 14 days or more, the PSL can be automatically closed. Were the source to later return above the PSL thresholds described in the "DRF Process" section of this document, a new PSL would be generated. This reinitiates the DRF process.
 What do you mean by "the PSL has not updated"? Is this on top of the second investigation or will "no update" override the need for the second investigation?

k. The following is in the sixth bullet of FHR's response to question 4:
After 90 days without the unidentified potential leak source worsening (i.e. the 2x trigger), one more screening will be conducted. If the source still has not been identified, the investigation can be annotated to indicate No Leak Source Found and PSL can be closed. Were the source to return above the PSL thresholds described in the "DRF Process" section of this document, a new PSL would be generated. This reinitiates the DRF process.
 Would every required LDAR component in the PSL be monitored at this point? If not, why not?
 Please explain why the "leak source worsening" is defined as 2x trigger and not another value such as 10x plus DL. 

l. Questions related to screening approach, monitoring for leaks, and repairs:
 Can the screening tool be pre-determined, or will selection of the screening tool be specific to the PSL as it is first issued? How will FHR determine what screening tool is appropriate?
 There are several places where FHR mentions searching for leaks from sources with the potential to emit VOC. This should also include HAP.
 What is the justification for the statement, "the screening approach has shown to be a more efficient way to find the leaks that contribute to the generation of a PSL"?
 Would all components be monitored until a leak >3,000 ppm was found when using the screening approach to identify the emission source?
 How long does it take to generate a new PSL if the source has not been located and the PSL is closed?
 FHR states the PSL will continue to update if the emission source is not identified during initial screening. Do you ignore updates for 7 days? Do you also ignore updates after the second screen if they are not 2x the initial level? How would this affect the DTU? It seems this would trend the DTU upwards.
 Why would a standard repair deadline not be preferred for all leaks, regardless of Method 21 applicability status in the underlying regulation? An alternative like the LDSN-DRF lends itself easily to detecting and mitigating emissions from leaks from sources not subject to Method 21 LDAR requirements. Repair "as soon as possible" is too vague for identified sources of emissions. If the 5/15 repair deadline is not appropriate, please explain why and provide more specific timelines for these non-traditional LDAR emission sources.
 If a single leak of > 3,000 ppm or 3 leaks of less than 3,000 ppm are found that are non-LDAR or authorized, how will this impact the identification of other leaks in the same general area that are subject to an LDAR requirement?
 EPA asked about AVO that are identified during general operations. The response from FHR indicated that "AVOs identified would continue to be investigated, repaired, and reported accordingly per the applicable rule/requirement." It is the EPA's understanding that the AMEL would become the applicable requirement. Therefore, this question still remains. How will FHR handle AVOs identified during general operations?
 FHR's response to question 6 (5/15 repair clock) indicates the simulations assumed 7 days from leak detection until leak repair, with 3 days added for LDSN-DRF to account for the time from leak existence until leak detection. What happened in practice during the pilot test? Did repairs occur within 7 days of detection?

m. EPA is continuing to review the recordkeeping and reporting responses and will provide separate feedback for these items.
--------------------------------------------------------------------------------

LDSN Specific Items:
General:
a. FHR indicated they plan to use P&IDs and PFDs to identify equipment utilizing the LDSN. There are still many enforceability concerns with this as outlined below:
 How often do P&IDs need to be updated in order for compliance to be demonstrated? It is very common for inspectors to find outdated P&IDs, which would not represent a full accounting of what equipment is covered by the LDSN.
 There is an enforceability concern in that the AMEL/AMOC request includes no tagging and tracking of components using the LDSN. How can proper sensor placement/density be ensured if components are not tracked/recorded?
 How can effective investigations of potential leakers occur if there are an unknown number and type of components in a sensor area?

b. The response provided related to question 2 for DOR is not clear. 
 Is a discrete PSL generated for the DOR component alone, with any additional leakage still being picked up? How does that work? What if the DOR grows? What if the DOR has a phantom repair? Would no investigation occur for the area designated with the DOR?
 Is FHR ignoring sensors in the downsize area of the DOR?
 How often is the PSL updated?

c. The response to question 3 states, "when the number of leak location estimation overlaps hit a preset threshold value within a given time window, e.g. 5% of the time over a rolling 72-hour window, a notification is issued with the most probable leak source location (PSL) in the form of a boxed area." 
 Is the time-window threshold fixed at 5%, or is the quoted value only an example?
 What is the preset threshold value?

Sensor Placement:
d. Question 3 under "Sensor Placement" was not fully answered: no criteria for sensor node placement have been provided; the response is too vague (a black box); underlying testing has not been provided (specifically, testing to justify the DTU of 18,000 ppm); and the models are not available for review or audit. Please expand on the previous response (provided here for reference).
There are two major steps in the sensor placement design. The first step is to determine individual sensor coverage, or sensor density, through a leak simulation analysis that demonstrates equivalent or better emission control with the LDSN-DRF solution when compared with the Method 21 CWP. Molex's default coverage for each sensor is a 50 ft radius and ±20 ft of elevation, based on the results of our controlled gas release testing and on costs acceptable enough for customers to be willing to adopt the sensor solution. 
The second step is to build a 3D digital model of the process unit in which the sensors are to be placed, and use the sensor density or individual sensor detection zone (i.e., effective coverage from each sensor) from the first step to determine the minimal number of sensors required for full coverage of all LDAR components in the unit. In order to eliminate human errors, Molex developed a sensor placement optimization algorithm which takes in STEP files from the 3D model and the individual sensor detection zone to generate a sensor placement plan. Imagine each sensor has a disc-shaped space around it, with the outer surfaces of the disc representing the boundaries of the sensor's detection zone. The placement algorithm then stacks these discs inside the 3D space of the digital plant to ensure full coverage of LDAR components without excessive overlap. Although it sounds like a mechanical procedure, the sensor placement algorithm utilizes mathematical techniques, including a mixed global optimization strategy and stochastic linear programming, to determine the optimal sensor locations and generate X, Y, Z coordinates for each sensor location in the process unit. 
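For discussion purposes, the coverage geometry described above can be sketched as follows. This is an illustrative stand-in only: the actual Molex algorithm reportedly uses a mixed global optimization strategy and stochastic linear programming, which this simple greedy set-cover sketch does not reproduce, and all function and variable names here are hypothetical.

```python
import math

# Default detection zone per the response: 50 ft horizontal radius,
# +/- 20 ft elevation (a disc-shaped zone around each sensor).
RADIUS_FT = 50.0
ELEV_FT = 20.0

def covers(sensor, component):
    """True if the component lies inside the sensor's disc-shaped zone."""
    sx, sy, sz = sensor
    cx, cy, cz = component
    return (math.hypot(sx - cx, sy - cy) <= RADIUS_FT
            and abs(sz - cz) <= ELEV_FT)

def place_sensors(components, candidate_sites):
    """Greedily pick candidate sites until every component is covered.

    A crude substitute for the optimization step: repeatedly choose the
    site covering the most still-uncovered components.
    """
    uncovered = set(components)
    remaining = list(candidate_sites)
    plan = []
    while uncovered:
        best = max(remaining,
                   key=lambda s: sum(covers(s, c) for c in uncovered))
        newly = {c for c in uncovered if covers(best, c)}
        if not newly:
            raise ValueError("some components cannot be covered")
        plan.append(best)
        remaining.remove(best)
        uncovered -= newly
    return plan
```

A sketch like this illustrates why EPA is asking for the placement criteria: the resulting sensor count and locations depend entirely on the assumed detection-zone geometry, which is the unverified input.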

e. Can FHR provide a P&ID/map or drawing to show each sensor node location in the mid-crude and meta-xylene units? 

f. What criteria are used to determine if additional sensors are needed? Is this tied only to MOC?

g. Based on FHR's responses, a DTU of 10,000 ppm or lower is technically feasible. The arguments against this approach are limited to costs or rely on reductions of leaks from components not subject to LDAR. This does not fully address EPA's concern. As discussed in the call, EPA would like FHR to examine in more detail what it would take to standardize the DTU, which would allow for broad application of an approved AMEL. 

Sensor QA/QC and Maintenance:
h. The bump test justification appears to contradict the overall basis of the proposal, which is measuring concentrations of VOCs in ambient air. Please provide additional clarification.

i. Questions related to Figure R-3:
 What is the spec?
 What is the post calibration check?
 Is data considered invalid for the past period if the bump check failed?

                Figure R-3. Sensor Calibration Process Flow 
j. In question 9 on system operational availability, FHR states, "a sensor is considered to be down when it sends invalid data". 
 What is invalid data?
 Why are individual sensors allowed maximum downtime of 30% over the past 12 calendar months? This is new and needs additional discussion.
 Was uptime included in the model?
 Justification for 10% system downtime needs discussion.

k. Please provide specific justification for the sensor type to be used, the expected gas stream compositions, and how the sensors meet response factor requirements, both for the particular units in the request and as expected in general for broader approval. 

l. In FHR's response to question 11 on "sensor data", the following statement appears:

If the LDSN-DRF has a 4σ performance level, then an audit method such as randomly selected M21 component monitoring should detect no M21 leaks > 21,600 ppm if 100 components were randomly monitored and should detect 2 or less M21 leaks > 21,600 ppm if 300 components were randomly selected for M21 monitoring. 

Please explain and provide justification for this statement.
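For discussion purposes, one reading of the quoted statement can be checked with simple binomial arithmetic. The leak fraction used below is an assumption, not a figure FHR has provided: it takes "4σ" in the six-sigma convention (with a 1.5σ shift, roughly 6,210 defects per million opportunities). FHR should confirm what fraction the statement actually relies on.

```python
from math import comb

# ASSUMED leak fraction: the six-sigma "4 sigma" convention
# (~6,210 defects per million opportunities). Not confirmed by FHR.
P_LEAK = 6210e-6

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability each audit criterion in the quoted statement would be met
# if the true leak fraction were P_LEAK:
p_zero_in_100 = binom_cdf(0, 100, P_LEAK)  # no leaks among 100 components
p_two_in_300 = binom_cdf(2, 300, P_LEAK)   # <= 2 leaks among 300 components
```

Under this assumed fraction, the pass probabilities come out to roughly 0.54 and 0.71, i.e., the quoted audit criteria could fail a substantial share of the time even at the claimed performance level, which illustrates why justification for the statement is being requested.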


Recordkeeping and Reporting:

m. EPA is still reviewing the information provided by FHR regarding recordkeeping and reporting and will provide separate follow up on this topic.

--------------------------------------------------------------------------------

Equivalence Demonstration Specific Items:
a. Can FHR provide the backup file (.bak) of the LeakDAS data used for the simulation for the specific process units in the request?

b. Please provide data collected during the pilot study, including information collected from operation of the LDSN-DRF and information collected from the quarterly Method 21 monitoring for the process units in the request.

c. Did the model depicted in Figures R-6 and R-7 include the 10% downtime for the system?

--------------------------------------------------------------------------------
APPENDIX - Previous responses from FHR, provided for reference.
DRF Work Practice Specific Items:
1. What components will FHR monitor when a PSL is generated? Will the monitoring focus only on traditional LDAR components first to identify a leak source? 
When a PSL investigation has been initiated in response to a PSL notification, facility personnel will begin the search by using hand-held screening gear. The PSL includes potential leak sources that are LDAR components and non-LDAR components.  As such, rather than at first focusing on component level monitoring, the screening approach is a more efficient way to find the leaks that contribute to the generation of a PSL. By following where the screening data from the hand-held screening gear leads, the technician can gradually home in on the emission source. This source might be an LDAR applicable component or a non-LDAR component. Although experience may guide the technician towards the types of equipment where leaks are more prevalent in a specific process unit, FHR does not anticipate a bias for LDAR versus non-LDAR components.  The EPA provided list shown below is excerpted from Table 1 of the AMEL application. These are types of leak sources that will potentially be encountered within the facility during the PSL investigations.

From page 14 of the response, FHR will monitor the following components when a PSL is generated:
 Agitator - FF
 Agitator - VV
 Agitator - HON
 Compressor - HON
 Compressor - non-HON
 Compressor in Hydrogen Service
 Connector
 Pump - with permit specifying 500 ppm
 Pump - HON
 Pump - VV
 Relief Device
 Valve
 NDE Component
 Agitator - hydrocarbon (HC) but non-LDAR
 Compressor - HC but non-LDAR
 Connector - HC but non-LDAR
 Pump - HC but non-LDAR
 Relief Device - HC but non-LDAR
 Valve - HC but non-LDAR
The monitoring will focus on both LDAR and non-LDAR component leaks. This is correct.

DRF Process:
The DRF process is described on pages 11-14. However, much about the proposed obligations for response remains unclear, resulting in significant enforceability concerns. The following are questions raised by the lack of specificity in this section of the request.
2. How are Categories 1, 2, and 3 defined? Is there a quantified range assigned for each category?
Detection notifications are classified into 3 categories. 
If at least five proximate sensors in the process unit have detections above 1,000 ppb isobutylene equivalent, a Category 1 notification is issued immediately.  A Category 1 detection notification is intended to bring immediate attention and investigation by appropriate personnel as it might indicate emissions large enough to represent a potential safety concern (pegged at 100,000 ppm if measured by M21 for example).
Category 2 and Category 3 notifications are generated by the LDSN algorithm when positive detections are registered at least 5% of the time over a rolling 72-hr period. If the highest detection is above 1,000 ppb isobutylene equivalent at the sensor location, then a Category 2 notification is issued. Otherwise a Category 3 notification is issued. 
While a Category 2 notification is typically related to higher emissions than a Category 3 notification, it is not always the case. This is simply because the measured detection signal is also dependent on the distance from the sensor to the emission source. Category classifications can be useful in prioritizing the DRF workflow when multiple PSLs must be investigated. However, investigations will be initiated within 3 business days for all PSL notifications, regardless of the detection category.
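For reference, the notification logic as EPA understands it from this response can be sketched as follows. This is an illustrative reading only; the function and constant names are not FHR's or Molex's, and the rolling-window bookkeeping of the actual LDSN algorithm is not shown.

```python
# Thresholds as stated in FHR's response to question 2.
CAT1_PPB = 1000.0        # per-sensor isobutylene-equivalent threshold
CAT1_SENSOR_COUNT = 5    # proximate sensors required for Category 1
DETECT_FRACTION = 0.05   # positive detections >= 5% of a rolling 72-hr period

def notification_category(proximate_sensors_above_threshold,
                          positive_detection_fraction,
                          highest_detection_ppb):
    """Return 1, 2, or 3 for the notification category, or None.

    proximate_sensors_above_threshold: sensors currently above 1,000 ppb.
    positive_detection_fraction: share of the rolling 72-hr period with
        positive detections registered by the LDSN algorithm.
    highest_detection_ppb: highest detection at the sensor location.
    """
    if proximate_sensors_above_threshold >= CAT1_SENSOR_COUNT:
        return 1  # issued immediately; potential safety concern
    if positive_detection_fraction >= DETECT_FRACTION:
        return 2 if highest_detection_ppb > CAT1_PPB else 3
    return None   # no notification generated
```

Note that this reading leaves open the questions asked above, e.g., whether "positive detections" are absolute values or deviations from a trend.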
3. No description of leak detection obligations are described for Category 1. Page 12 of the request says the following about Category 1: "Category 1 notifications are for larger potentially impactful emissions that need prompt response by facility personnel. These emissions are anticipated to likely be associated with significant maintenance activities or unplanned events. The Category 1 notification protocol may include direct alarms for operators, for example phone notifications, and/or e-mail notification to appropriate personnel." No further discussion of how leaks are found and corrected is provided.
Facility personnel will respond to Category 1 notifications by surveying the area in which the notification was triggered. Upon discovery of a leak source the appropriate immediate actions will be taken to control the release in line with any safety and/or environmental concerns. FHR's Integrated Contingency Plan will be followed in accordance with the severity of the event. If the emission source is determined to be an LDAR applicable leak, the required repairs will follow the 5/15-day schedule just as it would for a Category 2 or 3 leak.

4. On page 12 of the request, it is stated that PSLs are generated for Categories 2 and 3 and the DRF process begins. A note at the footer of Table 1 on page 14 of the request states, "for new Category 2 or Category 3 PSL: Facility personnel initiate investigation within 3 business days"; however, no deadlines or time frames are provided for completing the DRF leak investigation, and there is a lack of specificity about the level of effort. This is an enforceability issue. Please provide proposed enforceable deadlines for the scope of the required investigation and for completion of the investigation from initial leak detection by the sensor network, as elaborated in the questions below: 
 There is no clarity on the level of effort required by the DRF or on its timing, creating enforceability concerns. Is it one technician with an OGI first? Additional details of the DRF workflow are described in the bullet points below and illustrated in Figure R-1. 
 An investigation will be initiated within 3 business days of the initial PSL being issued for Category 2 and Category 3 events. The investigation will consist of one or more personnel surveying the PSL area with appropriate hand-held screening gear (e.g. OGI, PID, FID, etc.) to locate the leak source. Once a potential leak source has been identified, a Method 21 reading will be obtained and recorded to confirm the leak based on the leak definitions specified in Table 1 of the AMEL request. If a leak source is identified that is 3,000 ppm or greater per Method 21 or if 3 leak sources are identified per Method 21 that are less than 3,000 ppm but above the leak definition for the given component, the investigation can be considered complete and the PSL will be closed out once the leak(s) has been isolated, repaired, or placed on DOR. Because the LDSN is a continuous monitoring system, if an unidentified leak source remains within that given area, the system will automatically issue a new PSL thus triggering a new investigation if a category threshold (as described in FHR's response to question #2 in the "DRF Process" section above) has been exceeded.
            

 If the source is determined to be associated with MSS activity or other authorized emissions, the information will be documented and the PSL can be closed. 
            
 After 30 minutes of active searching without at least one leak source of > 3,000 ppm or at least 3 leak sources that are less than 3,000 ppm but above the leak definition for the given components having been identified, the initial search can be discontinued. Within 7 business days of completing an initial unsuccessful search, a second screening search will be conducted. This allows additional time for the system to take in more data for analysis and to refine the location probability. The second search will utilize the same parameters (one leak > 3,000 ppm, three leaks < 3,000 ppm, or 30 minutes active screening) as the initial investigation.
            
 If the second investigation did not identify the leak source and the PSL detection level later increases to 2x the initial level, a PSL Update notification will be sent. A new DRF search will be started within 3 business days of the PSL Update notification. This step will be repeated each time the PSL detection level increases to 2x above the previous PSL detection level.
            
 If a leak source has not been identified and the PSL has not updated within 14 days or more, the PSL can be automatically closed. Were the source to later return above the PSL thresholds described in the "DRF Process" section of this document, a new PSL would be generated. This reinitiates the DRF process.
            
 After 90 days without the unidentified potential leak source worsening (i.e. the 2x trigger), one more screening will be conducted. If the source still has not been identified, the investigation can be annotated to indicate No Leak Source Found and PSL can be closed. Were the source to return above the PSL thresholds described in the "DRF Process" section of this document, a new PSL would be generated. This reinitiates the DRF process.

            
            
                                       
               Figure R-1. DRF Process Flow for PSL Closures 
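The search-completion and escalation rules from the bullets above can be sketched as follows (illustrative only; the names are EPA's, for discussion, and the business-day scheduling of searches is not modeled):

```python
def search_complete(confirmed_leaks_ppm, elapsed_minutes,
                    leak_definition_ppm=500):
    """Per the bullets above, an initial or second screening search may
    stop when one confirmed leak is at or above 3,000 ppm, when three
    confirmed leaks fall between the component leak definition and
    3,000 ppm, or after 30 minutes of active searching."""
    large = [p for p in confirmed_leaks_ppm if p >= 3000]
    small = [p for p in confirmed_leaks_ppm
             if leak_definition_ppm <= p < 3000]
    return bool(large) or len(small) >= 3 or elapsed_minutes >= 30

def psl_update_due(current_level, level_at_last_search):
    """A PSL Update notification (triggering a new DRF search within
    3 business days) is issued each time the detection level doubles
    relative to the level at the previous search."""
    return current_level >= 2 * level_at_last_search
```

Writing the rules out this way makes the open questions concrete: for example, whether detection-level increases during the 7-business-day window count toward the 2x trigger, and why 3,000 ppm rather than the most stringent leak definition bounds the search.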
 How many components within the PSL need to be monitored, and by when?
 The area identified by the PSL will be surveyed with the appropriate tools for each situation and an investigation will begin within 3 business days of the notification for the new PSL.  The survey of the area includes searching for leaks from any source with the potential to emit VOC, not just LDAR applicable components. As discussed in FHR's response to question #1 in the "DRF Process" section above, rather than at first focusing on component level monitoring, the screening approach has shown to be a more efficient way to find the leaks that contribute to the generation of a PSL. By following where the screening data leads, the technician can gradually home in on the emission source.

 What monitoring instrument is required?
 The initial survey of the area can use any appropriate tool. Some examples of these screening tools may include but are not limited to OGI (FLIR GF320), ppm or ppb level analyzers such as Ion Science Tiger, Thermo Scientific TVA, or the LDARtools PHX. Once a potential leak source has been identified on a piece of equipment, a M21 compliant instrument will be used to confirm the leak based on the leak definitions in Table 1 of the AMEL request. 

 While there is some clarity as to when a leak is assumed to have been found, what happens if that assumption is incorrect and the leak continues?
 If the PSL is closed because the leak source is assumed to have been identified and corrected, the system will automatically issue a new PSL when a category threshold (as described in FHR's response to question #2 in the "DRF Process" section above) is exceeded. The new PSL will trigger a new investigation so that the remaining leak source can be identified.
            
 What happens if the leak is not found?
 This is described in response to the first bullet of question #4 in the "DRF Process" section of this document.
            
 Is there a point in time when all components must be monitored within the PSL? What if all components are monitored and the leak continues?
 The sensors provide continuous monitoring for leaks from any source with the potential to emit VOC within the given area. Since this system is set up for continuous monitoring, the PSL will continue to update as more data is gathered should a leak source not be identified initially. When completing a screening survey of the area in response to a PSL, facility personnel are looking for any potential indication of leaks within the area, not just LDAR applicable components.

            
 Table 1 on page 14 specifies leak definitions for non-LDAR equipment. The table further states, "follow emission event reporting and repair guidelines." What does that mean? "Authorized Emission" and "MSS" are listed in Table 1 with no leak definition and no obligation for repair. What is the intention of this? 

 Table 1 provides LDSN leak definitions for "non-LDAR" components. Some examples of non-LDAR components for which leaks could be detected by LDSN include equipment in hydrocarbon service but not subject to traditional LDAR programs, pin hole leaks in piping, etc. Under current work-practices, leaks from non-LDAR equipment are evaluated as potential reportable emissions using applicable reportable quantity (RQ) values. Even if below the RQ, the emissions are considered as a recordable emission event under TX rules (30 TAC 101.201(b)). Emission event reporting and repairs are (and will continue to be) managed in accordance with FHR policies/procedures to ensure compliance with applicable reporting requirements and follow up repairs. This includes reporting the non-LDAR leak emissions as Title V deviations. With the current work-practices, non-LDAR leaks are typically discovered as AVOs since scheduled M21 monitoring is not required. For any unauthorized emissions, repairs are completed as soon as practicable and tracked/reported accordingly. Although FHR does expect to detect these emissions earlier due to the continuous monitoring provided by LDSN, it is important to note that any potential emission reductions from non-LDAR components were conservatively not included in the M21 equivalency demonstration. 
            
 FHR guidance documents, based on regulations and agency guidance documents, are utilized to make determinations on emissions event reporting and recordkeeping requirements. FHR reviews incident reports in its database that occur during the reporting period and includes them in its Title V Semi-Annual Deviation Reports if applicable.
            
 During the PSL investigation, FHR may determine that the originating cause of the PSL notification is an emission source that is currently authorized in existing air permits. For example, if a stack is an authorized emission source and is determined to be the cause of the PSL, the investigation could be closed with "Authorized Emission Source" as the closure justification. FHR will maintain documentation to demonstrate the emission is authorized (e.g. equipment identification, emission point name (EPN), permit number, or First Report case number, etc.)  Similarly, those maintenance, start-up, or shutdown activities which are authorized via permit will be noted as "MSS" on the PSL closure documentation. 

 What is the obligation for repair when a non-LDAR component or piece of equipment, to which no leak standard applies in LDAR regulations, is found to be the cause of the triggered PSL? If there is no obligation to correct leaks from non-LDAR equipment, "authorized emission(s)," or "MSS" events, how will this impact the identification of other leaks in the same general area that are subject to an LDAR requirement? 

 The obligation for repairing non-LDAR components or pieces of equipment to which no leak standard applies is to meet the requirements of the general duty clause in 40 Code of Federal Regulations 60.11 and the affirmative defense criteria per 30 TAC 101.222, referenced in FHR's Title V Operating Permit. FHR's response to the prior bulleted question includes additional information about emission events detected via LDSN. 
 If authorized emissions or authorized MSS activities impact sensor detections and PSL generation, the same process that is used to address ongoing DORs can be utilized. This approach is discussed in FHR's response to question #2 of the "LDSN Specific Items" section of this document.

 Will FHR continue to document and investigate AVO that are identified during general operations, or will these be ignored unless the LDSN detects something in that area?

 AVOs identified by facility personnel during general operations will continue to be investigated, repaired, and reported accordingly per the applicable rule/requirement.
            
            
5. Page 8 of the request states, "In practice, the DRF process often identifies actionable leaks that are well below the DTU (>500 ppm but less than the DTU)." Please clarify what is intended by the parenthetical. Do leaks below 500 ppm above baseline trigger a PSL investigation response or are they ignored? Is a PSL triggered for any leak above baseline or is there a threshold (e.g., 500 ppm)? How is the leak ppm level above baseline calculated?
The statement and parenthetical were intended to avoid the incorrect assumption that a DTU of 18,000 ppm indicates that only leaks > 18,000 ppm will be detected. During the pilot, many small leaks under 18,000 ppm were successfully detected and located. The median leak levels found were 3,315 ppm for the m-Xylene unit and 3,153 ppm for the Mid-Crude unit. The thresholds for PSL generation are described in FHR's response to question #2 of the "DRF Process" section above. Whenever a new PSL has been generated, a PSL investigation will be initiated. As part of that investigation, any M21 fugitive emission measurement for a component that is higher than the leak definition specified in Table 1 of the AMEL application is considered an actionable leak requiring repair. Since the lowest leak definition in that table is 500 ppm, fugitive emissions from components measured by M21 at <500 ppm do not trigger a repair requirement. The equivalency demonstration followed that logic and did not assume repair actions or emission reductions for components that were below the leak definition.

6. When does the 5/15 repair clock begin? Is it at PSL generation or at component-level identification?
During the PSL investigation, if a component is monitored and the M21 result for that component is at or above the applicable leak definition specified in Table 1 of the AMEL request, then a leak has been detected. The date of the M21 monitoring will be recorded, and it starts the "5/15 repair clock" for LDAR components. This approach was included in the equivalency demonstrations in the AMEL request and also in the pending CRADA report. Non-LDAR components follow the emission event process described above. The potential emission reductions from earlier detection of non-LDAR leaks were not included in the equivalency demonstration. 
As part of the comparison of M21 CWP and the LDSN-DRF included in Appendix A of the AMEL request, the model assumes that once a specific leak has been detected it will be repaired 7 days later. This assumption is based on historical records and on the requirement under both methods to conduct an initial repair attempt within 5 days of an M21 leak being detected and a final effective repair no later than 15 days after the leak was detected. Thus, for the M21 simulation, the model maintained the leak at a constant emission rate for 7 days. There is some anticipated time lapse between the initial LDSN potential source location (PSL) notification and the time that a specific leak is detected utilizing DRF activities. So, while both the M21 and the LDSN-DRF simulations similarly assumed 7 days from leak detection until leak repair, the LDSN-DRF simulation also conservatively included 3 additional days at the elevated rates to account for the time from leak existence until leak detection.
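The timing assumptions above reduce to a short calculation. The sketch below is illustrative only: the leak rate is a hypothetical placeholder, not a value from the Appendix A model.

```python
def total_emissions_g(leak_rate_g_per_hr, days_at_elevated_rate):
    """Mass emitted while a leak persists at a constant rate."""
    return leak_rate_g_per_hr * 24 * days_at_elevated_rate

LEAK_RATE = 10.0  # g/hr, illustrative placeholder only

# M21 simulation: leak held at a constant rate for 7 days
# (detection through repair).
m21_total = total_emissions_g(LEAK_RATE, 7)

# LDSN-DRF simulation: the same 7 days from detection to repair, plus
# 3 conservative days from leak existence until detection (10 total).
ldsn_total = total_emissions_g(LEAK_RATE, 7 + 3)

print(m21_total, ldsn_total)  # 1680.0 2400.0
```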



7. What specific information will FHR keep in the record related to the DRF? The request needs to specify which paragraphs of each subpart still apply and which will be replaced by requirements in the AMEL. 
FHR will provide in a separate document a list of the citations and paragraphs which still apply, and those which will be replaced by requirements in the AMEL. 
      p. 16-17
             Fugitive Emission Management Plan
 FHR will maintain records to demonstrate which portions of the facility are complying with M21 CWP, OGI AWP, or the LDSN-DRF AMEL. This might include one or more of the following formats - listing by process unit or block; listing by process streams; electronic plot plans; annotated P&IDs, process flow diagrams, or block flow diagrams. Other documentation may be used instead, as long as the compliance approaches being used are clearly represented.
                  
             Sensor Selection
 FHR will maintain records to demonstrate that the sensor type in use has anticipated response factors of <10 for the targeted LDAR applicable process streams. Stream compositions may be based on sample data, feed/product specifications, or process knowledge. Response factors may be based on published data, test results, or accepted calculation methodologies.
            
             Sensor Node Placement
 FHR will maintain records to demonstrate that the node placement for the selected sensor type is designed to provide sensor coverage for all components within the LDSN boundary that were previously M21 CWP applicable. Sensor coverage is defined as having a DTU of 18,000 ppm or less.
                  
             Sensor Initial Calibration and Quarterly Bump Tests
 FHR will maintain records to demonstrate successful completion of the initial calibration and ongoing quarterly bump tests for each installed sensor.
                  
             Sensor Data
 FHR will maintain sensor reading raw data (in mV and/or PPBe) in a manner that can be viewed electronically. Data will be reasonably available for export upon request by EPA or TCEQ.
                  
             Meteorological Data
 FHR will maintain records of wind direction and wind velocity in a manner that can be viewed electronically. Data will be reasonably available for export upon request by EPA or TCEQ.
                  
             PSL Notifications and PSL Investigations
 FHR will maintain records of each PSL notification and associated investigations including the initial PSL notification date, investigation start date, investigation results, M21 reading when an actionable leak has been detected, repair action taken, and M21 reading for effective repair confirmation when applicable.
                  
                  
8. What specific information will FHR include in the semiannual reports related to the DRF? The request needs to specify which paragraphs of each subpart still apply and which will be replaced by requirements in the AMEL. 
FHR will provide in a separate document a list of the specific information that will be included in semi-annual reports including the citations and paragraphs which still apply, and those which will be replaced by requirements in the AMEL.


LDSN Specific Items:
General:
1. How will FHR clearly identify which components are not monitored by the LDSN, and thus continue to be subject to the Method 21 or AVO inspections?
      p. 6: FHR will maintain records to clearly demonstrate which portions of the facility are complying with M21 CWP or OGI AWP, and which are utilizing the LDSN-DRF AMEL. Each component covered by the LDSN-DRF is not required to be individually identified or listed.
FHR plans to utilize a combination of piping and instrumentation diagrams ("P&IDs") and process flow diagrams ("PFDs") to identify equipment that is utilizing an LDSN system. In addition, components that will continue to be subject to M21 inspections will be tagged and tracked through an electronic database system. The AMEL application page 16 indicates that, "FHR will maintain records to demonstrate which portions of the facility are complying with M21 CWP, OGI AWP, or the LDSN-DRF AMEL. This might include one or more of the following formats - listing by process unit or block; listing by process streams; electronic plot plans; annotated P&IDs, process flow diagrams, or block flow diagrams. Other documentation may be used instead, as long as the compliance approaches being used are clearly represented."

2. How does DOR affect PSL generation and the ability to see additional leaks while a component is on DOR?
The LDSN successfully identified some existing DORs via PSL generation during the pilot run. After a leaking component is located and designated as a DOR, the location information is entered into the LDSN algorithm via the mobile app, and the PSL box is narrowed down to a much smaller area where the DOR component is located. If any leak is located in the vicinity of the DOR, the LDSN, with the help of nearby sensors around the DOR and varying wind directions, can generate additional PSLs outside the downsized DOR area. This effectively isolates the DOR component from other components in the area so that new PSLs can be issued should new leaks be detected. 
Meanwhile, the algorithm continuously tracks detections caused by the DOR every day via anomaly detection. If there is a significant change in the detection level, either in signal strength or in detection frequency, a new PSL notification will be issued, prompting an investigation into the DOR or other components in the DOR area.

3. Page 11 of the request states, "based on that analysis and the use of complex proprietary algorithms, target areas of interest are generated." Can FHR provide additional information on how the algorithm generates the PSL box based on the inputs in order to further review this application?
The raw spatial and temporal gas and wind data, with a time stamp from each of the LDSN sensor nodes, are continually transmitted to the cloud 24x7, and the data are continuously processed in the background using Molex's data analytic algorithm, which was developed to identify the occurrence of fugitive emissions within a facility and estimate the most probable locations of the emission sources. Just as with gas chromatography (GC), the LDSN algorithm first performs baseline modeling (curve-fitting) on the time-resolved gas sensor output data, and then identifies excursions above the modeled baseline as detection peaks using a signal-to-noise ratio threshold of S/N >= 3. In other words, the baseline itself is not tied to leak detections; only detection peaks at least 3 times the noise level are considered detection events. Signal characteristics, including the amplitude, width, and centroid of each detection peak, are then calculated and recorded.
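The baseline-plus-threshold logic can be illustrated with a minimal sketch. This is not Molex's proprietary algorithm: the baseline is assumed to come from a separate curve-fitting step, and the robust noise estimate (median absolute deviation) is an assumption of this sketch.

```python
import statistics

def detect_peaks(readings, baseline, snr_threshold=3.0):
    """Flag excursions above a modeled baseline as detection peaks.

    Only excursions of at least `snr_threshold` times the noise level
    count as detection events, mirroring the S/N >= 3 criterion.
    """
    residuals = [r - b for r, b in zip(readings, baseline)]
    # Robust noise estimate (median absolute deviation), so that large
    # detection peaks do not inflate the estimated noise level.
    med = statistics.median(residuals)
    mad = statistics.median([abs(x - med) for x in residuals])
    noise = 1.4826 * mad  # MAD scaled to a standard-deviation equivalent
    return [i for i, x in enumerate(residuals) if x >= snr_threshold * noise]

# A flat baseline of 100 with one clear excursion at index 5:
readings = [100.1, 99.9, 100.2, 99.8, 100.0, 130.0, 100.1, 99.9]
print(detect_peaks(readings, [100.0] * 8))  # [5]
```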
It should be noted that in an outdoor open field, the baseline of a PID sensor is primarily determined by the sensor condition as well as atmospheric temperature and humidity. There is no evidence that small fluctuations of the baseline due to changes in environmental conditions have a significant impact on the sensitivity of the sensor or the performance of the sensor in detecting gas plumes. 
Concurrent with gas sensor signal processing, the algorithm looks at the wind direction at the time of each detection. If a detection occurs with a south wind, for example, the algorithm assumes a possible leak source located to the south of the sensor. When multiple sensors in one vicinity detect emissions under changing wind directions, there will be overlapping areas in the algorithm's leak location estimates. As illustrated in Figure R-2 below, Sensor 1 shows a detection peak when a south wind is present, so the algorithm estimates a potential leak area south of Sensor 1. Next, Sensor 2 shows a detection peak under a west wind while Sensor 1 shows no detection, so the algorithm estimates a potential leak area on the west side of Sensor 2. The area with the most overlaps corresponds to the most probable leak source location (PSL), much like the darkest color on a probability heat map.

            
                Figure R-2. Illustration of PSL Generation 

The algorithm continually estimates potential leak areas through collaboration among sensors in the LDSN network under varying wind conditions and superimposes all the nearby estimated areas to obtain the most probable potential leak source location. When the number of leak location estimation overlaps hits a preset threshold value within a given time window, e.g., 5% of the time over a rolling 72-hour window, a notification is issued with the most probable leak source location (PSL) in the form of a boxed area. 
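The wind-sector overlap idea in Figure R-2 can be illustrated with a toy grid model. The geometry here is hypothetical (the actual algorithm is proprietary and far more sophisticated): each detection implicates the upwind side of the detecting sensor, and cells implicated most often form the PSL area.

```python
from collections import Counter

GRID = [(x, y) for x in range(10) for y in range(10)]  # coarse plan view

def upwind_cells(sensor_xy, wind_from):
    """Cells on the upwind side of a detecting sensor for a given wind
    origin; a south wind implicates the area south of the sensor."""
    sx, sy = sensor_xy
    if wind_from == "S":
        return [(x, y) for (x, y) in GRID if y < sy]
    if wind_from == "W":
        return [(x, y) for (x, y) in GRID if x < sx]
    raise ValueError(f"unsupported wind direction: {wind_from}")

def most_probable_cells(detections, min_overlaps):
    """Superimpose the implicated areas from many detections; cells
    implicated at least `min_overlaps` times form the PSL area."""
    counts = Counter()
    for sensor_xy, wind_from in detections:
        counts.update(upwind_cells(sensor_xy, wind_from))
    return {cell for cell, n in counts.items() if n >= min_overlaps}

# Sensor 1 at (5, 5) detects under a south wind; Sensor 2 at (3, 5)
# detects under a west wind. The overlap is the block that is both
# south of Sensor 1 and west of Sensor 2.
psl = most_probable_cells([((5, 5), "S"), ((3, 5), "W")], min_overlaps=2)
print(len(psl))  # 15 cells: x in 0..2, y in 0..4
```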

Sensor Placement:
3. The determination of sensor location and density is not explained with enough detail. This "black box" is problematic because EPA is unable to determine if provided sensor density and location is appropriate. Can FHR provide more detail on sensor location and density?
There are two major steps in the sensor placement design. The first step is to determine individual sensor coverage, or sensor density, through a leak simulation analysis that demonstrates equivalent or better emission control with the LDSN-DRF solution when compared with Method 21 CWP. Molex's default coverage for each sensor is a 50 ft radius and ±20 ft of elevation, based on the results of our controlled gas release testing and on keeping costs at a level customers are willing to accept in adopting the sensor solution. 
The second step is to build a 3D digital model of the process unit in which the sensors are to be placed, and use the sensor density, or individual sensor detection zone (i.e., effective coverage from each sensor), from the first step to determine the minimal number of sensors required for full coverage of all LDAR components in the unit. In order to eliminate human error, Molex developed a sensor placement optimization algorithm which takes in STEP files from the 3D model and the individual sensor detection zone to generate a sensor placement plan. Imagine each sensor has a disc-shaped space around it, with the outer surfaces of the disc representing the boundaries of the sensor's detection zone. The placement algorithm then stacks these discs inside the 3D space of the digital plant to ensure full coverage of LDAR components without excessive overlap. Although it sounds like a mechanical procedure, the sensor placement algorithm utilizes mathematical techniques including a mixed global optimization strategy and stochastic linear programming to determine the optimal sensor locations and generate X, Y, Z coordinates for each sensor location in the process unit. 
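While the placement optimization itself is proprietary, verifying a candidate plan against the disc-shaped detection zones described above is straightforward. This sketch assumes the default 50 ft radius and ±20 ft elevation band; the coordinates are hypothetical.

```python
import math

COVER_RADIUS_FT = 50.0  # default horizontal coverage per sensor
COVER_ELEV_FT = 20.0    # +/- elevation band per sensor

def covered(component_xyz, sensor_locations):
    """True if a component lies inside at least one sensor's
    disc-shaped detection zone."""
    cx, cy, cz = component_xyz
    for sx, sy, sz in sensor_locations:
        if (math.hypot(cx - sx, cy - sy) <= COVER_RADIUS_FT
                and abs(cz - sz) <= COVER_ELEV_FT):
            return True
    return False

def uncovered(components, sensor_locations):
    """Components a candidate placement plan fails to cover."""
    return [c for c in components if not covered(c, sensor_locations)]

sensors = [(0.0, 0.0, 10.0), (80.0, 0.0, 10.0)]
parts = [(10.0, 10.0, 15.0),   # inside the first sensor's zone
         (40.0, 40.0, 10.0),   # horizontally out of reach of both
         (80.0, 5.0, 45.0)]    # above the second sensor's elevation band
print(uncovered(parts, sensors))  # [(40.0, 40.0, 10.0), (80.0, 5.0, 45.0)]
```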

4. How many sensors will FHR install in each of the 2 units included in the request? Would this be the minimum number of sensors and future adjustments would only increase density? The request should provide certainty that sensor density will not be decreased over time, thus creating a less effective system.
There will be a total of 10 sensors in the Meta-Xylene unit and 44 sensors in the Mid-Crude unit. This represents the minimum number of sensors for these two units. There is a possibility that more sensors could be added to the units should FHR determine a need to do so in the future.
As long as the LDSN-DRF AMEL is being used as the compliance method for these units, the number of sensors will not be decreased unless a unit is shut down completely or partially on a permanent basis, for which FHR will follow a normal MOC and reporting process with corresponding regulatory agencies. 

5. Is there a need to review the sensor selection and node placement on some recurring basis based on process changes?
FHR will utilize the existing management of change process that reviews all changes to process equipment and systems within the refinery. During this review, sensor selection and placement can be reviewed for adequacy and updated if needed. Through this review process, action items which need to be addressed prior to the change being implemented are assigned to the appropriate facility personnel. This is a continuous process for any changes throughout the refinery. 

6. The DTA and DTU values are determined based on historic performance of the applicable LDAR programs in the specific process units within the request. It is EPA's understanding that FHR has an interest in broad approval such that other units can incorporate the LDSN-DRF without need for additional application. EPA would like to understand the limitations to standardizing the DTA and DTU, regardless of the historic LDAR program data. That is, what limitations are there to designing the system for a DTU of 10,000 ppm?
There are a number of factors that determine the DTU of the LDSN: the type of gas sensors used in the system and the algorithms used to process sensor signals, sensor placement density, the gas or gas streams to be monitored in a particular process unit, and wind conditions. The gas sensors in the LDSN are some of the most sensitive currently available on the market, and the algorithms employed use standard techniques in analytical instrumentation. The LDSN design cannot change the gas streams to be monitored or the weather conditions, so that leaves sensor density to work with. As is understood, the higher the sensor density, the higher the detection sensitivity (represented by the DTA, or detection threshold average) of the LDSN. The relationship between leak source magnitude and sensor distance is described in Equation 1 of Appendix B, Page B-4 (copied below), which shows that the detectable leak size is proportional to the square of the distance to the sensor (D²):

      Sij = π D² Cij σw² / u, where:
      Sij = source magnitude estimate based on sensor j, peak i, g/sec
      Cij = peak concentration at sensor j, peak i, g/m³
      D = distance to source, m
      σw = variation of wind velocity over the peak interval, m/sec
      u = average wind velocity over the peak interval, m/sec
A DTU of 18,000 ppm corresponds to an effective detection radius of approximately 50 ft. To detect 10,000 ppm leaks, one would have to decrease the detection radius to about 37.3 ft, or increase the density of the LDSN by 25.4%, according to the above equation. This is physically attainable, but it would significantly increase the costs and make the solution much less appealing to industry. For a complex industrial setting, the cost to procure and install gas detectors of the type and quality required for these types of applications can range from $8,000 to more than $15,000 per sensor. This does not include the costs associated with maintenance, spares, data storage, and annual software subscriptions. Increasing the cost by 25% or more would present a significant barrier to industry adoption, thereby reducing the mutual benefit the solution can provide. For any process unit that is not currently required to conduct routine M21 connector monitoring as part of the applicable LDAR regulations, a standard M21 equivalency could be easily documented. The control efficiency improvements and emission reductions gained by going from AVO monitoring of connectors to an LDSN solution with continuous monitoring are significant. 
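Under the S ∝ D² scaling in Equation 1, the trade-off between DTU and effective detection radius can be checked directly. This is only a sketch of the scaling argument, using the reference values quoted above.

```python
import math

def detection_radius_ft(target_dtu_ppm, ref_dtu_ppm=18_000.0,
                        ref_radius_ft=50.0):
    """Scale the effective detection radius using S ~ D^2 from
    Equation 1: smaller detectable leaks require a shorter
    sensor-to-source distance, by the square root of the ratio."""
    return ref_radius_ft * math.sqrt(target_dtu_ppm / ref_dtu_ppm)

print(round(detection_radius_ft(10_000), 1))  # 37.3 ft for a 10,000 ppm DTU
```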

Sensor QA/QC and Maintenance:
7. Periodic Responsivity Test: Page 15 of the request states, "A successful bump test is shown by a `Pass' message from the mobile app, which indicates the response of the sensor exceeds 50% of the nominal value of the standard. A bump test may be repeated up to two additional times if the first test is not successful. If the sensor continues to fail, it will be recalibrated or replaced with a pre-calibrated sensor." This appears to say the sensor could be low by 50%. This level of acceptable error seems high given the capabilities of modern sensors. There is no description of how close the standard needs to be or any clear procedures for the bump test in terms of averaging time. Can FHR provide additional detail?

The purpose of conducting bump tests is to ensure the minimum sensitivity required for the operation and performance of the sensor system. Because the sensors are not used to measure gas concentrations but instead to detect the presence of gas plumes in the air under typical wind conditions, Molex does not expect these sensors to have better accuracy than sensors for safety alarms, where accuracy is mission critical. The 50% threshold limit is the same limit used in bump tests for safety alarms and systems. In the early stage of the CRADA project, the team used uncalibrated sensor prototypes and relied on visual inspections to identify detection peaks in the sensor output plot within hourly windows. Now this work is done through computer algorithms 24x7 and is understandably more accurate and reliable.
In a bump test, a sensor is challenged with a real gas by exposing it to a gas standard, e.g. 500 ppb isobutylene, for 30 seconds. Because the sensor has a fast response (typically T90<10s), the sensor output reaches a plateau during the 30 s gas exposure. The difference in the gas readings before and after the gas exposure is then compared to a preset threshold value to determine whether the sensor passes or fails the bump test. 
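The pass/fail comparison described above can be expressed as a short sketch. The threshold values come from the text (50% of a 500 ppb isobutylene standard); the function name is illustrative.

```python
STANDARD_PPB = 500.0   # nominal value of the isobutylene bump-test gas
PASS_FRACTION = 0.5    # sensor response must exceed 50% of the standard

def bump_test_passes(reading_before_ppb, plateau_reading_ppb):
    """Compare the rise in sensor output during the 30 s gas exposure
    (plateau minus pre-exposure reading) to the preset threshold."""
    response = plateau_reading_ppb - reading_before_ppb
    return response > PASS_FRACTION * STANDARD_PPB

print(bump_test_passes(20.0, 320.0))  # True  (300 ppb rise > 250 ppb)
print(bump_test_passes(20.0, 250.0))  # False (230 ppb rise)
```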

8. What is the timeframe for recalibrating or replacing a sensor that fails the responsivity test? What is the procedure for recalibration? 
When a sensor fails a bump test repeatedly, the sensor will be calibrated or replaced with a pre-calibrated sensor within 7 business days. Although pre-calibrated spares may be available on-site and often will be installed within 24 hours, the 7-business day timeframe recognizes that a new sensor may have to be shipped from the manufacturer.  
Recalibration is typically done in-house in a controlled environment and following the same process used in calibrating other gas detection equipment. Below is a brief flow chart of the calibration process.
                  
                Figure R-3. Sensor Calibration Process Flow 
      
9. System Operational Availability: Page 15 of the request states, "the annual average of LDSN system operational downtime will be 10% or less." Given that quarterly bump tests are the only required QA/QC, it is unclear why up to 876 hours per year of downtime would be allowed. This seems an excessive and unnecessary allowance for downtime given the capabilities of modern sensors. Is 10% downtime an average across sensors or is it an average per sensor? If it is an average across sensors, is there a maximum annual downtime per sensor?
The average downtime target across sensors for each process unit is a maximum of 10%. A sensor is considered to be down when it sends invalid data or when it does not send data at all. The downtime is calculated for each process unit for the past 12 calendar months on a rolling basis. The yearly downtime is calculated as follows: 
Downtime % = (total down time in hours of all sensors within a unit in the past 12 months) / (total time in hours for the same period × number of sensors in the unit) × 100 
For each individual sensor, the maximum downtime limit is 30% over the past 12 calendar months. 
The Molex sensor system has a built-in self-diagnostic feature that is constantly operating in the background. If a sensor fails to send data and the duration of the failure exceeds a preset threshold for example, an error message will be generated and sent to designated personnel for investigation and correction. This continuous automated "health check" of each sensor is designed to provide real-time indication of sensor health. Molex's mSyte software platform further tracks system downtime for each unit in order to meet the 90% data completeness objective.   
A minimum uptime of 90% is still highly protective when compared to M21 CWP where certain LDAR components can go months or even years between monitoring events.
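The rolling downtime calculation above reduces to a simple ratio at both the unit and sensor level. The per-sensor down-hours in this sketch are hypothetical.

```python
def unit_downtime_pct(sensor_down_hours, period_hours):
    """Unit-level downtime: total down-hours across all sensors divided
    by (hours in the period x number of sensors), as a percentage."""
    return 100.0 * sum(sensor_down_hours) / (period_hours * len(sensor_down_hours))

def sensor_downtime_pct(down_hours, period_hours):
    """Downtime for a single sensor over the same rolling period."""
    return 100.0 * down_hours / period_hours

PERIOD_HRS = 8760  # 12 months
down = [100.0, 0.0, 500.0, 276.0]  # hypothetical per-sensor down-hours

print(unit_downtime_pct(down, PERIOD_HRS))  # 2.5 (unit limit: 10%)
print(all(sensor_downtime_pct(h, PERIOD_HRS) <= 30.0 for h in down))  # True
```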
10. Can FHR provide the manufacturer and model number with more details on principle of measurement, sensitivity, response factors, and more specific initial calibration and ongoing QA/QC requirements along with specific acceptability criteria for initial and ongoing calibration checks for the specific sensors proposed for use?

 The leak detection sensor network (LDSN) system was developed by the mPACT2WO business division of Molex, LLC in Lisle, IL. The hardware consists of wireless transmitters, gas detection modules, and wind sensors. Equipped with a 10.6 eV PID of ppb resolution, the gas detection module (model GDM-VOC-03) is UL certified for use with a wireless transmitter (model AMG-WIFI-VOC) in Class 1 Div. 2 hazardous locations. Except for the wind sensors, all the hardware was manufactured, assembled, and tested by Molex's Sensorcon facility in Buffalo, NY. 
The gas transmitter includes a power supply module, a transmitter module, and an antenna. It is powered by 24VDC, a low-voltage power line readily available in the plant, and is connected to a gas detection module containing a PID sensor. The transmitter transmits data via Wi-Fi to local access points, which forward the data to the cloud via local wired routing. The development of the LDSN sensors and supporting server and network components, as well as the mSyte software and data analytics, represents a significant investment in engineering design, infrastructure development, and facility operations analysis toward an alternative LDAR solution.  
PID stands for photoionization detection. It is one of the most sensitive sensor technologies for VOC detection. A photoionization detector consists of electrodes, a drive circuit, and a high-energy UV lamp. As VOC compounds enter the detector, they are bombarded by high-energy UV photons and are ionized when they absorb the UV light, resulting in ejection of electrons from the molecules and formation of positively charged ions, which produce an electric current between the electrodes. The higher the concentration of the gas, the more ions are produced and the higher the electric current. The current is then amplified and converted to a digital output. PID has been used for decades as a post-separation detector in gas chromatography (GC) analysis. It is also one of the sensor technologies used in LDAR alongside flame ionization detection (FID). 

PID is broadband and non-selective and will ionize molecules with an ionization energy less than or equal to the lamp photon energy. Just as with FID and other sensor technologies, PIDs have varying sensitivities to different gas species. The PID is typically calibrated to isobutylene for its moderate sensitivity and low toxicity, and then a "response factor" (also called a "correction factor") is used to correct the sensor response. The true concentration of a particular gas is obtained by multiplying the raw sensor reading by the response factor of that gas: 

      Gas Concentration = Raw Sensor Reading x Response Factor (RF) 
For example, when the unit is calibrated with isobutylene (RF = 1) but used to measure n-Octane, which has a response factor of 1.8, and the reading is 10 ppm, then the true concentration of n-Octane is 10 ppm x 1.8 = 18 ppm. For PIDs, the lower the response factor, the higher the sensor sensitivity. When a gas such as butane has a high response factor, the sensitivity of the sensor toward that gas is very low. Each PID sensor manufacturer publishes a library of response factors to different gases for their sensor products. Below are a few examples of response factors for a 10.6 eV PID from a Honeywell publication.
 https://www.honeywellanalytics.com/~/media/honeywell-analytics/products/rigrat/sps_his_tn_106d_bw_rigrat.pdf?la=en
Table R-1. Response factors of typical gases for 10.6eV PID (data from Honeywell)
Chemical Name     Formula    Ionization Potential, eV    Response Factor
Acetone           C3H6O      9.71                        0.9
Benzene           C6H6       9.25                        0.53
Butadiene         C4H6       9.07                        0.85
Butane            C4H10      10.53                       67
Cyclohexane       C6H12      9.86                        1.4
Dimethylamine     C2H7N      8.23                        1.5
Ethylene          C2H4       10.51                       9
Formaldehyde      CH2O       10.87                       No response
Hexane, n-        C6H14      10.13                       4.3
Iso-butylene      C4H8       9.24                        1
Isopropanol       C3H8O      10.12                       6
Methane           CH4        12.51                       No response
Naphthalene       C10H8      8.13                        0.42
Octane, n-        C8H18      9.82                        1.8
Pentane           C5H12      10.22                       8.4
Propylene         C3H6       9.73                        1.4
Styrene           C8H8       8.43                        0.4
Toluene           C7H8       8.82                        0.5
Vinyl chloride    C2H3Cl     9.99                        2

In an industrial setting, gases are often present in mixtures. For a gas mixture whose composition is known, an overall response factor RFOverall can be calculated using the formula below, based on published response factors for each component in the mixture. 
RFOverall = 1 / (X1/F1 + X2/F2 + X3/F3 + ... + Xn/Fn)
Where X1 - Xn are the mole ratios of each species in the gas stream, and F1 - Fn are the response factors of each species. This equation allows for an assessment of the sensitivity of the sensors for any unit where the stream compositions are known.  
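Both corrections can be sketched together, using the n-Octane example from the text and response factors from Table R-1. The 50/50 benzene/toluene stream below is a hypothetical composition for illustration.

```python
def true_concentration(raw_reading_ppm, response_factor):
    """Correct an isobutylene-calibrated PID reading with the
    response factor of the measured gas."""
    return raw_reading_ppm * response_factor

def overall_response_factor(mole_fractions, response_factors):
    """RF_Overall = 1 / (X1/F1 + X2/F2 + ... + Xn/Fn) for a mixture
    of known composition."""
    return 1.0 / sum(x / f for x, f in zip(mole_fractions, response_factors))

# n-Octane example from the text: RF = 1.8, raw reading 10 ppm
print(true_concentration(10.0, 1.8))  # 18.0 ppm

# Hypothetical 50/50 benzene/toluene stream (RFs 0.53 and 0.5 per Table R-1)
rf_mix = overall_response_factor([0.5, 0.5], [0.53, 0.5])
print(round(rf_mix, 3))
```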
For unknown gases or gas mixtures, the PID cannot apply a proper factor or calculate a true concentration. In such cases the PID reading is deemed to be an "Isobutylene-equivalent" response, just as FID readings are deemed "methane equivalent" in Method 21. 
Figure R-4 outlines the major procedural steps that were followed in the construction and operation of the LDSN. As the manufacturer of the technology, Molex QA-screens all incoming components, produces the sensors, and performs comprehensive pre-deployment operation checks on all sensor nodes deployed in the field. Because the sensor output is linear, a two-point (0 and 500 ppb isobutylene) calibration procedure is used to prepare the sensors for use.  

      


                   Figure R-4. LDSN QA/QC Procedures   

When the LDSN nodes are installed in the process units, an in-field sensor functionality and calibration check called a "bump test" is performed. In brief, the operator attaches to the sensor inlet a special bump test fixture which is connected to a certified 500 ppb isobutylene gas cylinder. The operator uses the mSyte(TM) mobile device to verify the identity of the sensor node, starts the bump test procedure within the mSyte(TM) mobile app, and then releases the test gas for a short time period administered by the mobile app. The amplitude of the change in the sensor reading is compared to a preset threshold value. A successful bump test is shown by a "Pass" message on the mobile app, which indicates the response of the sensor exceeds 50% of the nominal value of the standard. If the first test is not successful, the test can be repeated up to two additional times; if the sensor continues to fail, it is recalibrated or replaced with a pre-calibrated sensor. When a sensor is replaced, the modular design allows the mSyte software to automatically update the sensor information for that particular sensor location. All bump test data are segregated from leak detection data in mSyte, and the test time, test gas, test results, and the operator who performed the test are automatically logged in the mSyte(TM) database as a QA record. 

 
                                       
             Figure R-5. LDSN Node Bump Test QA Check   

In addition to scheduled bump tests, the health of each sensor is continuously monitored for power outage, loss of data transmission, and sensor baseline levels. The current status of each sensor is available on mSyte(TM). Historical data is also logged in the database. Any failure or significant deviation from preset threshold values will result in a notification being sent to appropriate facility personnel. Failed sensors should be reset, repaired, or replaced within 7 business days; individual sensor downtime should not exceed 30%, and downtime across sensors in a unit should not exceed 10%, over the past 12 calendar months. 


11. Sensor Data: Page 15 of the request states, "there should not be a statistically significant number of Method 21 readings greater than 1.2 times the DTU found on LDAR applicable components within the LDSN boundary unless a PSL notification has already been generated and leak investigations are pending or ongoing. The factor, 1.2, is in recognition of the variability that occurs in the M21 measurement process." This is a good check on the system that could be built into an annual verification that the facility could be required to do. It should not be the state or EPA's responsibility to verify the system is operating properly through a Method 21 audit. EPA or the state could also come in to challenge the system in the same way. The calculation of what is "statistically significant" needs elaboration. What happens if a statistically significant number of components is found leaking at 1.2 times the DTU? Is the facility in violation? Are corrective measures required? What corrective measures? When will the Method 21 audit be repeated? Who will perform that audit?

The portion of the AMEL request referenced above was not intended to imply that it is an agency responsibility to verify the LDSN system is operating properly. It was instead provided in the QA/QC section as an example of how QC could still be performed on this innovative system with tools and techniques that are already available and currently in use by regulators.

The system is designed to generate PSLs and allow any leak of 18,000 ppm or greater to be detected anywhere in the covered area. If the LDSN-DRF has a 4σ performance level, then an audit method such as randomly selected M21 component monitoring should detect no M21 leaks > 21,600 ppm if 100 components were randomly monitored, and should detect two or fewer M21 leaks > 21,600 ppm if 300 components were randomly selected for M21 monitoring. Leaks detected above this threshold would be indicative of a system or program defect. The facility would conduct a root cause analysis to determine the cause of the defect and the appropriate corrective action. Similar to the approach used for RSR fenceline monitoring, corrective actions should begin within 5 days and be completed no later than 45 days after discovery. If corrective actions will take longer than 45 days to complete, a corrective action plan would be submitted to EPA. The facility would report the issue as a Title V deviation until corrective actions were completed.
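The audit acceptance numbers above can be sanity-checked with a simple binomial calculation. The sketch below is illustrative only: it assumes a 4σ performance level corresponds to roughly 6,210 defects per million opportunities (the conventional long-term Six Sigma figure), which is an assumption, not a value from the AMEL request.

```python
from math import comb

def binom_cdf(k, n, p):
    """Probability of observing at most k defective items in a random
    sample of n, when each item is defective with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Assumed defect rate for a 4-sigma process (long-term, with the
# conventional 1.5-sigma shift): ~6,210 defects per million.
P_DEFECT = 6210 / 1_000_000

for n, max_allowed in ((100, 0), (300, 2)):
    expected = n * P_DEFECT
    p_pass = binom_cdf(max_allowed, n, P_DEFECT)
    print(f"n={n}: expected leaking components = {expected:.2f}, "
          f"P(<= {max_allowed} found) = {p_pass:.3f}")
```

Under this assumed rate, a 100-component audit expects fewer than one leak above 21,600 ppm and a 300-component audit expects about two, consistent with the zero-in-100 and two-or-fewer-in-300 criteria described above.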

Recordkeeping and Reporting:

12. What specific information will FHR keep in the record related to the LDSN? Page 16 of the request states, "Where these rules specify reporting the number of each equipment type or the number of components monitored, a reasonable estimate may be used for areas covered by the LDSN-DRF practices." This statement implies that an accurate inventory of components will not be maintained. Why is the inventory required by the LDAR regulations not required under this request? How will leaks and repairs on individual components be tracked? How will Method 21 audits be conducted if an inventory is not maintained? How will an auditor know which components are in VOC or HAP service and which are not? The request needs to specify which paragraphs of each subpart still apply and which will be replaced by requirements in the AMEL. FHR will provide in a separate document a list of the recordkeeping and reporting citations and paragraphs which still apply, and those which will be replaced by requirements in the AMEL.
The M21 CWP requires a component-by-component listing with identification numbers in order to match scheduled M21 monitoring results back to a specific component. Individually tagging each component imposes a significant burden; under the CWP, Corpus Christi even used more expensive bar-coded tags to help ensure the technician was monitoring the correct component. Under the CWP, if a component was not tagged, it did not get monitored, and even if it was monitored later and shown not to be leaking, there was still noncompliance with the work practice standard. Under the LDSN-DRF, all components within the covered area will be monitored, so there will no longer be a need to link a monitoring technician and their monitoring results to each individual component in the facility. The elimination of the requirement to hang and maintain hundreds of thousands of tags in the facility is part of the value of an area monitoring system.
When a PSL investigation results in the detection of a leak, the technician will document the leak in the mobile app. This leak documentation includes the investigation start date, investigation results, component location description as well as location marked on map, M21 reading, identifying information and/or picture if needed, repair action taken, and M21 reading for effective repair confirmation when applicable. The 5/15-day repairs will be tracked in an electronic software system. 
If needed, Method 21 audits can be conducted on randomly selected components or in small-area "sweeps". However, an approach that still requires every LDAR component in the facility to be listed and tagged with an individual serial number negates one of the key advantages of an area monitoring system such as the LDSN. Components in VOC or HAP service will be identified on highlighted P&IDs or PFDs.
13. What specific information will FHR include in the semiannual reports related to the LDSN? The request needs to specify which paragraphs of each subpart still apply and which will be replaced by requirements in the AMEL. FHR will provide in a separate document a list of the citations and paragraphs which still apply, and those which will be replaced by requirements in the AMEL.

14. Emissions Reporting: How can accurate emissions reporting be accomplished if an accurate inventory of components is not maintained? 

For the portions of the facility complying with the M21 CWP, fugitive emission estimates will continue to follow the methodology specified in TCEQ's Emissions Inventory Guidelines (RG-360). For portions of the facility utilizing the LDSN-DRF, the estimated fugitive emissions will be based on an estimate of the number of fugitive emission components, the emission factors found in TCEQ's Air Permit Technical Guidance for Chemical Sources - Fugitive Guidance (APDG 6422), and LDSN-DRF control efficiencies that are appropriate for a continuous fugitive monitoring system (28LAER-level control percentages). The type and number of fugitive components will be based on current component counts that will then be increased or decreased as new projects trigger permitting actions and LDAR components are added or removed. The updated component counts will be based on project-related P&ID reviews. This overall method will be well within the measurement error associated with the emission factors, correlation equations, and control efficiencies developed and published in the Protocol for Equipment Leak Emission Estimates (the '95 Protocol).
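The estimation method described above (component counts multiplied by published emission factors and reduced by a control efficiency) can be sketched as follows. All counts, factors, and efficiencies below are hypothetical placeholders for illustration; they are not FHR's actual counts, the TCEQ/APDG 6422 factors, or the approved 28LAER percentages.

```python
HOURS_PER_YEAR = 8760
LB_PER_KG = 2.20462

# component type: (count, uncontrolled emission factor in kg/hr/source,
#                  fractional control efficiency) -- all hypothetical
components = {
    "valves (gas)":              (1000, 0.00597, 0.97),
    "connectors":                (5000, 0.00183, 0.97),
    "pump seals (light liquid)": (40, 0.0199, 0.85),
}

def annual_tons(count, ef_kg_hr, control_eff):
    """Annual fugitive emissions in short tons for one component type."""
    lb_per_year = count * ef_kg_hr * (1 - control_eff) * HOURS_PER_YEAR * LB_PER_KG
    return lb_per_year / 2000.0

total = sum(annual_tons(*v) for v in components.values())
print(f"Estimated fugitive VOC: {total:.2f} tons/yr")
```

Because the component counts enter the estimate only as multipliers, an estimate based on periodically updated P&ID counts stays well within the uncertainty of the emission factors themselves.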

--------------------------------------------------------------------------------

Equivalence Demonstration Specific Items:
1. Will FHR provide the LeakDAS database to OAQPS for verification of model results prior to release of the CRADA report? FHR has provided the requested information to OAQPS.
2. What is the actual DT band and DTU identified during the pilot testing? How does this compare to the model inputs and results?
      p. A22: Cluster repair threshold  -  3000 ppm
      p. B10: Although the nominal leak average detection threshold (DTA) is 5000 ppm, several smaller leaks (500-5000 ppm) are predicted to be detected by the Molex LDSN due to peculiarities of the leak-to-sensor distance distribution in an operating plant environment. All those "to be detected" leaks have a significant impact on the emission equivalency calculations.
During the FHR Corpus Christi pilot, a total of 39 leaks were found by the LDSN-DRF in the Mid-Crude unit over the 5-month test period, and 71 leaks were found by the LDSN-DRF in the Meta-Xylene unit over the 7-month test period. The minimum screening values (in ppm) for leaks detected by the LDSN-DRF system in the Meta-Xylene and Mid-Crude units are 564 and 582 ppm, respectively. The fact that the LDSN-DRF system was able to detect leaks for repair as low as ~564 ppm for Meta-Xylene and 582 ppm for Mid-Crude demonstrates the capability of the LDSN-DRF solution to detect many small leaks before they grow larger. As described in the AMEL request, the leak proximity effect, the additive effect of a cluster of small leaks, and opportunistic discovery during DRF are all potential factors contributing to the detection of these small leaks.
The upper limit of the DT band (DTU) represents the smallest leak(s) that could be detected by the sensor network at the farthest distance from a sensor. Table R-2 shows the distribution of the leak sizes found at the farthest distance from the nearest sensor in the pilot run. For the Meta-Xylene unit, the minimum and median leak sizes at the farthest distance are 1,071 ppm and 2,601 ppm, respectively. For the Mid-Crude unit, the minimum and median leak sizes at the farthest distance are 713 ppm and 5,470 ppm, respectively. Although the actual leaks found do not represent the DTU value, an examination of the median leaks found does suggest a higher DT band in the Mid-Crude unit. These results are not surprising considering the gas streams in the Mid-Crude unit have higher response factors, i.e., lower gas sensitivities, than in the Meta-Xylene unit.
Table R-2. Leaks found at farthest distances from nearest sensors in FHR pilot run.

| Unit        | Component Tag | Component Type | Approx. Distance to Nearest LDSN Node (ft) | M21 Screening Value (ppm) in DRF |
|-------------|---------------|----------------|--------------------------------------------|----------------------------------|
| Meta-Xylene | 252164.1      | Connector      | 57                                         | 3,861                            |
| Meta-Xylene | 252574        | Valve          | 51                                         | 1,071                            |
| Meta-Xylene | 252577        | Valve          | 51                                         | 1,073                            |
| Meta-Xylene | 252555        | Valve          | 51                                         | 1,342                            |
| Meta-Xylene | 252552        | Valve          | 51                                         | 24,102                           |
| Meta-Xylene | AVO 018727    | Connector      | 50                                         | 17,988                           |
| Mid-Crude   | Near 112144   | OEL            | 72                                         | 1,896                            |
| Mid-Crude   | 112082        | Valve          | 72                                         | 7,287                            |
| Mid-Crude   | 112011.1      | Connector      | 63                                         | 2,572                            |
| Mid-Crude   | 108940.1      | Connector      | 63                                         | 100,000                          |
| Mid-Crude   | 111961.1      | Connector      | 63                                         | 100,000                          |
| Mid-Crude   | 119746.1      | Connector      | 54                                         | 30,876                           |
| Mid-Crude   | 105803.1      | Connector      | 52                                         | 3,654                            |
| Mid-Crude   | 102078        | Pump           | 50                                         | 713                              |
Table R-3 shows a comparison of our Monte-Carlo simulation and pilot run results. The simulated DTUs required to reach equivalency (1.5 times the eDTA) are 15,000-37,500 ppm and 11,250-26,250 ppm for the Meta-Xylene and Mid-Crude units, respectively. The leaks found at the sensor node boundaries (50-70 ft from the closest sensor node) varied from 1,071-24,102 ppm for the Meta-Xylene unit and 713-100,000 ppm for the Mid-Crude unit. To minimize the interference of the cluster effect and opportunistic leak discovery in the analysis, we compared the median values of the boundary leaks, 2,601 ppm for Meta-Xylene and 5,470 ppm for Mid-Crude, to the DTU from the simulation. Those median values are significantly lower than the equivalency-required DTU values, even under the most conservative assumption, the DTA(tag) detection scenarios (15,000 ppm and 11,250 ppm). The wide leak distribution and low median leak size found in each unit during the pilot run demonstrate the ability of the LDSN-DRF to detect leaks under a DTU of 18,000 ppm and to achieve equivalency to Method 21 and a reduction in fugitive emissions.
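The boundary-leak medians quoted above can be reproduced directly from Table R-2. A minimal sketch (screening values transcribed from Table R-2; the eDTU thresholds are the most conservative DTA(tag) non-growth values from Table R-3):

```python
from statistics import median

# M21 screening values (ppm) for leaks found 50-70 ft from the nearest
# sensor node, transcribed from Table R-2
meta_xylene = [3861, 1071, 1073, 1342, 24102, 17988]
mid_crude = [1896, 7287, 2572, 100000, 100000, 30876, 3654, 713]

# Most conservative simulated equivalency thresholds: DTA(tag), non-growth
eDTU = {"Meta-Xylene": 15000, "Mid-Crude": 11250}

for unit, leaks in (("Meta-Xylene", meta_xylene), ("Mid-Crude", mid_crude)):
    m = median(leaks)
    verdict = "below" if m < eDTU[unit] else "above"
    print(f"{unit}: median boundary leak = {m} ppm, {verdict} eDTU of {eDTU[unit]} ppm")
```

The computed medians (2,601.5 and 5,470.5 ppm) match the rounded 2,601 and 5,470 ppm figures cited above and sit well below both units' most conservative eDTU thresholds.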



     Table R-3. Comparison of Monte-Carlo Simulation and Pilot Run Results

| Unit        | Leak Model    | eDTU of DTA(tag) scenario | eDTU of DTA(cluster) scenario | Min-Max Pilot Run | Median found 50-70 ft from nearest sensor node |
|-------------|---------------|---------------------------|-------------------------------|-------------------|------------------------------------------------|
| Meta-Xylene | Non-Growth    | 15,000 ppm                | 26,250 ppm                    | 564-100,000 ppm   | 2,601 ppm                                      |
| Meta-Xylene | Linear Growth | 18,750 ppm                | 37,500 ppm+                   |                   |                                                |
| Mid-Crude   | Non-Growth    | 11,250 ppm                | 18,750 ppm                    | 582-100,000 ppm   | 5,470 ppm                                      |
| Mid-Crude   | Linear Growth | 26,250 ppm                | 26,250 ppm                    |                   |                                                |

3. How will FHR prevent background creep, where the sensor baseline against which peaks are compared increases over time? Figure A-6 [of the AMEL request] shows a continued climb in emissions over time. Does this mean there is a point where the LDSN no longer performs equivalently or better? If so, what is the backstop?
One of the major requirements in sensor placement is that sensors be placed in locations with relatively unimpeded airflows so that the system can triangulate sensor detections to the leak source location. Wind and changes in wind direction do not allow gas concentration to build up at the sensor location. Thus, the sensor's baseline has little, if any, contribution from small leaks nearby. The baseline of the PID sensor at this ppb sensitivity level is primarily determined by internal condition of the sensor (e.g. contamination of the electrodes), and environmental conditions such as temperature and humidity. 
In the pilot run, four sensors were placed outside the battery limit (OSBL) of the plant for a sensor baseline study. A comparison of the sensor outputs shows that these clean-air OSBL sensors have neither lower baselines nor lower noise levels than sensors in the process units.
It should be noted that the LDSN is an emissions monitoring solution, rather than just a detector for large leaks. In the leak simulations, several detection scenarios and different growth models were considered to address the "background creep" concern, and the simulation results indicate that the LDSN-DRF solution design will prevent it. For instance, the Monte-Carlo simulation results (Figures R-6 and R-7, which correspond to Figures A-14 and A-15 of the AMEL application and are included here for reference) show the cumulative fugitive emission plots of the Meta-Xylene and Mid-Crude units. In the plots, the difference between the DTA(tag) and M21 simulated cumulative emissions either remains unchanged or widens over the simulation period. This indicates that the growth rate of cumulative M21 CWP emissions is equal to or greater than the growth rate of the DTA(tag) cumulative emissions. Based on the Monte Carlo simulation data, Molex fit the cumulative emissions (every 180 days) to linear equations and found that, at the eDTA level, at least 76% of the growth rates (slopes) of the DTA(tag) scenarios are lower than the growth rate of simulated M21.

The more realistic detection models, DT(tag) and DT(cluster), in which the detection distance is considered, show a growing difference between the M21 and the DT(cluster) cumulative emissions over the simulation period. This is a clear indication that the emission reductions resulting from the shorter duration of large leaks outweigh the slow accumulation of the small leaks (some of which will be fixed during cluster leak searches). In addition, the 5-month pilot run results show that the DT band ranges from 564-100,000 ppm (pegged value) and 582-100,000 ppm (pegged value) for the Meta-Xylene and Mid-Crude units, respectively. The wide DT band of actual leaks found during the pilot run confirms our assumption that the LDSN-DRF, which combines AI-powered potential leak source location (PSL) information with an efficient leak screening/searching approach, reduces the emissions from both large (>10,000 ppm) and small (500-10,000 ppm) leaks. From these pieces of evidence, Molex is confident that the LDSN-DRF solution will detect both large and small leaks and prevent background creep.

                                       
Figure R-6. Monte Carlo Simulated Cumulative Fugitive Emissions of the Meta-Xylene Unit (2014 - 2018): Linear Growth Model (Left) and Non-growth Model (Right)

                                       
Figure R-7. Monte Carlo Simulated Cumulative Fugitive Emissions of the Mid-Crude Unit (2016 - 2018): Linear Growth Model (Left) and Non-growth Model (Right)


Table R-4. Percentage of the slower emission growth rate with DTA(tag) detection scenarios at eDTA level (vs. scheduled M21)

| Unit        | Linear-growth model | Non-growth model |
|-------------|---------------------|------------------|
| Meta-Xylene | 96%                 | 86%              |
| Mid-Crude   | 76%                 | 88%              |


4. What sensor density was modeled in the simulations? How does it compare to the sensor density that will be used in the actual process units?
In the Monte Carlo DT band simulations, Molex used a detection radius of 50 ft. Approximately the same detection coverage radius was used in the actual process units (Mid-Crude and Meta-Xylene). Due to manual placement work in the early days of the pilot, the Mid-Crude unit has a couple of spots with slightly lower sensor density. We have mapped out those spots and plan to add 6 more sensors to this unit to provide an approximately 50 ft detection radius for all sensors before implementation of an approved AMEL.
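As a rough illustration of how a 50 ft detection radius translates into sensor density, the sketch below assumes circular detection zones arranged in an ideal hexagonal covering pattern; the unit footprint used is a hypothetical placeholder, not an actual FHR unit dimension.

```python
import math

def sensors_needed(area_sqft, detection_radius_ft=50):
    """Minimum sensors to cover area_sqft with circles of the given
    radius, assuming an ideal hexagonal covering: each circle's
    effective (non-overlapping) coverage is its inscribed hexagon,
    with area (3*sqrt(3)/2) * r^2."""
    effective_area = (3 * math.sqrt(3) / 2) * detection_radius_ft ** 2
    return math.ceil(area_sqft / effective_area)

# Hypothetical 500 ft x 400 ft process unit footprint
print(sensors_needed(500 * 400))
```

Real placements deviate from this ideal because of obstructions and airflow constraints, which is consistent with the pilot identifying a few spots that need additional sensors.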

_____________________________________________________________________________________

