                                                                               
                                                                               



                                 EDOCKET NO:  
                             EPA-HQ-OAR-2014-0609
                                       
                                       
                    REVIEW OF WIPP PERFORMANCE ASSESSMENT 
                      COMPUTER CODE MIGRATION ACTIVITIES
                                       











                    U. S. Environmental Protection Agency 
                      Office of Radiation and Indoor Air 
                 Center for Waste Management and Regulations 
                             Washington, DC 20460
                                       
                                       
                                       
                                   July 2015
                                       
                                       
                                       
                                       
                                       
                                       
                                       
                                       
Table of Contents
Preface	7
Executive Summary	8
Background	10
Software Qualification	13
Post CCA SQA Upgrades and Documentation	18
Overview of DOE's Computer Code Migration Activities	20
Current Status	20
DOE'S Test Methodology	22
References	23
The Agency's Review Approach	25
Summary of Individual Computer Code Migration	26
ALGEBRACDB	26
1.1.1	Introduction	26
1.1.2	Test Methodology	27
1.1.3	Test Results	27
1.1.4	The Agency's Conclusions	28
1.1.5	References	28
BLOTCDB	29
1.1.6	Introduction	29
1.1.7	Test Methodology	30
1.1.8	Test Results	30
1.1.9	The Agency's Conclusions	30
1.1.10	References	31
BRAGFLO	32
1.1.11	Introduction	32
1.1.12	Test Methodology	34
1.1.13	Test Results	35
1.1.14	The Agency's Conclusions	36
1.1.15	References	36
CCDFGF	38
1.1.16	Introduction	38
1.1.17	Test Methodology	40
1.1.18	Test Results	42
1.1.19	The Agency's Conclusions	42
1.1.20	References	43
CCDFSUM	45
1.1.21	Introduction	45
1.1.22	Test Methodology	45
1.1.23	Test Results	46
1.1.24	The Agency's Conclusions	46
1.1.25	References	46
CUTTINGS_S	47
1.1.26	Introduction	47
1.1.27	Test Methodology	49
1.1.28	Test Results	50
1.1.29	The Agency's Conclusions	50
1.1.30	References	51
DRSPALL	52
1.1.31	Introduction	53
1.1.32	Test Methodology	54
1.1.33	Test Results	55
1.1.34	The Agency's Conclusions	57
1.1.35	References	57
DTRKMF	58
1.1.36	Introduction	58
1.1.37	Test Methodology	59
1.1.38	Test Results	59
1.1.39	The Agency's Conclusions	60
1.1.40	References	60
EPAUNI	61
1.1.41	Introduction	61
1.1.42	Test Methodology	62
1.1.43	Test Results	63
1.1.44	The Agency's Conclusions	63
1.1.45	References	63
EQ3/6		64
1.1.46	Introduction	64
1.1.47	Test Methodology	65
1.1.48	Test Results	65
1.1.49	The Agency's Conclusion	66
1.1.50	References	66
FMT		66
1.1.51	Introduction	67
1.1.52	Test Methodology	68
1.1.53	Test Results	68
1.1.54	The Agency's Conclusions	68
1.1.55	References	69
GENMESH	70
1.1.56	Introduction	70
1.1.57	Test Methodology	71
1.1.58	Test Results	71
1.1.59	The Agency's Conclusions	71
1.1.60	References	72
GROPECDB	73
1.1.61	Introduction	73
1.1.62	Test Methodology	74
1.1.63	Test Results	74
1.1.64	The Agency's Conclusions	74
1.1.65	References	74
ICSET		75
1.1.66	Introduction	75
1.1.67	Test Methodology	76
1.1.68	Test Results	76
1.1.69	The Agency's Conclusions	77
1.1.70	References	77
JAS3D		78
1.1.71	Introduction	78
1.1.72	Test Methodology	79
1.1.73	Test Results	79
1.1.74	The Agency's Conclusion	79
1.1.75	References	79
LHS		80
1.1.76	Introduction	80
1.1.77	Test Methodology	81
1.1.78	Test Results	82
1.1.79	The Agency's Conclusions	83
1.1.80	References	83
MATSET	84
1.1.81	Introduction	84
1.1.82	Test Methodology	85
1.1.83	Test Results	86
1.1.84	The Agency's Conclusions	86
1.1.85	References	87
MODFLOW2000	88
1.1.86	Introduction	88
1.1.87	Test Methodology	89
1.1.88	Test Results	91
1.1.89	The Agency's Conclusions	91
1.1.90	References	91
MWT3D	92
1.1.91	Introduction	92
1.1.92	Test Methodology	93
1.1.93	Test Results	93
1.1.94	The Agency's Conclusions	93
1.1.95	References	93
NONLIN	94
1.1.96	Introduction	94
1.1.97	Test Methodology	94
1.1.98	Test Results	95
1.1.99	The Agency's Conclusions	95
1.1.100	References	95
NUTS		95
1.1.101	Introduction	96
1.1.102	Test Methodology	97
1.1.103	Test Results	97
1.1.104	The Agency's Conclusions	98
1.1.105	References	98
PANEL	100
1.1.106	Introduction	100
1.1.107	Test Methodology	101
1.1.108	Test Results	102
1.1.109	The Agency's Conclusions	102
1.1.110	References	102
PEST		104
1.1.111	Introduction	104
1.1.112	Test Methodology	107
1.1.113	Test Results	108
1.1.114	The Agency's Conclusions	108
1.1.115	References	109
POSTBRAG	110
1.1.116	Introduction	110
1.1.117	Test Methodology	111
1.1.118	Test Results	111
1.1.119	The Agency's Conclusions	111
1.1.120	References	111
POSTLHS	112
1.1.121	Introduction	112
1.1.122	Test Methodology	113
1.1.123	Test Results	114
1.1.124	The Agency's Conclusions	114
1.1.125	References	114
POSTSECOTP2D	115
1.1.126	Introduction	116
1.1.127	Test Methodology	117
1.1.128	Test Results	117
1.1.129	The Agency's Conclusions	118
1.1.130	References	118
PREBRAG	119
1.1.131	Introduction	119
1.1.132	Test Methodology	121
1.1.133	Test Results	121
1.1.134	The Agency's Conclusions	121
1.1.135	References	122
PRECCDFGF	123
1.1.136	Introduction	123
1.1.137	Test Methodology	125
1.1.138	Test Results	125
1.1.139	The Agency's Conclusions	125
1.1.140	References	125
PRELHS	126
1.1.141	Introduction	127
1.1.142	Test Methodology	128
1.1.143	Test Results	128
1.1.144	The Agency's Conclusions	129
1.1.145	References	129
PRESECOTP2D	130
1.1.146	Introduction	131
1.1.147	Test Methodology	132
1.1.148	Test Results	132
1.1.149	The Agency's Conclusions	132
1.1.150	References	132
RELATE	133
1.1.151	Introduction	134
1.1.152	Test Methodology	134
1.1.153	Test Results	135
1.1.154	The Agency's Conclusions	135
1.1.155	References	135
SECOTP2D	136
1.1.156	Introduction	136
1.1.157	Test Methodology	137
1.1.158	Test Results	137
1.1.159	The Agency's Conclusions	138
1.1.160	References	138
STEPWISE	139
1.1.161	Introduction	139
1.1.162	Test Methodology	140
1.1.163	Test Results	140
1.1.164	The Agency's Conclusions	141
1.1.165	References	141
SUMMARIZE	141
1.1.166	Introduction	142
1.1.167	Test Methodology	144
1.1.168	Test Results	146
1.1.169	The Agency's Conclusions	146
1.1.170	References	146
LIBRARIES	148
1.1.171	CAMCON_LIB	149
1.1.172	CAMDAT_LIB	152
1.1.173	CAMSUPES_LIB	154
1.1.174	PLT_LIB	156
1.1.175	SDBREAD_LIB	158
Codes Used to Support the Inventory Report	161
1.1.176	ORIGEN2	161
1.1.177	TransOrigen	171
1.1.178	Comprehensive Inventory Database	175
Summary and Conclusions	182


Preface 
The U.S. Department of Energy (DOE) is required to submit a Compliance Recertification Application (CRA) to the U.S. Environmental Protection Agency (EPA) for the Waste Isolation Pilot Plant (WIPP) facility every five years, including an updated assessment of future WIPP performance.  During EPA's review of DOE's CRA-2014 performance assessment (PA), events associated with the February 2014 repository fire and radionuclide release resulted in the closure of portions of the underground facility.  Because of this closure, certain parts of the underground facility could not be accessed for ground control.  Panel 9 may be abandoned, along with plans to install panel closures in Panels 3, 4, 5 and 6.  
Because the CRA performance assessments are predictions of post-closure repository performance and the EPA knows there will be modifications to the current repository design, modifying the CRA-2014 PA at this time to incorporate alternative parameter values would not make predictions of repository post-closure performance more realistic.  Consequently, the EPA adopted the CRA-2014 PA as originally submitted by DOE as the baseline, rather than have DOE conduct a revised PA baseline calculation (PABC).  In lieu of requesting a PABC-2014, the EPA requested that DOE conduct a set of sensitivity studies to address some of the significant technical concerns arising from the EPA's CRA-2014 review.  The inputs to these sensitivity studies broadly address many of the EPA's technical concerns that could potentially impact long-term repository performance.  The Agency has reviewed the results of these studies and determined that there exists an adequate level of confidence -- that is, a reasonable expectation -- that the repository will continue to comply with EPA regulations.  

This report describes the results of the U.S. Environmental Protection Agency's (EPA or the Agency) review of Performance Assessment (PA) computer code development and testing activities performed by the U.S. Department of Energy (DOE or the Department) in support of their ongoing PA of the Waste Isolation Pilot Plant (WIPP).  The ability of the WIPP facility to meet the Agency's certification requirements was demonstrated, in part, through the use of a series of PA computer codes that are documented in the Department's Compliance Certification Application (CCA).



Executive Summary	

The Waste Isolation Pilot Plant (WIPP), located in southeastern New Mexico, is an underground facility designed for the permanent disposal of Transuranic (TRU) defense-related waste.  The U.S. Department of Energy (the Department or DOE) operates the WIPP repository under the regulatory oversight of the U.S. Environmental Protection Agency (the Agency or EPA).  The ability of the DOE WIPP facility to continue to meet the certification requirements of the EPA is demonstrated in part through the use of a series of performance assessment (PA) computer codes. DOE must demonstrate on an ongoing basis that PA computer software is in compliance with regulations outlined in 40 CFR 194.23  -  Models and Computer Codes.  Since the Agency's certification of the DOE WIPP Compliance Certification Application (CCA), DOE has added computer hardware and upgraded the computer software.  In order to maintain compliance with Sections 194.22 and 194.23, DOE is required to conduct testing on the computer codes to ensure that they still function properly on new hardware and software whenever changes are made.  The Agency reviewed the testing performed by DOE to demonstrate continued compliance with the addition of computer hardware and upgraded software. 

For the CCA, performance analyses were run on the DEC Alpha Cluster using the OpenVMS operating system, Version 6.1.  In 1999, the operating system was updated from OpenVMS 6.1 to 7.1, and a year later from OpenVMS 7.1 to 7.2.  In the summer of 2001, the FORTRAN compiler available on the cluster was upgraded to Version 7.4A.  In August 2002, the operating system was upgraded to OpenVMS 7.3-1. 

In addition to software upgrades, DOE has made hardware changes.  The DEC Alpha Cluster was the main platform for performance analyses for the WIPP during the CCA.  The cluster consisted of 11 DEC Alpha 2100 computers with 44 processors.  In September 2001, a single Compaq Alpha ES40 computer was added to the WIPP PA hardware cluster.  In August 2002, the DEC Alphas were replaced by a Compaq ES40.  

In 2003, two new hardware systems were added to the PA computational cluster -- the Compaq ES45 and the Compaq Alpha 8400.  In September 2004, the Agency approved 38 (of 39) computer codes and 3 libraries for use on the Compaq ES45 and 8400 using OpenVMS 7.3-1.

In 2006, the DOE procured four Compaq ES47 machines to add to the PA computing resources of two Compaq ES40 and two Compaq ES45 machines.  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2.  Because of these changes in the operating system and the addition of a new computing platform, DOE has conducted regression testing for each PA software code to ensure that each code continues to function correctly.  The regression test methodology uses the VMS DIFFERENCE command to compare output from the latest version of the computer code and/or operating system to the earlier versions.  The regression test cases are outlined in the Validation Document (VD) and are run using the WIPP PA run control system.  The scripts, script input files, and other files related to validation testing of the code reside in the Configuration Management System (CMS) library.  All test inputs are fetched at run time by the scripts, and test outputs/results and run logs are automatically stored by the scripts in the CMS library.  The VMS DIFFERENCE command compares two files and identifies records that are different in the two files.  Records with differences are grouped into sections; a section begins with a record that is different between the two files, and ends with the first subsequent record where the two files agree.  In the output of the DIFFERENCE command, sections are separated by rows of 12 asterisks; inside a section, the records from the two files are separated by a row of 6 asterisks.  At the end of the DIFFERENCE output, the utility reports the number of sections and the number of records in which differences were found.  Differences that are limited to code run dates and times, file and directory names, user names, platform names and execution statistics are acceptable.  Differences involving numerical output require analysis to determine the origin of the differences and whether the differences affect the code's performance.  Numerical differences may be determined to be acceptable based on the analysis of each difference.  If all differences are found to be acceptable, it follows that the output of the newer code version and/or operating system meets the acceptance criteria specified in the Requirements Document (RD/VVP), and the code will be considered to be validated on the platforms and operating systems that are tested.   
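
The following sketch illustrates how a listing in the DIFFERENCE format described above could be tallied.  It is illustrative only and is not the tooling DOE used; the file name is hypothetical, and the 12-asterisk section separator follows the description in the preceding paragraph.

    # Illustrative sketch only (not DOE's tooling): count the sections of
    # differing records in a DIFFERENCE-style listing, where sections are
    # separated by rows of 12 asterisks as described above.
    # The file name "test_case.dif" is hypothetical.

    SECTION_SEPARATOR = "*" * 12

    def count_difference_sections(path):
        """Return the number of difference sections found in the listing."""
        with open(path) as listing:
            return sum(1 for line in listing if line.strip() == SECTION_SEPARATOR)

    if __name__ == "__main__":
        print(count_difference_sections("test_case.dif"), "sections with differences")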

In 2015, the Agency updated this report to document its review of two significant changes to the WIPP performance assessment since the last WIPP recertification in 2009.  First, several codes were changed to support the PA calculations performed on the VMS computer platform for the 2014 recertification (CRA14) submission.  Second, the WIPP PA system was migrated from the VMS platform to the Sun Solaris Blade platform, which will be used in future compliance calculations.

Most PA codes were not modified or re-qualified to support CRA14.  Six PA codes were modified and qualified for use in CRA14; the EPA's review findings for these codes are discussed in this report. 

In 2012 and 2013, DOE had SNL migrate the WIPP performance assessment from the aging OpenVMS Alpha Cluster to a Sun Solaris Blade Server running Intel processors [3].  AP-162 [3] documents the requirements and migration process.  SNL notes, "The primary motivation for the PA migration is diminishing technical and hardware support for the OpenVMS and the Alpha servers." Migration to the Solaris Blade includes the PA codes, support libraries, and the PA parameter database. SNL's summary report [4] describes the results of the migration to the Solaris Blade.

This report presents the Agency's findings with respect to the qualification of six modified codes on the VMS platform using OpenVMS 8.2 and the migration and qualification of the WIPP PA computer codes running on the Solaris Blade machines with SunOS 5.11 (Table 3.1-1).  After conducting a review, the Agency concludes that the versions of the computer codes specified in Table 3.1-1, with the exception of EQ3/6, are approved for use in PA compliance calculations on the VMS platform using OpenVMS 8.2 for the 2014 CRA PA (table column "2014 CRA") and on the Solaris Blade machines with SunOS 5.11 for future WIPP PAs (table column "SOLARIS MIGRATION").
Background

This report describes the results of the U.S. Environmental Protection Agency's (EPA or the Agency) review of Performance Assessment (PA) computer code development and testing activities performed by the U.S. Department of Energy (DOE or the Department) in support of their ongoing PA of the Waste Isolation Pilot Plant (WIPP).  The ability of the WIPP facility to meet the Agency's certification requirements was demonstrated, in part, through the use of a series of PA computer codes that are documented in the Department's Compliance Certification Application (CCA).

DOE conducted a PA to show compliance with the Agency's disposal regulations as part of the WIPP certification process.  DOE must demonstrate on an ongoing basis that PA computer software is in compliance with regulations outlined in §194.22  -  Quality Assurance and §194.23  -  Models and Computer Codes.  These regulations are presented in Appendix A.  Examples of software that must meet the compliance criteria are as follows:

 Scientific or engineering software used to assess the performance of a site 
 Scientific or engineering software used to analyze data for, or produce input (parameters) to, a PA calculation
 Software that is used in managing information or augmenting mission-essential decisions
 Software used to collect data (e.g., far-field, near-field, engineered barriers) 

DOE executes the PA conceptual models through software applications with parameter value inputs on an infrastructure composed of computers and operating systems that must be periodically updated.  For the CCA, PA analyses were run on the DEC Alpha Cluster using the OpenVMS operating system, Version 6.1.  In 1999, the operating system was updated from OpenVMS 6.1 to 7.1, and a year later from OpenVMS 7.1 to 7.2.  In the summer of 2001, the FORTRAN compiler available on the cluster was upgraded to Version 7.4A.  In August 2002, the operating system was upgraded to OpenVMS 7.3-1.  In 2006, SNL updated the operating systems and installed the OpenVMS 8.2 operating system.  In 2012, SNL modified six performance assessment codes to support the 2014 recertification.  In 2013, SNL migrated the entire WIPP PA to the Sun Solaris Blade Server hardware running the SunOS 5.11 operating system.  In 2014, codes residing on other computers, such as the Linux machine "Alice", were also migrated to the Solaris Blade.

In addition to software upgrades, DOE has made hardware changes.  The DEC Alpha Cluster was the main platform for performance analyses for the WIPP during the CCA.  The cluster consisted of 11 DEC Alpha 2100 computers with 44 processors.  In September 2001, a single Compaq Alpha ES40 computer was added to the WIPP PA hardware cluster.  In August 2002, the DEC Alphas were replaced by a Compaq ES40.  In 2003, two new hardware systems were added to the PA computational cluster: the Compaq ES45 and the Compaq Alpha 8400.  For the 2004 Compliance Recertification Application (CRA) PA, DOE used OpenVMS 7.3-1 as the operating system in conjunction with the Compaq ES40, ES45, and 8400.  In June 2003, the EPA presented their findings with respect to their review of 27 codes and 3 libraries that were migrated to the Compaq ES40.  The Agency concluded that all of the 27 codes and 3 libraries were migrated successfully to the Compaq ES40 with OpenVMS 7.3-1 and were approved for use in compliance calculations for the WIPP PA.
With respect to the Compaq ES45 and 8400 hardware systems, most of the computer codes have undergone regression testing by DOE to ensure that each code will function correctly on the ES45 and 8400 platforms running OpenVMS 7.3-1.  In March 2004 (Docket A-98-49, II-B3-70), the Agency concluded that 36 (of 39) computer codes and 3 libraries migrated to the Compaq ES45 and 8400 using OpenVMS 7.3-1 were acceptable and were approved for use in compliance calculations for the WIPP PA.  In September 2004 (Docket A-98-49, II-B1-7), the Agency published their findings with respect to the qualification of the computer codes on the Compaq ES45 and the Compaq Alpha 8400.  At that time, the Agency concluded that 38 (of 39) computer codes and 3 libraries migrated to the Compaq ES45 and 8400 using OpenVMS 7.3-1 were acceptable and approved for use in compliance calculations for the WIPP PA.  SANTOS was the only code not approved for PA calculations.  A detailed analysis of SANTOS completed in 2005 concluded that the approximations of room closure and waste compaction developed by the SANTOS model are adequate for use in WIPP PA (Docket A-98-49, II-B1-17).  In 2005, DOE made revisions to eight of the computer codes, including LHS, POSTLHS, CUTTINGS, DRSPALL, PANEL, SUMMARIZE, PRECCDFGF, and CCDFGF.  In March 2006 (Docket A-98-49, II-B1-8), the Agency concluded that these codes were qualified on the Compaq ES40 and ES45. 

In 2006, the DOE procured four Compaq ES47 machines to add to the PA computing resources of two Compaq ES40 and two Compaq ES45 machines.  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2.  Because of these changes in the operating system and the addition of a new computing platform, DOE has conducted regression testing for each PA software code to ensure that each code continues to function correctly as required.  This report documents the results of the Agency's assessment performed to determine whether the observed DOE PA code activities conform with the compliance criteria requirements for §194.22 and §194.23.  Specifically, the Agency's evaluation addresses whether these changes have materially affected the Agency's original determination that the computer codes were adequate to support the certification decision.  

In the 2012 and 2013 timeframe, SNL performed PA calculations to support the 2014 recertification.  All but six of the performance assessment codes remained the same for these calculations, and the hardware framework described above did not change.  The six codes modified and re-qualified for the 2014 CRA are BRAGFLO, CCDFGF, PRECCDFGF, MATSET, PRELHS, and PREBRAG.  BRAGFLO and PREBRAG were modified to include water balance calculations in support of case CRA14-0 in the 2014 CRA.  CCDFGF was changed in 2010 to investigate the impact of lack of mass balance in the mobilization of radionuclides.  MATSET, PRELHS, and PRECCDFGF were modified to use the new parameter database software, MySQL.

In 2012 and 2013, SNL also systematically migrated the PA codes, code libraries, and parameter database to the Solaris Blade Server [3].  SNL could not simply use regression testing to verify PA code performance as done previously, because VMS uses a different floating point format than the Intel-based Solaris platform [4].  Because of the difference in hardware architecture between the VMS platforms and the Sun Solaris, SNL had to use a combination of regression testing (comparing previous results to new results calculated on the new hardware and operating system) and code validation testing using functional requirements [3, 4].  PA codes, such as BRAGFLO, that require numerical convergence calculations were expected to show different results in some cases and may not be conducive to the simple result comparisons used in regression testing [4].   

This report presents the Agency's findings with respect to the six codes qualified for the 2014 CRA calculation and the qualification of the most recent versions of the PA computer codes running on the Sun Solaris Blade machines with SunOS 5.11 (Table 3.1-1).  After conducting reviews, the Agency concludes that the versions of the computer codes specified in Table 3.1-1, with the exception of EQ3/6, are approved for use in PA compliance calculations.  These include the six codes for use in the 2014 CRA running on the Compaq cluster of computers with OpenVMS 8.2 and the migrated PA codes running on the Solaris Blade machines with SunOS 5.11 for future WIPP PAs.

Based on the results of the Type 3 problems (Test Cases 4, 10, 11, 12, 13, and 14), EPA cannot approve the use of EQ3/6 Version 8.0a for WIPP PA calculations (see EQ3/6 section below).  EPA further suggests that EQ3/6 Version 8.0a be qualified for the WIPP PA using its own requirements and acceptance criteria specific to the code's usage to support performance assessment calculations.

This report is divided into six sections.  Following this Introduction (Section 1), a Background section (Section 2) presents the approach that DOE has taken to meet the compliance criteria requirements for the computer codes.  The Background section is followed by a summary of DOE's code migration approach and conclusions (Section 3).  Section 4 presents the general approach that the Agency followed to review DOE's code migration activities.  Section 5 summarizes each of the computer codes that were reviewed by the Agency.  Section 6 provides the summary and conclusions.  References are provided at the end of each section.

Software Qualification

To demonstrate that computer software is in compliance with disposal regulations outlined in §194.22, the DOE established a life-cycle management process for software used to support their PA.  Their qualification approach for the software follows the life-cycle phases outlined in ASME NQA-2a-1990 addenda, part 2.7, as follows:

 Planning
 Requirements
 Design
 Implementation
 Validation
 Installation and Checkout
 Maintenance
 Retirement

Life-cycle phases are implemented using an iterative or sequential approach following the process flowchart below (Figure 1).  Each phase and its associated documentation shown in Figure 1 are discussed in the following sections.

Planning Phase

A Software QA Plan (SQAP) is produced during the planning phase for new software development (Figure 1).  Software under configuration control and developed within the scope of these QA requirements does not require a stand-alone SQAP.  Following the development of the SQAP, all specified requirements for each phase must be met, and the intent of the requirements must not be subverted.  SQAPs may be written for an individual code or a set of codes.


Figure 1.	Major Components of DOE's Software Development Process

Requirements Phase

The document produced during the requirements phase is the Requirements Document and Verification and Validation Plan (RD/VVP) (Figure 1), which is a single document identifying the computational requirements of the code (e.g., SECOFL2D must be able to simulate ground-water flow under steady-state conditions).  The RD/VVP also describes how the code will be tested to ensure that those requirements are satisfied (e.g., by listing the code functions that will be tested using test cases).

Design Phase 

The Design Document (DD), produced during the design phase, provides the following information (as applicable): 
 
 Theoretical basis (physical process represented)
 Mathematical model (numerical model)
 Control flow and logic
 Data structures
 Functionalities and interfaces of objects, components, functions, and subroutines
 Ranges for data inputs and outputs, in a manner that can be implemented in software
 
More than one DD may be created during software development.  For example, a high-level design may be developed to match the code design to the requirements and define the overall architecture of the code (define modules and subroutines and their purpose, data structures, subroutine-call hierarchy, code language used, etc.).  Another DD may be developed to define how the modules will function in detail (define call interfaces between routines, define data types, etc.).  A detailed design, as its name implies, is very detailed, down to the level of almost writing the code (pseudocode).  These separate DDs may be combined into a single document.

Implementation Phase 

The following documents are produced during the implementation phase: 

User's Manual (UM)  -  describes the code's purpose and function, mathematical governing equations, model assumptions, the user's interaction with the code (e.g., how data is input into the code), and the models and methods employed by the code.  The UM generally includes:
 
 The numerical solution strategy and computational sequence, including program flowcharts and block diagrams.
 The relationship between the numerical strategy and the mathematical strategy (i.e., how boundary or initial conditions are introduced).
 A clear explanation of model derivation.  The derivation starts from generally accepted principles and scientifically proven theories.  The UM justifies each step in the derivation, and notes the introduction of assumptions and limitations.  For empirical and semi-empirical models, the documentation describes how experimental data are used to arrive at the final form of the models.  The UM clearly states the final mathematical form of the model and its application in the computer code.
 Descriptions of any numerical method used in the model that goes beyond simple algebra (e.g., finite-difference, Simpson's rule, cubic splines, Newton-Raphson Methods, and Jacobian Methods).  The UM explains the implementation of these methods in the computer code in sufficient detail, so that an independent reviewer can understand them.
 The derivation of the numerical procedure from the mathematical component model.  The UM gives references for all numerical methods.  It explains the final form of the numerical model and its algorithms.  If the numerical model produces only an intermediate result, such as terms in a large set of linear equations that are later solved by another numerical model, then the UM explains how the model uses intermediate results. The documentation also indicates those variables that are input to and output from the component model.

Implementation Document (ID)  -  provides the information necessary for the re-creation of the code used in the WIPP PA calculation.  Using this information, the computer user can reconstruct the code (e.g., compile the source language) and/or install it on an identical platform to that used in the WIPP PA calculation.  In this manner, the code can be regression-tested against an Agency-approved version for subsequent PA calculations (i.e., CRAs).  The document includes the source-code listing, the subroutine-call hierarchy, and code compilation information.
Validation Phase 

The validation phase consists of executing and reviewing the functional test cases identified in the previously approved VVP to demonstrate that the developed software meets the requirements defined for it in the RD.  The Validation Document (VD), produced during this phase, summarizes the results of the code functional testing activities prescribed in the RD/VVP documents for the individual codes, and provides evaluations based on those results.  The VD contains listings of input and output files from computer runs of a model.  The VD also contains reports on code verification, benchmarking, and validation, and documents the results of the QA procedures. 

Installation and Checkout Phase 

The following documents are produced during the installation and checkout phase: 
 
 The Installation and Checkout (I&C) Form [e.g., Sandia National Laboratories (SNL) and Los Alamos National Laboratory (LANL) form NP 19-1-8]
 The Access Control Memorandum
 The Approved Users' Memorandum

Production Software and/or Baseline Document Change Control 

When there are changes to the software baseline, the Change Control Form, SNL and LANL Form NP 19-1-9, is used to document the changes.  Types of changes that may be implemented are:

 Major changes, including new requirements, new design, new models, and new implementation, require a new baseline (i.e., SQAP, RD, DD, VVP, ID, UM, VD) to be documented.  In addition to revising every baseline document, a Change Control Form and Installation and Checkout Form are used. 
 Minor changes do not affect the requirements or design and can be documented with an addendum (no more than three addenda per baseline document) or page change to the affected baseline document.  In addition to the Change Control Form, the Installation and Checkout Form must be used. 
 Patch changes can be used for very small fixes to the code, usually one or two lines of source code or expanding a field's character length, etc.  Patch changes can be documented and tested with the Change Control Form and Installation & Checkout Form. 

System Software and Hardware Change Control 

Coding Documentation Standards.  Any change to software must be accompanied by documentation describing the change, the date the change was made, and the name of the person responsible for implementing the change.  This documentation should be clearly identified and placed in the source code (e.g., the actual written computer code text) in the vicinity of the change and at the top of the source code prior to the first executable line.  The code reviewer shall determine if this documentation is clear and sufficient. 

Significant System Software or Hardware Changes.  The Code Team/Sponsor (single-user systems) or System Administrator (multi-user systems) proposes significant system software or hardware changes using the Change Control Form, SNL and LANL form NP 19-1-9.  Examples of significant changes to system software or hardware: 
 
 Changes to the operating system, such that the version or level identifier changes
 Changes to the Central Processing Unit (CPU) 
 Database management system changes 

In general, changes are significant if they impact the results generated by production software or cause recompilation of production software. 

Software Problem Report (SPR).  Whenever a software problem is identified, the Code Team/Sponsor evaluates the problem to determine if it is, indeed, a problem (as opposed to user error).  If it is a problem, the SPR process is followed.
 
The Code Team/Sponsor classifies the problem as major if it could significantly impact previous uses of the code, or if it will require significant modification to the software; otherwise it is classified as minor.  For a major problem, the Responsible Manager identifies affected users to be notified of the problem, and designates qualified personnel to identify and evaluate the impact of the software problem.  The affected analysis is revised, and the evaluation and resolution of the software problem is documented in Part II of the SPR and Evaluation Form.  For a minor problem, this evaluation can be performed by the Code Team/Sponsor.

Configuration Management (Configuration Identification and Status Accounting).  Configuration management is the process for defining the configuration of software products, establishing software configuration baselines, and tracking the status of baseline changes.  A software configuration baseline consists of the source code and baseline documents, and provides objective evidence of technical adequacy.

The Software Configuration Management (SCM) Coordinator maintains a Software Baseline List and makes it available upon request.  The SCM Coordinator performs a completeness review to ensure compliance with the procedure and that necessary components of configuration management are present. 

For compliance software, the Software Baseline List contains:

 Code name and version
 Code version date
 Code Team/Sponsor name
 Code classification
 RD version
 VVP version
 DD version
 ID version
 UM version
 VD version
 List of approved users (may be listed by name, organization, group, or task, etc.) 
 List of approved system software/hardware configurations
 List of outstanding Software Problem Report (SPR) numbers
 Status of approved changes that are in process
 I&C date 

Retirement Phase 

To retire a code:
 The Code Team/Sponsor issues a memorandum to the SCM Coordinator requesting that the code be retired and provides a reason for the retirement.  
 The SCM Coordinator marks the code as retired in the baseline software list.  
 The System Administrator and/or Code Team/Sponsor take action to prevent the use of the retired code.  This could involve removal of the software from the computer or the changing of execution privileges.

Post CCA SQA Upgrades and Documentation

Since the time of the CCA, the DOE has implemented upgrades to the software operating systems and computer hardware, which are documented in the following reports:

 Summary of Performance Assessment System Upgrades since the CCA
 Analysis Package for AP-042 (documents the upgrade of the OpenVMS operating software from Version 6.1 to Version 7.1)
 Analysis Package for Regression Testing the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster
 Analysis Package for Regression Testing for the Compaq Alpha ES40 Hardware Upgrade on the WIPP DEC Alpha Cluster
 Analysis Package for Regression Testing for the Upgrade of Operating System to OpenVMS 7.3-1 and Hardware to HP Alpha ES45
 Analysis Report for the ES45 Regression Tests
 Analysis Report for the 8400 Regression Tests
 Individual code regression tests for the addition of the Compaq ES40, ES45, and ES47 machines and the upgrade of the operating system to OpenVMS 8.2  

Regression testing of the PA codes was performed on the upgraded operating systems and hardware to demonstrate that the codes continue to produce acceptable output.  Regression testing, as a discipline, consists of running a set of one or more tests for a computer program and verifying that the output produced in the tests is within previously specified acceptable limits. 

The Agency has reviewed the documentation that DOE has developed to assess whether the computer codes still meet the requirements specified in §194.22 and §194.23.  In addition to the references cited above, the Agency reviewed UMs, VDs, IDs, and RD/VVPs for each code.  The Agency also reviewed all of the Change Control and Software Installation and Checkout forms for code modifications made since the CCA. 
Overview of DOE's Computer Code Migration Activities

Current Status

In August 2002, the operating system was upgraded to OpenVMS 7.3-1, and the DEC Alpha 2100s were replaced by a Compaq ES40.  In June 2003, the Agency approved the qualification of the computer codes on the Compaq ES40 and the use of the PA computer codes on this computer.  With the exception of NUMBERS, the Agency concluded that all of the remaining 38 codes and 3 libraries migrated to the Compaq ES40 using OpenVMS 7.3-1 were approved for use in compliance calculations for the WIPP PA.  NUMBERS 1.19 has since been approved for use by the Agency (2006; DOCKET NO: A-98-49 II-B1-7).

In January 2003, two new hardware systems were added to conduct PAs for the WIPP: a Compaq ES45 and a Compaq Alpha 8400, both running OpenVMS 7.3-1.  This configuration was used for preparing the 2004 CRA.  Because of these changes, regression testing was conducted by DOE for the software codes and three libraries on the Compaq ES45 and 8400 using the OpenVMS 7.3-1 operating system to ensure that each code continues to satisfy all the criteria in its RDs.  In September 2004, the Agency published their findings with respect to the qualification of the computer codes on the Compaq ES45 and the Compaq Alpha 8400 [1].  At that time, the Agency concluded that 38 (of 39) computer codes and 3 libraries migrated to the Compaq ES45 and 8400 using OpenVMS 7.3-1 were approved for use in compliance calculations for the WIPP PA (as noted above, NUMBERS 1.19 has since been approved). 

 In 2005, DOE made revisions to eight of the computer codes, including LHS, POSTLHS, CUTTINGS, DRSPALL, PANEL, SUMMARIZE, PRECCDFGF and CCDFGF.  In March 2006 (Docket A-98-49, II-B1-8), the Agency concluded that these codes were qualified on the Compaq ES40 and ES45. 

In 2006, the DOE procured four Compaq ES47 machines to add to the PA computing resources of two Compaq ES40 and two Compaq ES45 machines.  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2.  Because of these changes in the operating system and the addition of a new computing platform, DOE conducted regression testing for each PA software code to ensure that each code continues to function correctly.  

In 2012 and 2013, SNL performed the performance assessment calculations to support the submission of the 2014 CRA.  Six PA codes were modified and re-qualified for use in the 2014 CRA PA calculations.  Also, during the same time period, DOE/SNL procured Sun Solaris Blade Servers with the SunOS 5.11 operating system.  SNL migrated the PA codes, code libraries, and parameter database to the Solaris [3, 4].  The migration of the WIPP PA required extensive qualification using regression testing as well as validation against the acceptance criteria of the VVP [4].

This report presents the Agency's findings with respect to the qualification of the computer codes used in the 2014 CRA and PA code migration to the Sun Solaris Blade Servers with SunOS 5.11 as the operating system.  The versions of the computer codes and libraries that are approved for use in WIPP PA calculations are presented in Table 3.1-1.

Table 3.1-1.	Computer Codes and Libraries Reviewed and Approved by the Agency

                             ---------------- Agency Approved Version ----------------
Computer Code                CCA       2004 CRA    2009 CRA    2014 CRA      SOLARIS
                                       2004 PABC   2009 PABC   [a] [b]       MIGRATION
--------------------------------------------------------------------------------------
ALGEBRACDB                   2.35      2.35        2.35        ---           2.36
BLOTCDB                      --        --          1.37        ---           1.38
BRAGFLO                      4.0       5.0         6.0         6.0/6.02      6.03
CCDFGF                       1.01      5.0A        5.02        6.0/6.02      7.01
CCDFSUM                      1.01      2.00        2.00        ---           Replaced
CUTTINGS_S                   5.03      5.04A       6.02        ---           6.03
DRSPALL                      NA        1.0         1.10        ---           1.22
DTRKMF                       NA        1.0         1.0         ---           1.01
EPAUNI                       1.14      1.15A       1.15A       ---           1.16
EQ3/6 (not approved for PA)                                                  8.0A Windows
FMT                          NA        2.40        2.40        ---           Replaced
GENMESH                      6.08      6.08        6.08        ---           6.09
GROPECDB                     2.12      2.12        2.12        ---           2.13
ICSET                        2.21      2.22        2.22        ---           2.23
JAS3D                                                                        2.4.C-LINUX
LHS                          2.32Z0    2.41        2.42        ---           2.43
MATSET                       9.0       9.10        9.10        9.20          9.21
MODFLOW2000                  NA        1.60        1.60        ---           1.70
MWT3D                                                                        2.50
NONLIN                                             2.01        ---           2.02
NUTS                         2.02      2.05A       2.05C       ---           2.06
PANEL                        3.6       4.02        4.03        ---           4.04
PEST                         NA        5.51        9.11        ---           9.12
POSTBRAG                     4.00      4.00        4.00        ---           4.02
POSTLHS                      4.07      4.07        4.07A       ---           4.08
POSTSECOTP2D                 1.02      1.04        1.04        ---           1.05
PREBRAG                      6.0       7.00        8.00        8.00/8.02     8.03
PRECCDFGF                    1.0       1.00B       1.01        2.0           1.06/2.01
PRELHS                       2.10      2.10        2.30        2.40          2.41
PRESECOTP2D                  1.20      1.22        1.22        ---           1.23
RELATE                       1.43      1.43        1.43        ---           1.45
SANTOS2                      2.17      2.17        2.17        ---           Replaced
SECOTP2D                     1.30      1.41        1.41A       ---           1.43
STEPWISE                     2.20      2.21        2.21        ---           2.22
SUMMARIZE                    2.10      2.20        3.01        ---           3.02

Libraries
CAMCON_LIB                   2.16      2.20        2.21        ---           Section 5.30
CAMDAT_LIB                   1.22      1.25        1.25        ---           Section 5.30
CAMSUPES_LIB                 2.18      2.21        2.22        ---           Section 5.30
PLT_LIB                      1.02      2.04        2.06        ---           Section 5.30
SDBREAD_LIB                  3.10      3.11        3.12        4.00          Section 5.30

Databases
ORIGEN2                      NA        2.2         2.2         ORIGEN-S 2.2  NA
TransOrigen                  NA        2.2         2.2         In CID        NA
CID                          NA        1.0         1.0         2.02          NA

[a] 2014 CRA  -  PA was performed on the VMS computer platforms before the PA was migrated to Solaris.
[b] Most of the 2014 CRA PA codes were not changed (noted "---") and remained at the same versions.
                                       

The RD for each software code specifies the validation criteria for the code and the test cases that demonstrate compliance with these criteria.  SNL used qualification against acceptance criteria and regression testing to determine whether each code could satisfy the criteria in its RDs when run in the current computing configuration (i.e., the Intel-based Solaris Blade servers). 

DOE'S Test Methodology

The test methodology and acceptance criteria described in AP-089 [2] were implemented by the DOE for these regression tests, and the results are presented in Section 5 of this document.  The regression tests were conducted by running every validation test for each code in the most recent computing configuration (Compaq ES40, ES45, and ES47 running OpenVMS 8.2) and comparing the code's output to the output from the code's previously approved validation tests (OpenVMS 7.3-1 running on the Compaq ES40).  The differences between the two sets of outputs were then analyzed.  Any numerical differences between code outputs were evaluated to determine if the code output met the code's acceptance criteria.

In each case, the regression test methodology used the VMS DIFFERENCE command to compare outputs from the regression testing to outputs from previous validations.  The DIFFERENCE command compares two files and identifies records that are different in the two files.  The DIFFERENCE command was not used to compare binary output data.  Binary output data from both the regression testing and previous validations were often processed through other software codes to produce ASCII files that could then be compared using the DIFFERENCE command.

Differences that involve dates and times, file and directory names, user names, platform names, system version numbers, and execution statistics were termed acceptable.  Differences in numerical output required analysis to determine the origin of the differences and whether the differences affect the code's performance.  Numerical differences were determined to be acceptable if the analyst judged that the output, although different, still met the acceptance criteria for the code.
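
The screening step described above can be pictured with the following sketch.  It is illustrative only; the patterns used to recognize dates, directory names, and platform identifiers are assumptions rather than the expressions used by DOE's analysts.

    import re

    # Illustrative sketch only: separate difference lines that involve only
    # run dates/times, file or directory names, user or platform names, and
    # execution statistics (acceptable) from lines containing other content,
    # which would require analyst review.  The patterns are assumptions.

    ACCEPTABLE_PATTERNS = [
        re.compile(r"\d{1,2}-[A-Z]{3}-\d{4}"),      # VMS-style date, e.g., 12-JUN-2006
        re.compile(r"\d{2}:\d{2}:\d{2}"),           # time stamp
        re.compile(r"[A-Za-z0-9$_]+:\[[^\]]*\]"),   # VMS directory specification
        re.compile(r"(?i)\b(user|node|platform|cpu time|elapsed)\b"),
    ]

    def screen_differences(diff_lines):
        """Split difference lines into (acceptable, needs_analysis) lists."""
        acceptable, needs_analysis = [], []
        for line in diff_lines:
            if any(p.search(line) for p in ACCEPTABLE_PATTERNS):
                acceptable.append(line)
            else:
                needs_analysis.append(line)
        return acceptable, needs_analysis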

After DOE ran the tests outlined in the Validation Plan and it was concluded that a code met the acceptance criteria specified in its RDs, a Software I&C form was completed.  The I&C form documents that a code's regression test results met the acceptance criteria specified in its RDs, management's approval of the installation of the software, and the SCM Coordinator's approval of the release of the code as production baseline software. 

3.2.1 MIGRATION TO SOLARIS BLADE PLATFORM

DOE/SNL modified the regression test methodology described above because of the differences between the VMS platforms and the Solaris computers.  Solaris uses a different floating point format than the VMS platform, and many of the PA codes were converted to double-precision number formats on the Solaris [4].  Similar results were expected for utility codes; therefore, regression testing was applicable for their migration.  However, for codes solving differential equations and using numerical solvers, regression testing sometimes failed and numeric results were not always the same.  SNL sometimes used regression testing in a screening mode, but in other cases SNL used acceptance criteria documented in the VVP to qualify the code for WIPP PA calculations.  (See AP-162 ERMS #561457, page 7.)

SNL developed a relative percent difference (RPD) screening criterion to determine whether a numeric difference between the two platforms was significant and to assist in evaluating reasonable differences; relative percent differences of 1x10^-4 or less are considered insignificant.  For values greater than the RPD threshold, SNL was required to provide an adequate explanation or use VVP criteria to verify adequate PA code performance.  SNL was not always able to use the SunOS difference command (UNIX diff), but developed utility codes that used the RPD to determine values needing additional review and explanation.  These differences in approach are discussed in Section 5.0 below for each code.
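
A minimal sketch of an RPD screen of this kind follows.  The formula shown (absolute difference relative to the mean magnitude of the two values) is an assumption for illustration; only the 1x10^-4 screening threshold is taken from the text above.

    # Illustrative sketch of an RPD screen.  The RPD formula below is an
    # assumption (absolute difference divided by the mean magnitude of the
    # two values); the 1.0e-4 threshold is the screening value cited above.

    RPD_THRESHOLD = 1.0e-4

    def relative_percent_difference(vms_value, solaris_value):
        mean_magnitude = (abs(vms_value) + abs(solaris_value)) / 2.0
        if mean_magnitude == 0.0:
            return 0.0
        return abs(vms_value - solaris_value) / mean_magnitude

    def values_needing_review(value_pairs):
        """Return (index, VMS value, Solaris value, RPD) for pairs above the threshold."""
        flagged = []
        for index, (vms_value, solaris_value) in enumerate(value_pairs):
            rpd = relative_percent_difference(vms_value, solaris_value)
            if rpd > RPD_THRESHOLD:
                flagged.append((index, vms_value, solaris_value, rpd))
        return flagged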

The Solaris uses a different configuration management system than the VMS platform [4].  This system is called CVS (Concurrent Versions System) on the Solaris platform.  CVS is used to manage all activities related to WIPP PA calculations.

3.2.1.1 SOLARIS MIGRATION  -  INTEGRATION TESTS

In addition to the qualification of the codes on the Solaris discussed above, SNL/DOE determined that to enhance confidence in the new PA system on the Solaris, "...a comparison of results from one or more PAs runs on VMS and Solaris" was required [4].  Two integration tests were performed, consisting of running the PABC09 and CRA14 analyses on the Solaris and then comparing the releases projected by CCDFGF from the VMS and Solaris runs [4].  SNL compared the mean releases and releases by vector to verify that PA calculations performed on the Solaris produced similar results.  The section titled "Integration Tests" in the AP-164 Summary Report [4] describes the integration test process and compares the results between the VMS and Solaris PA calculations.
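
In concept, the comparison of projected releases can be reduced to a check like the sketch below, which compares per-vector releases and their mean from the two platforms.  The array names and the tolerance are assumptions for illustration and do not represent SNL's actual acceptance procedure.

    import numpy as np

    # Conceptual sketch only: compare normalized releases projected by CCDFGF
    # on the two platforms, vector by vector and as a mean over vectors.
    # The inputs are hypothetical 1-D arrays (one entry per sampled vector),
    # and the relative tolerance is an assumed screening value.

    def compare_releases(vms_releases, solaris_releases, rel_tol=1.0e-4):
        vms = np.asarray(vms_releases, dtype=float)
        solaris = np.asarray(solaris_releases, dtype=float)
        per_vector_agreement = np.isclose(vms, solaris, rtol=rel_tol, atol=0.0)
        mean_agreement = np.isclose(vms.mean(), solaris.mean(), rtol=rel_tol, atol=0.0)
        return per_vector_agreement, bool(mean_agreement)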

References

 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.

 Analysis Plan (AP-089) 2002.  "Upgrade of Operating System to OpenVMS 7.3-1 and Hardware to HP Alpha ES45."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #523491.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
 WIPP PA - "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Analysis Plan for the 2014 WIPP Compliance Recertification Application Performance Assessment, AP-164, dated January 30, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #559198. 

The Agency's Review Approach

The Agency's review has been conducted by a team consisting of Agency and contractor personnel.  The review was initiated with preparatory activities and assembly of background information.  With the exception of computer codes associated with the inventory database, for which LANL is responsible, all of the computer codes are maintained in Carlsbad by the Department's WIPP science advisor, SNL.  

The Agency's review has been conducted in several stages, recognizing that changes in the assessment approach might be required, depending upon results obtained.  The following preparatory activities were conducted before conducting on-site reviews:

 Tabulation of the Agency's code acceptance criteria developed by the Agency during the CCA
 Preparation of a list of computer code life-cycle documentation (e.g., VVPs, Change Control and Error Reporting forms) that the Agency needed to review
 Preparation of a draft checklist for reviewing the ability of the PA codes to meet the QA criteria

The following on-site review activities were conducted by the Agency:

 Received overview presentations by SNL and LANL personnel describing the computer code migration activities
 Obtained and reviewed the adequacy of documentation describing the computer code migration activities
 Reviewed the adequacy of testing performed to demonstrate consistency of code output under different operating/hardware systems
 Reviewed and evaluated the traceability of the code migration information
 Reviewed the ability of PA codes to accurately reproduce output obtained under the software/hardware configurations in place during the CCA and subsequent CRAs

In addition to the on-site reviews, off-site reviews were conducted that included relevant documents (e.g., Change Control and Error Reporting Forms, Code Tracking Sheets, Validation Documents) and the DIFFERENCE files for all test cases for each of the computer codes and libraries that DOE has tested.  The results of these activities are summarized in Section 5 below.
Summary of Individual Computer Code Migration

The following section presents the results of the Agency's computer code migration analysis for each individual code examined (Table 3.1-1).  Specific software and hardware configurations are reviewed, followed by the regression test methodology, the Agency's analysis of the testing, and the Agency's conclusion. 

ALGEBRACDB

This section presents the regression test results for ALGEBRACDB.  ALGEBRACDB is a utility code that adds, removes, or manipulates data on CAMDAT database (CDB) files.  The data manipulations to be performed are expressed as algebraic equations involving the existing and/or newly created data.

Introduction

ALGEBRACDB 2.35 was used in the WIPP Compliance Certification Application (CCA) PA.  ALGEBRACDB 2.35 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of 10 Test Cases (1 through 10) met the acceptance criteria defined in the RD/VVP for ALGEBRACDB 2.35 (document Version 1.00) [2].  In January 1997, ALGEBRACDB was re-evaluated, and DOE determined that several requirements, previously identified as "Functionality Not Tested" in the RD/VVP (document Version 1.00), were, in fact, in need of testing.  DOE generated five additional Test Cases (11 through 15) to address these parameters and validated them on a DEC Alpha 2100 with OpenVMS 6.1 [3] by demonstrating that the results met the acceptance criteria defined in the RD/VVP for ALGEBRACDB 2.35 (document Version 1.01) [3].  In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from ALGEBRACDB 2.35 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of ALGEBRACDB 2.35 run on a DEC Alpha 2100 with OpenVMS 6.1 [1].  In June 2003, the Agency completed a report documenting the Agency's approval of ALGEBRACDB 2.35 [7] on the ES40 and DEC Alpha 2100 with OpenVMS 6.1.  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, both running OpenVMS 7.3-1 [8, 9].  In March 2004, the Agency completed a report documenting the Agency's approval of ALGEBRACDB 2.35 on the Compaq Alpha ES45 and 8400 for use in the CRA 2004 [10].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [11].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [11].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for ALGEBRACDB 2.35 to ensure that it continues to function correctly.

In 2013, SNL migrated the WIPP PA software from the VMS Alpha platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [13, 15].  In 2012, SNL executed regression testing to verify that ALGEBRACDB 2.36 continues to perform WIPP PA calculations correctly [15].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to ALGEBRACDB 2.36 running on the Solaris Blade with SunOS 5.11.

Test Methodology 

The tests for this code comprised the 15 test cases described in the Requirements Document & Verification and Validation Plan for ALGEBRACDB  2.35 (RD/VVP) (both document Versions 1.00 [3] and 1.01 [4]).  The first 10 tests are described in document Version 1.00, and the remaining 5 cases are included in document Version 1.01.  Regression test results from ALGEBRACDB 2.35 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of ALGEBRACDB 2.35 run on a DEC Alpha 2100 with OpenVMS 6.1, as documented in the Validation Document for ALGEBRACDB  2.35 (VD) (both document Versions 1.00 [5] and 1.01 [6]).  In January 2003, regression test results from ALGEBRACDB 2.35 run on the ES45 and 8400 with OpenVMS 7.3-1 were compared to results from the validation tests of ALGEBRACDB 2.35 run on a Compaq ES40 with OpenVMS 7.3-1 [8, 9].  In 2006, the regression test methodology used the VMS DIFFERENCE command to compare output from ALGEBRACDB 2.35 run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2, to results from the validation tests of ALGEBRACDB 2.35 run on the ES40 with OpenVMS 7.3-1 [12]. 

CAMDAT database (CDB) files are produced in 14 of the ALGEBRACDB test cases.  The output CDB files are converted from the binary CDB format to an ASCII format for comparison during the validation process.  In the previous ALGEBRACDB validation, the CDB files were converted using GROPECDB 2.12, which was validated in April 2006 on the Compaq ES40, ES45, and ES47 running OpenVMS 8.2 as part of the hardware regression test (see Section 5.11, GROPECDB).  For DOE's regression test, GROPECDB 2.12 on the Compaq ES40, ES45, and ES47 running OpenVMS 8.2 is used to convert the CDB output files from ALGEBRACDB.

In 2012, SNL performed regression testing of ALGEBRACDB 2.36 run on the Solaris Blade with SunOS 5.11 [14].  Results were compared to ALGEBRACDB 2.35 results run on the VMS Compaq ES47 with OpenVMS 8.2.  The VMS results were transferred to the Solaris, converted as needed, and compared using the UNIX diff command [14].

Test Results

The 15 test cases for ALGEBRACDB 2.36 were executed on the Solaris Blade with SunOS 5.11.  Output files from the test cases were compared to the corresponding output files from the validation of ALGEBRACDB 2.35 on the Compaq ES47 with OpenVMS 8.2 by using the UNIX diff command.  The comparison found that all differences found in the output are limited to code run date and time, platform names, system version numbers, the directory, file names, and very minor numeric differences. 

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found only minor differences in the numerical output of ALGEBRACDB 2.36.  The Agency concludes that ALGEBRACDB 2.36 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 Analysis Report 2003.  "Analysis Report for the VMS 7.3-1 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  WIPP:1.3.5.1.1: SFT: QA-L 525277.
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for ALGEBRACDB Version 2.35" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28109.
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for ALGEBRACDB Version 2.35" (document Version 1.01).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #41863. 
 WIPP PA (Performance Assessment) 1995.  "Validation Document for ALGEBRACDB Version 2.35" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28112. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for ALGEBRACDB Version 2.35" (document Version 1.01).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #41865. 
 WIPP PA  -  Validation (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12, May 17, 1996."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of ALGEBRACDB Version 2.35 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543463.
 WIPP PA - "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories, Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for ALGEBRACDB Version 2.36 Regression Testing for the Solaris Blade, dated October 31, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557796.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

BLOTCDB

BLOTCDB is a code to plot the mesh and results from finite-element and finite-difference analysis programs. BLOTCDB plots all intermediate and final results from all main modules used to perform the WIPP PA. BLOTCDB directly reads a CAMDAT (CDB) file and plots: (1) the computational mesh with contoured analysis results, (2) grid distance versus any variable, and/or (3) any variable versus any other variable. BLOTCDB produces mesh plots with various representations of the analysis output variables and can also produce X-Y curve plots of the analysis variables [13].
	
Introduction

In 2013, SNL migrated the WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade with SunOS 5.11 [13].  SNL notes: "Small changes in the device drivers and the modifications to the test cases make regression testing impractical. The test cases will be fully validated according to the acceptance criteria presented in this document" ([13] page 14).

The BLOTCDB 1.38 validation includes six test cases that verify that the code satisfies the requirements listed in Section 2.0 of the VVP/VD ([13] page 7).  The code has twelve Functional Requirements (R), five External Interface Requirements (R), and twenty-nine Additional Functionality (A) requirements that are examined in the six test cases ([13] page 15).  Table 4-1 lists the requirements tested by the various test cases.  Each test case describes the acceptance criteria for that test, as shown in Section 4.0 of the VVP/VD ([13] page 14).  SNL states, "Most criteria involve visual inspection of the plot to assure that the relevant CAMDAT data is plotted as specified in the input command file. Usually the tester is required to locate points or lines on a plot. The accuracy of these visual measurements rests on the 'best judgment' of the tester" ([13] page 14).

Test Methodology

Each test case produces plots that are manually examined to verify that the plots are correct, represent the input data, and satisfy the acceptance criteria for that test case (see the individual test case descriptions in the VVP/VD ([13] page 17)).

Test Case	Functional Requirements Tested (see [13], Section 4.0)
  #1		BLOTCDB can display wireframe, mesh, and vector plots.  Tests R.1, R.5, R.6, R.14, R.15, R.17, A.3 through A.11, A.25, A.26, A.28, and A.29.
  #2		Code can display line and painted contours.  Tests R.1, R.2, R.3, R.4, R.14, R.15, R.17, R.18, A.2, A.3, A.4, A.6, A.7, A.13, A.14, A.15, A.16, A.27, and A.28.
  #3		BLOTCDB can display pathlines for particle tracking.  Tests R.7, R.14, R.15, R.17, A.3, A.7, and A.18.
  #4		Code can plot time history plots and variable-versus-variable plots.  Tests R.1, R.8, R.9, R.10, R.14, R.15, R.16, R.17, A.1, A.3, A.18 through A.24, and A.28.
  #5		Code can plot variable-versus-distance plots.  Tests R.1, R.11, R.12, R.14, R.15, R.16, R.17, A.3, A.18 through A.24, and A.28.
  #6		BLOTCDB can display data on a large mesh.  Tests R.1, R.2, R.3, R.6, R.14, R.15, R.17, R.18, A.1, A.7, A.8, A.12, and A.13.

Each test case writes the BLOTCDB input data to a plot file that is printed for the tester to examine.  The acceptance criterion for each test describes the expected appearance of each plot.  The criteria are verified by visual inspection: the tester locates points or lines on each plot to confirm that the data from the input command file are plotted correctly.

Test Results

SNL used a review process of manual inspection to validate BLOTCDB 1.38, as described in Section 4.0 of the VVP/VD ([13] page 14).  Results of each test are examined against the acceptance criteria in the VVP/VD [13] to verify that acceptance criteria are satisfied.
The Agency's Conclusions

The Agency thoroughly reviewed SNL's evaluation of the results of each test case and verified that the manual evaluation is adequate and that SNL's conclusions are reasonable.  The Agency concludes that BLOTCDB 1.38 meets the acceptance criteria in the RD ([15] page 4) and the VVP/VD ([13] page 15) and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.
References

[1] Long, J. J. 2012. Nuclear Waste Management Procedure NP 19-1 Software Requirements, Revision 14. Sandia National Laboratories, Carlsbad, NM. ERMS #558215.

[2] WIPP PA. 2013. Requirements Document for BLOTCDB Version 1.38. Sandia National Laboratories, Carlsbad, NM. ERMS #560360.

[3] WIPP PA. 1996. User's Manual for BLOTCDB Version 1.37. Sandia National Laboratories, Carlsbad, NM. ERMS #237501.

[4] WIPP PA. 2013. Addendum to User's Manual for BLOTCDB Version 1.38. Sandia National Laboratories, Carlsbad, NM. ERMS #560363.

[5] Kirchner, T. B. 2012. AP-162: Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server running with Intel Processors, Revision 0. Sandia National Laboratories, Carlsbad, NM. ERMS #557765.

[6] WIPP PA. 1996. Requirements Document & Verification and Validation Plan for BLOTCDB (Version 1.37). Sandia National Laboratories, Carlsbad, NM. ERMS #237499.

[7] WIPP PA. 1996. Validation Document for BLOTCDB (Version 1.37). Sandia National Laboratories, Carlsbad, NM. ERMS #237502.

[8] WIPP PA. 1996. User's Manual for GROPECDB Version 2.12. Sandia National Laboratories, Carlsbad, NM. ERMS #237496.

[9] WIPP PA. 2012. Addendum to User's Manual for GROPECDB Version 2.13. Sandia National Laboratories, Carlsbad, NM. ERMS #557792.

[10] WIPP PA. 1996. User's Manual for ALGEBRACDB Version 2.35, document version 1.01. Sandia National Laboratories, Carlsbad, NM. ERMS #241864.

[11] WIPP PA. 2012. Addendum to User's Manual for ALGEBRACDB Version 2.36. Sandia National Laboratories, Carlsbad, NM. ERMS #557795.

[12] WIPP PA. 2013. "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories, Carlsbad, NM. ERMS #561457.

[13] WIPP PA. 2013. "Verification and Validation Plan/Validation Document for BLOTCDB Version 1.38, dated July 18, 2013." Sandia National Laboratories, Carlsbad, NM. ERMS #560361.

[14] WIPP PA. 2012. "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0, dated June 12, 2012." Sandia National Laboratories, Carlsbad, NM. ERMS #557765.

[15] WIPP PA. 2013. "Requirements Document Criteria, dated July 22, 2013." Sandia National Laboratories, Carlsbad, NM. ERMS #560360.


BRAGFLO

This section presents the qualification and regression test results for BRAGFLO.  BRAGFLO is a program used to study two-phase (brine and gas), three-dimensional isothermal flow in porous media.  It has been developed specifically for use in assessing the performance of the WIPP, particularly the flow behavior in the immediate vicinity of the repository.  The physical model is described by material balance equations for brine and gas, Darcy's law, and two-phase fluid properties.  The numerical model includes a cell-centered finite difference discretization, Newton solution of the nonlinear constitutive equations, and linear equation solvers necessary for the Newton iteration.  Various sub-models specific to WIPP include a pressure-induced fracture treatment, creep closure of the repository, and gas generation resulting from corrosion and biodegradation of waste components.
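
For orientation, the governing relations named above can be written in their standard two-phase Darcy-flow form.  The equations below are the generic textbook statement, not a transcription of the BRAGFLO documentation, and the symbols are defined here only for illustration:

    \[
    \vec{q}_\alpha = -\frac{k\,k_{r\alpha}}{\mu_\alpha}\left(\nabla p_\alpha - \rho_\alpha g \nabla z\right),
    \qquad \alpha \in \{\text{brine},\ \text{gas}\}
    \]
    \[
    \frac{\partial}{\partial t}\left(\phi\,\rho_\alpha S_\alpha\right)
    + \nabla\cdot\left(\rho_\alpha \vec{q}_\alpha\right) = q_\alpha^{\,\mathrm{src}},
    \qquad S_{\text{brine}} + S_{\text{gas}} = 1
    \]

where k is the intrinsic permeability, k_r the relative permeability, mu the viscosity, rho the density, p the phase pressure, phi the porosity, S the phase saturation, and q^src a source/sink term such as gas generation.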

Introduction

Since the CCA PA, the BRAGFLO code has undergone a series of revisions.  Versions 4.00 and 4.01 of BRAGFLO were used in the WIPP CCA.  BRAGFLO 4.00 was used to calculate Salado flow; BRAGFLO 4.01 was used to calculate direct brine releases.  These codes were validated on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of each test case met the acceptance criteria defined in the RD/VVP/VDs [3, 4, 5, 6].

In 1997, BRAGFLO 4.10 was created to combine the capabilities of both BRAGFLO 4.00 and BRAGFLO 4.01 into a single code version [2].  No new functionality was added [1].  BRAGFLO 4.10 was validated on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of each test case met the acceptance criteria defined in the RD/VVP [3].  Several changes were made to BRAGFLO 4.10 during its revision to BRAGFLO 5.0 including removing a number of parameter assignments from embedded data to input data; moving the porosity surface from embedded data to input data; and changing the input-output format [16]. 

DOE ran the OpenVMS 7.3-1 tests using the FORTRAN 7.3 Run-Time Library (RTL) instead of the RTL, Version 7.4A [9].  The date and time functions in the RTL changed between Version 7.3 and 7.4A, and BRAGFLO 4.10 does not run with the new date and time functions.  Accordingly, BRAGFLO 4.10 is run using the FORTRAN 7.3 RTL by implementing the procedure described in [7].  BRAGFLO 4.10 had one problem report that has since been resolved [8].  

In June 2003, the Agency completed a report documenting the Agency's approval of BRAGFLO 4.10 on the ES40 and DEC Alpha 2100 with OpenVMS 6.1 [13].  For the 2004 CRA, the DOE modified BRAGFLO 4.10 to produce BRAGFLO 5.0 to allow the user to input information that was previously included in the BRAGFLO executable file [10, 16].  Beginning with BRAGFLO 5.0, the user provides various constants and molecular weights, as well as information defining the porosity surface, which comes from the SANTOS software.  Changes from BRAGFLO 4.10 to BRAGFLO 5.0 involved input/output issues. 

In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [14, 15].  In March 2004, the Agency completed a report documenting the Agency's approval of BRAGFLO 5.0 on the Compaq Alpha ES45 and 8400 that were both running OpenVMS 7.3-1 [19].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [20].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [20].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for BRAGFLO 5.0 to ensure that it continued to function correctly [21].

In 2007, a number of changes were made to BRAGFLO from version 5.00 to version 6.00 in accordance with a paper by Hansen and Stein (2005), which describes changes that should be made to PA models to accommodate a more realistic evolution of the WIPP underground [22].  The changes made to BRAGFLO included changes to the disturbed rock zone, brine availability, magnesium oxide (MgO) precipitation, room closure, and formulations pertaining to the capillary pressure versus saturation, which impact the physical and chemical characteristics of the WIPP disposal rooms [23].

In 2013, SNL developed BRAGFLO 6.02 to support Case CRA14-0 of the 2014 CRA PA calculations.  In particular, improved water balance calculations were added [30]: "The primary purpose for revision of BRAGFLO 6.0 is to incorporate the conversion of hydromagnesite to magnesite. This additional reaction is added to improve the modeling of water production and consumption. In Version 6.0, hydrated MgO reacts with CO2 generated by microbial degradation of CPR material to form either hydromagnesite or magnesite. For version 6.02, an additional chemistry step of the hydromagnesite conversion to magnesite was added. This reaction is included to improve the calculation of saturation in the waste area."  To qualify BRAGFLO 6.02, SNL developed a new Test Case 14 to verify that the water balance modification works properly.  Test Case 14 was compared to the acceptance criteria in Section 9.14.4 of the RD/VVP [30].

Also in 2013, SNL migrated the WIPP PA software from the VMS platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [26, 28].  Because of the changes in computer hardware and operating system, qualification testing and regression testing were completed to verify that BRAGFLO 6.03 continues to perform WIPP PA calculations correctly [27].

The discussion below documents the test methodology, regression test results and qualification results, and the Agency's conclusions with respect to qualification of BRAGFLO 6.02 running on the Compaq ES45 and ES47 with OpenVMS 8.2, and to BRAGFLO 6.03 running on the Solaris Blade computers with SunOS 5.11. 

Test Methodology

The tests for Version 4.10 of this code comprised all 12 test cases described in the Requirements Document & Verification and Validation Plan for BRAGFLO Version 4.10 (RD/VVP) [1, 12].  Results of regression tests performed on BRAGFLO 4.10 run on the ES40 with OpenVMS 7.3-1 and on the DEC Alpha 2100 with OpenVMS 6.1 were documented by the EPA [13].  In January 2003, regression test results from BRAGFLO 4.10 run on the ES45 and 8400 with OpenVMS 7.3-1 were compared to results from the validation tests of BRAGFLO 4.10 run on a Compaq ES40 with OpenVMS 7.3-1 [14, 15].  

In 2003, BRAGFLO 5.0 was tested by performing all 12 test cases presented in the RD/VVP and comparing the results to the acceptance criteria [11, 17].  This testing was followed in 2004 by regression testing to compare output from BRAGFLO 5.0 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 to the output from the validation of BRAGFLO 5.0 on the Compaq ES40 with OpenVMS 7.3-1 [18].

In 2006, the regression test methodology used the VMS DIFFERENCE command to compare output from BRAGFLO 5.0 run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 to results from the validation tests of BRAGFLO 5.0 run on the ES40 with OpenVMS 7.3-1 [21].

Test Case 7 required the use of other WIPP PA codes: POSTBRAG 4.00 and SUMMARIZE 3.01.  These codes have been validated on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 as part of the hardware regression tests.

In 2007, regression testing was used to determine whether BRAGFLO 6.0 satisfied the acceptance criteria of the RD/VVP for Test Cases 1 - 13 [24].  Test Case 14 was validated in this analysis by evaluating BRAGFLO results with respect to the acceptance criteria specified in the RD/VVP, since it is a new test case for the new requirements in BRAGFLO 6.0 [24].

Regression analyses were accomplished in this validation work by comparing results from BRAGFLO 6.0 to the corresponding results from BRAGFLO 5.0.  BRAGFLO 5.0 was validated on the ES40, ES45, and ES47 with OpenVMS 8.2 [25].  The VMS DIFFERENCE command was used to compare ASCII output files from BRAGFLO 6.0 with the corresponding output files from BRAGFLO 5.0.

In 2013, BRAGFLO 6.02 was developed to support the 2014 recertification.  A total of 14 test cases were used to qualify the code for use in the WIPP performance assessment.  The original 13 test cases were regression tested as described above.  Test Case 14 was developed to verify the performance of the changes made to the code, and the test results were compared to the acceptance criteria in the VVP [30].

In 2013, SNL migrated the WIPP PA and performed qualification and regression testing of BRAGFLO 6.03 on the Solaris Blade platform with SunOS 5.11, using all fourteen test cases, to verify that the code performs PA calculations correctly on the new computer system.  SNL used a two-phased approach to execute these tests.  First, BRAGFLO 6.03 was regression tested against BRAGFLO 6.02 using results obtained on the Compaq computer cluster running OpenVMS 8.2.  These files were moved to the Solaris, converted to Solaris format as needed, and then compared using the UNIX diff command.  If the recorded differences satisfied the regression test criteria ([27] Section 3.2), that is, acceptable differences (file names, version number differences, etc.) and the "Small numeric differences" expected ([27] page 9) because of the different characteristics of the two computer systems, then BRAGFLO 6.03 was considered qualified for use on the Solaris Blade with SunOS 5.11.

However, if the "number of differences is large" ([27] page 9), then BRAGFLO 6.03 was validated using the acceptance criteria in the VVP document.  Four test cases (2, 6, 7, and 14) required the use of the VVP acceptance criteria for validation.  Test Case 13, input error reporting, was also qualified using the VVP acceptance criteria ([27] page 10).
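
The two-phased approach described above can be summarized with the following decision sketch.  This is an illustration only: the dictionary structure of the classified differences and the meets_vvp_criteria callable are hypothetical, and the actual determination was made by SNL analysts against the documented criteria.

    def qualify_test_case(classified_diffs, meets_vvp_criteria):
        """Phase 1: regression screening; Phase 2: fall back to VVP criteria."""
        unexplained = [d for d in classified_diffs
                       if d["kind"] not in ("acceptable", "small numeric")]
        if not unexplained:
            return "qualified by regression test"
        # Large or numerous differences: evaluate against the VVP acceptance criteria.
        if meets_vvp_criteria():
            return "qualified against VVP acceptance criteria"
        return "not qualified - further analysis required"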

Test Results

The 14 test cases for BRAGFLO 6.02 were executed on the Compaq ES45 and ES47 with OpenVMS 8.2.  As noted by SNL in [29], the following procedure is performed for Test Cases 1 to 13:

    "1) Validation test results from BRAGFLO 6.02 run on the ES47 with OpenVMS 8.2 are compared to results from the validation tests of BRAGFLO 6.0 run on the ES47 with OpenVMS 8.2.

    2) Validation test results from BRAGFLO 6.02 run on the ES45 with OpenVMS 8.2 are compared to results from the validation tests of BRAGFLO 6.02 run on the ES47 with OpenVMS 8.2.

    The VMS DIFFERENCE command is used to compare the output file from BRAGFLO 6.02 to the corresponding output file as outlined above."

The results for Test Case 14 are compared to the acceptance criteria described in the VVP [30].  For Test Cases 1 through 13, SNL found acceptable changes such as date, time, file and directory names, platform names, code version, execution statistics, and changes to input parameters used to support the water balance changes in the code during the validation of BRAGFLO 6.02.

The 14 test cases for BRAGFLO 6.03 were executed on the Solaris Blade with SunOS 5.11.  Each test case generated output files, which were compared to the output files from the BRAGFLO 6.02 validation tests executed on the Compaq cluster with OpenVMS 8.2.  All test cases except 2, 6, 7, 13, and 14 showed acceptable textual differences and small numeric differences between the BRAGFLO 6.03 and BRAGFLO 6.02 test case output files.

Test Cases 2, 6, 7, 13, and 14 were evaluated against the VVP acceptance criteria.  This evaluation verified that these test cases satisfy the acceptance criteria.

The Agency's Conclusions

The Agency found that the regression testing of BRAGFLO 6.02 Test Cases 1 through 13 is acceptable and that the changes found in the output files are appropriate.  The Agency also found that the qualification of Test Case 14 was complete and adequate.  The Agency concludes that BRAGFLO 6.02 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Compaq ES45 and ES47 with OpenVMS 8.2.

The Agency found for the BRAGFLO code migration that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The Agency also found that for test cases not regression-tested, the acceptance criteria were satisfied.   The Agency concludes that BRAGFLO 6.03 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for BRAGFLO Version 4.10."  Sandia National Laboratories.  Sandia WIPP Central Files, WPO #45227. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for BRAGFLO Version 4.10."  Sandia National Laboratories.  Sandia WIPP Central Files, WPO #45242. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for BRAGFLO Version 4.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30702. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for BRAGFLO Version 4.00."  Sandia National Laboratories.  Sandia WIPP Central Files, WPO #30705. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for BRAGFLO Version 4.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #38122. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for BRAGFLO Version 4.01."  Sandia National Laboratories.  Sandia WIPP Central Files, WPO #38135. 
 WIPP PA (Performance Assessment) 2001.  "Change Control Form, Compaq/DEC Alpha FORTRAN 7.4A Compiler Upgrade."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519716. 
 WIPP PA (Performance Assessment) 2001.  "Software Problem Report 01-002 for PREBRAG 6.00 and BRAGFLO 4.10."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519714. 
 Digital Equipment Corporation 1996.  "OpenVMS 7.1 Release Notes, Section 5.8.  Digital Equipment Corporation, Maynard Massachusetts, November 1996."  Order number AA-QSBTA-TE. 
 WIPP PA (Performance Assessment) 1998.  "WIPP PA Analysis Package for AP-042."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #49786. 
 WIPP PA (Performance Assessment) 2003.  "BRAGFLO Test Case 7 Results for the OpenVMS 7.3-1 Regression Test."  Sandia WIPP Central Files.  Records Package #525277. 
 Validation Report for BRAGFLO Version 4.10 Test Case 6 Using OpenVMS 7.3.1.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA (Performance Assessment) 2003.  "Change Control Form, BRAGFLO."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #525869. 
 WIPP PA (Performance Assessment) 2003.  "Validation Document for BRAGFLO Version 5.00."  Sandia National Laboratories.  Sandia WIPP Central Files, WPO #525703.
 WIPP PA (Performance Assessment) 2004.  "Results of Regression Testing for BRAGFLO Version 5.00 Running on the Compaq ES45 and 8400 Platforms."  Sandia National Laboratories.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of BRAGFLO Version 5.0 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543805.
 Hansen, F. D. and J. S. Stein, 2005.  "WIPP Room Evolution and Performance Assessment Implications."  Milestone Report, February 26, 2005.  Carlsbad, New Mexico:  Sandia National Laboratories.  ERMS #538870. 
 WIPP PA 2007.  "BRAGFLO Version 6.00 Software QA Plan" Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #545013.
 Nemer, M. B., 2007.  Requirements Document and Verification and Validation Plan for BRAGFLO Version 6.0.  Document Version 6.00, ERMS #545014.  Sandia National Laboratories, Albuquerque, New Mexico.
 Nemer, M. B., 2006.  Regression Testing Report of BRAGFLO Version 5.00 on the Compaq ES40, ES45 and ES47 Platforms using OpenVMS 8.2.  Document Version 1.0, ERMS #543805.  Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories, Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Validation Document for BRAGFLO Version 6.03, dated February 1, 2013."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #558352.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765. 
 WIPP PA  -  "Validation Document for BRAGFLO, Version 6.02, dated January 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #558661. 
 WIPP PA  -  "Requirements Document & Verification and Validation Plan for BRAGFLO, Version 6.02, dated January, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #558659.

CCDFGF

This section presents the regression test results for CCDFGF.  The CCDFGF code assembles the results calculated by other codes in the WIPP PA system to produce complementary cumulative distribution functions (CCDFs) of releases.
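
As background on what a CCDF represents, the sketch below builds an empirical exceedance curve from a set of simulated releases.  This is a generic construction with equal weighting of futures, not CCDFGF's internal algorithm.

    def empirical_ccdf(releases):
        """Return (release, exceedance probability) pairs for one set of futures.

        Each of the n values is given weight 1/n; ties are not treated
        specially in this sketch.
        """
        n = len(releases)
        ordered = sorted(releases)
        # P(Release > r) estimated as the fraction of simulated values above r
        return [(r, (n - i - 1) / n) for i, r in enumerate(ordered)]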

Introduction

Since the CCA PA, the CCDFGF code has undergone a series of revisions.  CCDFGF 1.01 was used in the WIPP CCA.  Version 1.01 was validated on a DEC Alpha 2100 running OpenVMS 6.1 [3].  The validation demonstrated that the results of the four test cases met the acceptance criteria defined in the VVP for Version 1.01 [4].  In 1996, CCDFGF was revised to Version 2.01 to improve and clarify the algorithm by which releases to the Culebra were calculated.  CCDFGF 2.01 was validated on a DEC Alpha 2100 running OpenVMS 6.1 [5].  Test Cases 1 - 4, for the validation of CCDFGF 2.01, were identical to the test cases for the validation of CCDFGF 1.01 [6].  The acceptance criteria for these test cases were satisfied by showing that the output from CCDFGF 2.01 was identical to the output of the CCDFGF 1.01 validation tests. 

In 1997, CCDFGF was revised to Version 3.00 to correct an error found in Version 2.01 and to add functionality required for the Performance Assessment Verification Test (PAVT).  CCDFGF 3.00 was validated on a DEC Alpha 2100 running OpenVMS 6.1 [7].  Test Cases 1 - 4, for the validation of CCDFGF 3.00, were not identical to the test cases for the validation of CCDFGF 2.01 [6].  Rather, the test cases for CCDFGF 3.00 were modifications of those used for CCDFGF 2.01.  The modified test cases examined the features added to CCDFGF for Version 3.00 and specified additional acceptance criteria for these features.  CCDFGF 3.00 was validated by the DOE's analysis, and the additional acceptance criteria were met.  Consequently, the validation of CCDFGF 3.00 relies on the combination of the validation of CCDFGF 2.01 and on the extensions to the test cases for CCDFGF 3.00. 
 
CCDFGF was revised again in 1997 to Version 3.01 to add the capability of producing intermediate results for releases to and from the Culebra.  Test Case 5 was added to validate this additional capability [2].  Since the revision consisted only of code to consolidate existing output of CCDFGF and previous testing had validated the existing output, the validation of CCDFGF 3.01 only examined Test Case 5 [8].  Consequently, the validation of CCDFGF 3.01 relies on the combination of the validation of CCDFGF 2.01, the extensions to the test cases for CCDFGF 3.00, and the additional test case for CCDFGF 3.01.

In June 2003, the Agency completed a report documenting the Agency's approval of CCDFGF 3.01 [8] on the Compaq ES40.  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [9, 10, 11].  In August 2003, CCDFGF Version 3.01 was upgraded to Version 5.0.  In March 2004, the format of the open statements was changed and the version number of CCDFGF was upgraded from 5.0 to 5.0A [12].  In September 2004, the Agency concluded that CCDFGF 3.01, 5.0, and 5.0A met the acceptance criteria specified in the VVP [1, 2, 13], and thus were validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [14].  Version 5.0A was used to support the 2004 CRA.

In June 2004, the code was changed from Version 5.0A to Version 5.01 to reflect changes involving the confidence intervals assigned to the drilling rate that were changed from 90% to 99.5% [17, 18, 19].  Another version of the code, however, was issued in December 2004 (Version 5.02).  This new version includes changes to the Function FindSeries (i.e., a block in the IF-THEN-ELSE construction was removed and a check is made within the remaining block to ensure that 0 is never returned) [17, 20].  In March 2006, the Agency completed a report documenting the Agency's approval of CCDFGF 5.01 and 5.02 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [21].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [22].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [22].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for CCDFGF 5.02 to ensure that it continued to function correctly [23].

In 2010, CCDFGF was modified to investigate the impact of mass balance problems in radionuclide mobilization calculations.  SNL used three test cases to qualify CCDFGF 6.00 on the Compaq cluster with OpenVMS 8.2 [28].

In 2013, WIPP PA software was migrated from the VMS Alpha platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [25, 27].  SNL performed regression testing and validation testing to verify that CCDFGF 7.01 continues to perform WIPP PA calculations correctly [26].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CCDFGF 6.00 and 7.01.
Test Methodology

The code CCDFGF was upgraded from Version 5.01 to Version 5.02 in December 2004. 
The tests for this code comprised the three test cases described in the VVP [15] and an additional test case for CCDFGF 5.02 described in the Addendum to the CCDFGF 5.01 VD [16].  This additional test case was designed to check that the new interpolation of the drilling rate (90% versus 99.5%) works properly.

Test Case 1 tests CCDFGF to ensure that it evaluates the different release mechanisms correctly for one future.  This test case includes five parts; one for each of the following release mechanisms: 
 Cuttings and cavings
 Spallings
 Direct brine releases (DBR)
 Culebra releases
 Total releases

The input files for this test case specify only eight vectors, with two futures each.  Using only a few vectors and futures allows manual calculation of the correct results.  Moreover, each vector tests a different release mechanism, further simplifying the manual calculation.  The manual calculation is then compared to the computer-generated results.
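
A minimal sketch of such a hand-versus-code comparison is shown below.  The release values are hypothetical placeholders; the actual numbers and acceptance tolerances are those in the CCDFGF VVP and validation records.

    # Hand-calculated releases for one vector and future (hypothetical values).
    manual = {"cuttings_cavings": 0.012, "spallings": 0.004,
              "direct_brine": 0.020, "culebra": 0.001}
    manual["total"] = sum(manual.values())

    def check_against_code(code_output, tol=1.0e-6):
        """Compare each hand-calculated release with the value reported by the code."""
        for mechanism, expected in manual.items():
            reported = code_output[mechanism]
            assert abs(reported - expected) <= tol, (mechanism, expected, reported)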

The objective of Test Case 2 is to ensure that the random number generator is producing a statistically random sequence of numbers and that the sequence is reproducible.  For this test case, CCDFGF 5.02 is run first using the input files for Test Case 2.  Next, two utility programs are run: CCGF_CHISQ_TEST.FOR for the Chi-Squared Goodness of Fit test, and CCGF_SERIAL_TEST.FOR for the Serial test.  These codes compute the statistics for the output of the random number generator in CCDFGF.  

Test Case 2 produces an output file CCGF_QA0501_RND_TEST2.BIN written to the logical RANDOMNUMBERS$BIN.  This binary file contains the sequence of random numbers generated by the random number generator in CCDFGF.

Each utility program uses a single input file assigned to the logical CCGF_RANDOM$INP.  For this test case, the logical CCGF_RANDOM$INP should have the value CCGF_QA0501_RND_TEST2.BIN.  This file is unformatted binary data written by CCDFGF, consisting of a sequence of data pairs; each data pair consists of a double precision real number and a three-character string.  

Each utility code produces an output file assigned to the logicals CCGF_CHISQ_TEST$OUT and CCGF_SERIAL_TEST$OUT.  For this test, the logicals were assigned values of CCGF_QA0501_TEST2_CHISQ.OUT and CCGF_QA0501_TEST2_SERIAL.OUT, respectively.  These files contain the results of each test by reporting the chi-squared value for each repetition and whether or not the statistic exceeds the critical value.  These files can be found in the library LIBCCGF.
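
As an illustration of the kind of statistic computed by the Chi-Squared utility, the sketch below applies a Pearson chi-squared goodness-of-fit test for uniformity on [0, 1) using ten equal bins.  It mirrors the purpose, but not the implementation, of CCGF_CHISQ_TEST.FOR; the 0.05 significance level and the bin count are assumptions.

    def chi_squared_uniform(samples, bins=10, critical=16.919):
        """Pearson chi-squared test of samples in [0, 1) against a uniform fit.

        16.919 is the 0.95 quantile of the chi-squared distribution with
        bins - 1 = 9 degrees of freedom.
        """
        counts = [0] * bins
        for x in samples:
            counts[min(int(x * bins), bins - 1)] += 1
        expected = len(samples) / bins
        statistic = sum((c - expected) ** 2 / expected for c in counts)
        return statistic, statistic <= critical   # True: consistent with uniformity
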
Test Case 3 evaluated the statistical correctness of stochastic futures modeled by CCDFGF.  Test Case 3 runs 1,000 futures for each of 8 vectors and examines the following observed statistics:

 Probability of selecting each CH waste stream
 Probability of brine pocket intrusion
 Probability of an intrusion hitting the excavated region
 Percentage of intrusions hitting the excavated region that hit CH waste
 Probability of each plugging pattern

In addition, Test Case 3 examined the average number of drilling events in each future and the distribution of mining times.  When choosing the option to use the volume fraction as a probability for the release of cuttings, Test Case 3 examines the probability of CH waste intrusions that encounter CH waste as well.  The input files for this test case specified only eight vectors, with 1,000 futures each.  The release data for each vector are the same as in Test Case 1. Unlike Test Case 1, the futures for each vector were determined randomly. 
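
The statistics listed above are observed frequencies over many sampled futures, which are then compared with the specified input probabilities.  The sketch below shows one such comparison; the three-standard-error tolerance is illustrative only and is not the acceptance criterion in the CCDFGF documentation.

    import math

    def frequency_check(event_flags, p_expected, n_sigma=3.0):
        """Compare an observed frequency with the specified probability.

        event_flags is a sequence of booleans, one per sampled future
        (e.g., whether a brine pocket intrusion occurred).
        """
        n = len(event_flags)
        p_observed = sum(1 for e in event_flags if e) / n
        stderr = math.sqrt(p_expected * (1.0 - p_expected) / n)
        return p_observed, abs(p_observed - p_expected) <= n_sigma * stderr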

In 2004, the regression test methodology used the VMS DIFFERENCE command to compare output from CCDFGF 5.02 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 to the output from CCDFGF 5.02 on the Compaq ES40 with OpenVMS 7.3-1 [24].  

In 2006, all three of the tests described in the VD were performed to compare output from CCDFGF 5.02 run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 to results from the validation tests of CCDFGF 5.02 run on the ES40 with OpenVMS 7.3-1 [23]. 

In 2010, SNL performed validation against the VVP requirements and regression testing to qualify CCDFGF 6.00 on the Compaq ES47 computer.  Three test cases were used.  In Test Case 1, the code was evaluated to verify that it determines release mechanisms correctly.  In Test Case 3, the random numbers calculated by the code were evaluated against known solutions in the VD acceptance criteria in Sections 5.1 and 5.3 [28].  In Test Case 2, the code calculates multiple futures and was regression-tested against CCDFGF 5.02 output files per Section 5.2 of the VVP [28].  CCDFGF 6.00 was qualified on the Compaq ES45 using regression testing against the Compaq ES47 output files [28].

In 2013, regression testing of CCDFGF 7.01 on the Solaris Blade platform with SunOS 5.11 was performed on the original three VMS test cases and two new test cases to validate the code's performance [26].  The original Test Cases 1, 2, and 3 were compared to the results obtained using CCDFGF 7.00, which had been previously validated using CCDFGF 5.02 on the Compaq computers with OpenVMS 8.2, to verify that CCDFGF 7.01 correctly inserts data into the performance assessment results database on the Solaris platform.  Comparison of these test cases verifies that CCDFGF 7.01 generates the binary random file correctly on the Solaris platform [26].

Two new test cases, Test Cases 5 and 6, were developed to read the data inserted into the PA results database by CCDFGF 7.01 and to verify that the code correctly inserts data into the PA database [26].  Test Case 5 compares the binned data and Test Case 6 compares the statistics data inserted by CCDFGF 7.01 when Test Cases 1 and 3 are executed.

Test Results

Test Cases 1, 2 and 3 for CCDFGF 6.00 were executed on the Compaq ES45 and ES47 with OpenVMS 8.2 to qualify the code for WIPP PA [28]. SNL verified that Test Cases 1 and 3 compare satisfactorily against known solutions in the VVP acceptance criteria. Test Case 2 was adequately compared to CCDFGF 5.02 output files using the VMS DIFFERENCE command. 

Test Cases 1, 2, 3, 5, and 6 for CCDFGF 7.01 were executed on the Solaris Blade with SunOS 5.11.  Each test case generated output files, which were compared to the output files from the CCDFGF 7.00 validation tests executed on Compaq ES40, ES45, ES47 with OpenVMS 8.2.  The differences are generally limited to code run date and time, file and platform names. Results of Test Cases 1, 2, and 3 showed no differences or acceptable numeric differences.  SNL notes that, "All show three significant digits that differ by one in the final digit (with rounding), so all these differences are acceptable" [26].  

Test Cases 5 and 6 compared the CCDFGF output data that were inserted into the PA Results database by CCDFGF 7.01.  Test Cases 1 and 3 were compared to the data in the output files from the CCDFGF 7.0 test on VMS.  No significant differences were found, and the data were written correctly into the PA Results database [26].

The Agency's Conclusions

The Agency found that CCDFGF 6.00 was adequately qualified for testing and use in the 2014 CRA performance assessment calculations.  The Agency concludes that CCDFGF 6.00 meets the acceptance criteria in the RD/VVP, and is qualified for WIPP PA use on the Compaq ES45 and ES47 with OpenVMS 8.2.

The Agency also found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and rounding differences between the VMS computers and the Solaris computers.  The comparison found minor differences in the numerical output of CCDFGF 7.01.  The Agency concludes that CCDFGF 7.01 meets the acceptance criteria in the RD/VVP, and is qualified for WIPP PA use on the Solaris Blade platform with SunOS 5.11.

References

 WIPP PA (Performance Assessment) 1997.  "Verification and Validation Plan for CCDFGF Version 3.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45412. 
 WIPP PA (Performance Assessment) 1997.  "Addendum to Verification and Validation Plan for CCDFGF Version 3.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45412.
 WIPP PA (Performance Assessment) 1996.  "Validation Document for CCDFGF Version 1.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42042. 
 WIPP PA (Performance Assessment) 1996.  "Verification and Validation Plan for CCDFGF Version 1.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42043. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for CCDFGF Version 2.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42772. 
 WIPP PA (Performance Assessment) 1996.  "Verification and Validation Plan for CCDFGF Version 2.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42768. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for CCDFGF Version 3.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45415.
 WIPP PA (Performance Assessment) 1997.  "Addendum to Validation Document for CCDFGF Version 3.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45415. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290. 
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA  -  "Analysis Report for CCDFGF Version 5.00A Regression Testing for the Compaq ES45 and 8400 Platforms," 2004.  Sandia National Laboratories. 
 WIPP PA  -  "Change Control Form from 5.00 to 5.00A, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #531461.
 WIPP PA (Performance Assessment) 2003.  "Validation Document for CCDFGF Version 5.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #530042. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004.  WIPP-PA (Performance Assessment) 2004. 
 WIPP PA (Performance Assessment) Verification and Validation Plan for CCDFGF Version 5.01."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #535736.
 WIPP PA (Performance Assessment) Addendum to the CCDFGF Version 5.01 Validation Document ERMS #535737.   
 WIPP PA (Performance Assessment) 2005.  `Change Control Form' for CCDFGF 5.01, Sandia National Laboratories.  Sandia.  ERMS #538177. 
 WIPP PA (Performance Assessment) 2003.  "Validation Document for CCDFGF Version 5.01."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #535737. 
 WIPP PA (Performance Assessment) 2004.  "Addendum to the CCDFGF Version 5.01 Validation Document."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #538167.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files. WPO #542680.
 WIPP PA  -  "Regression Testing Report of CCDFGF Version 5.01 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 17, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543452.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #561457. 
 WIPP PA  -  "Addendum to Requirements Document, User's Manual, Verification and Validation Plan, and Validation Document for CCDFGF Version 7.01, dated November 2013." Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #561130. 
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories. Sandia WIPP Central Files. ERMS #557765
 WIPP PA  -  "Validation Document for CCDFGF (Version 6.00), dated April 2010." Sandia National Laboratories. Sandia WIPP Central Files. ERMS #552387.
                                       


CCDFSUM    

(Please note: CCDFSUM functionality has been replaced by database functions.  The text in this section is unchanged to preserve historical context.)

This section presents the regression test results for CCDFSUM.  The CCDFSUM code plots the complementary cumulative distribution functions (CCDFs) for the releases calculated by the code CCDFGF.

Introduction

Since the CCA PA, the CCDFSUM code has undergone a series of revisions.  CCDFSUM 1.01 was used in the CCA.  Version 1.01 was validated on a DEC Alpha 2100 running OpenVMS 6.1 under the requirements of SNL QAP 9-1 (now SNL NP 9-1) [2].  In 1996, CCDFSUM was revised to Version 2.00 to accommodate changes made in CCDFGF 3.00.  CCDFSUM 2.00 was validated on a DEC Alpha 2100 running OpenVMS 6.1 [3]. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from CCDFSUM 2.00 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of CCDFSUM 2.00 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of CCDFSUM 2.00 [4] on those operating systems.  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [5, 6].  In March 2004, the Agency completed a report documenting the Agency's approval of CCDFSUM 2.00 on the Compaq Alpha ES45 and 8400 running OpenVMS 7.3-1 [7].  CCDFSUM 2.00 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [8].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2 [8].  Because of these changes in the operating system and the addition of a new computing platform, regression testing has been conducted for CCDFSUM 2.00 to ensure that it continues to function correctly [9].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CCDFSUM 2.00 running on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2. 

Test Methodology

The tests for this code comprised the test cases described in the Verification and Validation Plan for CCDFSUM Version 2.00 (VVP) [1].  In 2003, regression test results from CCDFSUM 2.00 run on the ES45 and 8400 with OpenVMS 7.3-1 were compared to results from the validation tests of CCDFSUM 2.00 run on the ES40 with OpenVMS 7.3-1 [6].  CCDFSUM 2.00 was used to support the 2004 CRA.

In 2006, the regression test methodology used the VMS DIFFERENCE command to compare output from CCDFSUM 2.00 run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 to results from the validation tests of CCDFSUM 2.00 run on the ES40 with OpenVMS 7.3-1 [9]. 
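
For illustration only, the screening logic implied by such a comparison can be sketched in a few lines of Python: benign differences (run date and time, file and platform names) are filtered from a line-by-line diff before any remaining differences are judged.  This sketch is not the VMS test procedure actually used by SNL; the file names and benign-difference patterns are assumptions.

      import difflib
      import re

      # Illustrative patterns for differences that are expected and acceptable.
      BENIGN = [re.compile(p, re.IGNORECASE)
                for p in (r"run (date|time)", r"platform", r"node", r"directory", r"file name")]

      def significant_diffs(baseline, candidate):
          """Return added/removed diff lines that do not match a benign pattern."""
          kept = []
          for line in difflib.unified_diff(baseline, candidate, lineterm=""):
              changed = line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
              if changed and not any(p.search(line) for p in BENIGN):
                  kept.append(line)
          return kept

      # Hypothetical output file names; the real test cases use their own naming.
      with open("tc1_vms73.out") as old, open("tc1_vms82.out") as new:
          leftovers = significant_diffs(old.readlines(), new.readlines())
      print("no significant differences" if not leftovers else "\n".join(leftovers))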

Test Results

The VVP for CCDFSUM 2.00 lists a total of nine test cases; however, CCDFSUM is run only in the first test case.  The other eight test cases specify comparison of the output of the first test case with different criteria.  These test cases do not exercise any function of the code.  For this regression test, DOE believes (and the Agency agrees) that it is sufficient to run only the first test case and compare its output with the output of the previous validation test.

The first test case was executed on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 [9].  The test case generated output files, which were compared to the output files from the CCDFSUM 2.00 validation tests executed on the Compaq ES40 with OpenVMS 7.3-1.  The comparison found that all differences in output are limited to code run date and time, and file and platform names. 

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of CCDFSUM 2.00.  The Agency concludes that CCDFSUM 2.00 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.

References 
 
 WIPP PA (Performance Assessment) 1997.  "Verification and Validation Plan for CCDFSUM Version 2.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43920. 
 WIPP PA 2001.  "Nuclear Waste Management Program Procedure 9-1, Analyses."  Sandia National Laboratories. 
 WIPP PA 1997.  "Validation Document for CCDFSUM Version 2.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43925.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of CCDFSUM 2.00 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543594.

CUTTINGS_S

This section presents the regression test results for CUTTINGS_S.  The CUTTINGS_S (CUSP) code was written to calculate the quantity of radioactive material (in curies) brought to the surface from a radioactive waste disposal repository as a consequence of an inadvertent human intrusion through drilling.  The code determines the amount of material removed from the repository by several release mechanisms, and decays the material to the time of intrusion.  

Introduction

Since the CCA PA, the CUTTINGS_S code has undergone a series of revisions.  CUTTINGS_S 5.03 was used in the WIPP CCA.  Version 5.03 was validated in May 1996 on a DEC Alpha 2100 with OpenVMS 6.1.  The validation was accomplished by demonstrating that results of the six test cases met the acceptance criteria defined in the RD/VVP [4] and Validation Document [7]. 

In July 1997, CUTTINGS_S was revised to Version 5.04 and was validated on a DEC Alpha 2100 with OpenVMS 6.1.  Test Cases 1 - 6 for the validation of CUTTINGS_S 5.04 were identical to test cases for the validation of CUTTINGS_S 5.03.  The acceptance criteria for these test cases were satisfied by showing that the output from CUTTINGS_S 5.04 was identical to the output of the CUTTINGS_S 5.03 validation tests.  New Test Cases 7 - 9 were validated by demonstrating the output of Test Cases 7 - 9 met the acceptance criteria defined in the RD/VVP for CUTTINGS_S 5.04 [1, 3]. 

In January 2001, CUTTINGS_S was revised to Version 5.04A to remove references to unused libraries.  Although SDBREAD_LIB and the INGRES library are not used in PA calculations, CUTTINGS_S 5.04 checked for their availability and will not run if they are absent.  Since these libraries were no longer present on the system, it was necessary to eliminate the linkages.  The following quotations from the Change Control form explain the revisions: 
      CUTTINGS_S Version 5.04 was mistakenly linked with SDBREAD_LIB and an INGRES library.  Although SDBREAD_LIB and the INGRES library are not used, the INGRES system must be installed on the system for Version 5.04 to run.  The linked software is no longer available on the system, so CUTTINGS_S will be relinked to remove these libraries.

      There were no source changes between CUTTINGS_S Version 5.04A and Version 5.04.  The only difference is that CUTTINGS_S Version 5.04A will not be linked with SDBREAD_LIB and the INGRES library.  The code will now be linked with the standard libraries CAMDAT_LIB, CAMCON_LIB, and CAMSUPES_LIB.  (The library .OLB files that were used for Version 5.04 will not be used for Version 5.04A.) [5] 

In order to test the new operating systems that were added in 2002 - 2003 (Section 1), regression test results from CUTTINGS_S 5.04A run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of CUTTINGS_S 5.04A run on a DEC Alpha 2100 with OpenVMS 6.1 [2, 6].  In June 2003, the Agency completed a report documenting the Agency's approval with respect to the migration and verification of CUTTINGS_S 5.04A [8].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [9, 10].  In September 2004, EPA concluded that CUTTINGS_S 5.04A met the acceptance criteria specified in the RD/VVP [3], and thus was validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [11].  CUTTINGS_S 5.04A was used to support the 2004 CRA. 

The CUTTINGS_S code was upgraded from Version 5.10 to Version 6.00 in January 2005 [12, 18].  CUTTINGS_S was modified to remove unneeded functionality and improve maintainability [13].  The changes reduced the number of input files needed to run CUTTINGS_S and improved the traceability of input parameters to CUTTINGS_S.  The new CUTTINGS_S can produce output values for multiple intrusions and cavities, resulting in fewer code executions.

The validation of CUTTINGS_S Version 6.00 was conducted on a Compaq ES40 platform and documented in the Verification and Validation Plan and Validation Document for CUTTINGS_S Version 6.00 (VVP/VD) [12].  For completeness, regression testing was also conducted by DOE to demonstrate the validity of the code on the Compaq ES45 platform [14, 19].

In April 2005, the CUTTINGS_S code was upgraded from Version 6.00 to Version 6.01 [16].  In CUTTINGS_S Versions 5.10 and 6.00, the input parameter RNDSPALL was required for use of spall model 4.  In Version 6.01, use of this parameter is optional.  If RNDSPALL is not specified in the input control file, the first spallings vector volume in the spall input file will be used for calculation of spallings releases for the first vector output, the second spallings vector volume will be used for calculation of spallings releases for the second vector output, and so on.

The validation of CUTTINGS_S Version 6.01 was conducted on a Compaq ES40 platform and documented in the Verification and Validation Plan and Validation Document for CUTTINGS_S Version 6.01 (VVP/VD) [16].  For completeness, regression testing was also conducted by DOE to demonstrate the validity of the code on the Compaq ES45 platform [17].

In June 2005, the CUTTINGS_S code was upgraded from Version 6.01 to Version 6.02 [20].  In Version 6.01, when the flow is turbulent, the subroutine DRILL attempted to calculate the radius at which the flow becomes laminar.  It is required, both physically and computationally, that ROUTER remains larger than the constant RINNER.  In Version 6.02, the subroutine DRILL was modified by adding an IF statement that assigns a value of RORIG to ROUTER if ROUTER becomes less than RINNER.  In March 2006, the Agency completed a report documenting the Agency's approval of CUTTINGS_S Version 6.02 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [20].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [22, 24].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [22].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for CUTTINGS_S Version 6.02 to ensure that it continues to function correctly [23].

In 2013, SNL migrated the WIPP PA software from the VMS Alpha platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [26, 28].  Because of these changes, regression testing was done to verify that CUTTINGS_S 6.03 continues to perform WIPP PA calculations correctly [27].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CUTTINGS_S Version 6.03 running on the Solaris Blade computers with SunOS 5.11.

Test Methodology

The tests for CUTTINGS_S Version 6.02 comprise the 12 test cases described in the Verification and Validation Plan / Validation Document for CUTTINGS_S Version 6.01 (VVP/VD) [16] and an additional test case for CUTTINGS_S 6.02 described in the Addendum to the CUTTINGS_S Version 6.01 Verification and Validation Plan / Validation Document (VVP/VD Addendum) [21].  Per the VVP/VD [16], Test Cases 1, 2, 3, 6, and 9 are obsolete, leaving seven test cases as described in the VVP/VD [16] and its addendum [21].

The regression testing of CUTTINGS_S 6.02 on the ES40 platform with OpenVMS 7.3-1 was conducted for seven test cases.  Test results were compared to the results of the validation testing conducted for CUTTINGS_S 6.01 on the ES40 platform with OpenVMS 7.3-1 [15, 16].  Test Case 13, added when CUTTINGS_S was upgraded to Version 6.02, was subject to unit testing, rather than regression testing.  Results of the unit testing for CUTTINGS_S 6.02 Test Case 13 on the ES40 platform with OpenVMS 7.3-1 can be found in the VVP/VD Addendum [21].  

The regression testing of CUTTINGS_S 6.02 on the ES45 platform with OpenVMS 7.3-1 was subsequently conducted for all seven test cases.  Test results for the seven test cases (above) were compared to the results of the regression testing conducted for CUTTINGS_S 6.02 on the ES40 platform with OpenVMS 7.3-1.  Test results for Test Case 13 were compared to the results of the unit testing conducted for CUTTINGS_S 6.02 [16, 23].  

In 2006, the regression test methodology used the VMS DIFFERENCE command to compare output from CUTTINGS_S 6.02 for all of the tests described in the VVP/VD [16].  The tests were run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 and compared to results from the validation tests of CUTTINGS_S 6.02 run on the ES40 with OpenVMS 7.3-1 [24].  Test results for Test Case 13, however, were not compared to the results of the unit testing conducted for CUTTINGS_S 6.02 on the Compaq ES40 with OpenVMS 7.3-1.  

In 2013, regression testing of CUTTINGS_S 6.03 on the Solaris Blade platform with SunOS 5.11 was performed on all seven test cases examined on the VMS platform.  Test results for the seven test cases were compared to the results obtained executing CUTTINGS_S 6.02 on the Compaq computers with OpenVMS 8.2 [27].  Because of differences in the computer architecture of the Compaq and Solaris computers, such as single precision versus double precision computations, SNL expected minor differences in the results of regression testing [27].  SNL used the UNIX diff command and an SNL-developed Python utility, Regr_Diff.py, to compare results.
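
The details of Regr_Diff.py are documented in [27] and are not reproduced here; the following Python fragment only sketches the kind of tolerance-based numeric comparison such a utility performs when single-precision output is checked against double-precision output.  The regular expression, relative tolerance, and line pairing are illustrative assumptions, not SNL's actual acceptance criteria.

      import math
      import re

      # Only decimal-form reals are extracted; Fortran D exponents are accepted.
      NUMBER = re.compile(r"[-+]?\d+\.\d*(?:[EeDd][-+]?\d+)?")
      REL_TOL = 1.0e-6   # assumed tolerance; the governing criteria are those in [27]

      def numeric_tokens(line):
          """Return the real numbers found on an output line."""
          return [float(tok.replace("D", "E").replace("d", "e"))
                  for tok in NUMBER.findall(line)]

      def lines_agree(vms_line, solaris_line):
          """True when both lines carry the same numbers to within the tolerance."""
          a, b = numeric_tokens(vms_line), numeric_tokens(solaris_line)
          return len(a) == len(b) and all(
              math.isclose(x, y, rel_tol=REL_TOL, abs_tol=1.0e-30) for x, y in zip(a, b))

      print(lines_agree("SATBRIN1  0.731542", "SATBRIN1  0.7315418"))   # True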

Six test cases showed no or only minor numeric differences between the tests.  Test Case 11 showed "significant numeric differences" [27] on the Solaris Blade platform.  SNL reviewed these differences thoroughly and concluded that they resulted from the greater numerical precision of the double precision calculations done on the Solaris platform [27].

Test Results

Seven test cases for CUTTINGS_S 6.03 were executed on the Solaris Blade with SunOS 5.11.  All test output files were compared with those from the CUTTINGS_S 6.02 test case runs executed on the Compaq cluster with OpenVMS 8.2.  The comparison found that all differences in output are limited to code run date and time, file and platform names, minor numerical differences, and a few larger differences caused by the use of double precision computations on the Solaris Blade [27].

The Agency's Conclusions
 
The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names and minor numerical differences due to the difference in computer architecture. Test Case 11 showed "significant numeric differences".  SNL provided a detailed examination of these differences [27]: "VMS calculations were in single precision and Solaris calculations are in double precision, so the change in precision accounts for all the SATBRINn and SATGASn differences. Thus, all the numeric differences are acceptable." The Agency closely examined SNL's explanation and agrees with their assessment.

The Agency concludes that CUTTINGS_S 6.03 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.



References

 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for CUTTINGS_S Version 5.04."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45971. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for CUTTINGS_S Version 5.03."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37763. 
 WIPP PA (Performance Assessment) 2000.  "Change Control form for CUTTINGS_S, Version 5.04A."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #515342. 
 WIPP PA (Performance Assessment) 2001.  "Release of CUTTINGS_S, Version 5.04A." Sandia National Laboratories.  Sandia WIPP Central Files WPO #516570. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for CUTTINGS_S Version 5.03."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #35621. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2003.  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA (Performance Assessment).  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004.
 WIPP PA (Performance Assessment) 2005.  "Verification and Validation Plan and Validation Document for CUTTINGS_S Version 6.00."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #537040.
 WIPP PA (Performance Assessment) 2005.  "Change Control Form for CUTTINGS_S Version 5.10."  Sandia National Laboratories.
 WIPP PA (Performance Assessment).  "Installation and Checkout for CUTTINGS_S Version 6.00, Regression Testing for the Compaq ES45 Platform."  Sandia National Laboratories.
 WIPP PA (Performance Assessment) 2003.  Software Configuration Management System (SCMS) Plan.  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #524707.
 WIPP PA (Performance Assessment) 2005.  Verification and Validation Plan and Validation Document for CUTTINGS_S Version 6.01.  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #539235. 
 WIPP PA (Performance Assessment).  "Installation and Checkout for CUTTINGS_S Version 6.01, Regression Testing for the Compaq ES45 Platform."  Sandia National Laboratories.
 WIPP PA (Performance Assessment).  "Requirements Document for CUTTINGS_S Version 6.00."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #537037.
 WIPP PA (Performance Assessment).  "Regression Testing for CUTTINGS_S Version 6.02 on the ES40 and ES45."  Sandia National Laboratories.
 WIPP PA (Performance Assessment) 2005.  "Change Control Form for CUTTINGS_S Version 6.01."  Sandia National Laboratories.
 WIPP PA (Performance Assessment) 2005.  "Addendum to the CUTTINGS_S Version 6.01 Verification and Validation Plan / Validation Document ERMS #539235."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #540162
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of CUTTINGS_S 6.02 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543784.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013."  Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for CUTTINGS_S Version 6.03, Regression Testing for the Solaris Blade dated March 19, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #558540.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.              

DRSPALL

This section presents the validation and verification results for DRSPALL.  DRSPALL calculates the volume of solid waste subjected to material failure and transport to the surface by a spallings mechanism during an inadvertent drilling intrusion into the WIPP repository.  The code uses either text-formatted input and output files or CAMDAT database files [6] for I/O, and calculates coupled repository and wellbore transient compressible fluid flow before, during, and after the drilling intrusion process.  Mathematical models are included for bit penetration, multi-phase flow in the well, fluid expulsion at the surface, coupling of the well and the repository, repository spalling (tensile) failure associated with fluidized bed transport, and repository internal gas flow. 

Introduction

DRSPALL calculates the spallings release, defined as the mass of waste subject to tensile failure and transport during an inadvertent drilling intrusion into a high-pressure WIPP repository.  Cuttings removed by the direct action of the drill bit, and cavings removed by shear forces of the drilling mud against the drilled cavity wall, are calculated separately in the CUTTINGS_S code (Section 5.5).  DRSPALL uses both text-formatted and CDB input and output files, and calculates coupled repository and wellbore transient compressible fluid flow before, during, and after the drilling intrusion process.  Mathematical models include multi-phase flow in the borehole, fluid expulsion at the surface, coupling of the well and the repository, repository spalling (tensile) failure associated with fluidized bed transport, and repository internal gas flow.  The wellbore model is one-dimensional and linear, and the repository model is one-dimensional, either spherical or cylindrical. 

DRSPALL is based on the theory of one-dimensional, time-dependent compressible isothermal fluid flow.  Somewhat different forms of that theory are used, depending on whether the flow is in the wellbore or the repository, and whether the wellbore currently penetrates the repository.  The wellbore and repository flows are coupled at a specified boundary.  Flow in the well is treated as a compressible, viscous, multi-phase mixture of mud, gas, salt, and possibly waste solids.  Flow in the repository is treated as viscous, compressible single-phase gas flow in a porous solid.  At the cavity forming the repository-wellbore boundary (following penetration), waste solids freed by drilling, tensile failure, and associated fluidization may enter the wellbore flow stream.  Between the well and the repository, flow is treated according to the state of penetration.  The wellbore calculations use time-marching finite differences.  These are part of a single computational loop.  The zone boundaries are fixed and fluid moves through the interfaces by convection.  Quantities are zone-centered and integration is explicit in time.  The repository calculations also use time-marching finite differences that are part of a single computational loop.  The method is implicit with spatial derivatives determined after the time increment. 
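
The distinction between the two time-marching schemes can be illustrated with a generic one-dimensional diffusion-type equation.  The sketch below is not the DRSPALL flow equations or grid; it is only a minimal example of an explicit update (new values computed from current-time neighbors, as in the wellbore scheme) versus an implicit update (spatial derivatives evaluated at the new time level, requiring a linear solve, as in the repository scheme).  The grid size, coefficient, and boundary treatment are arbitrary assumptions.

      import numpy as np

      # Generic 1-D diffusion problem u_t = D u_xx on a fixed grid (illustrative only).
      n, D, dx, dt = 50, 1.0e-6, 0.1, 10.0
      r = D * dt / dx**2
      u = np.zeros(n)
      u[0] = 1.0                           # fixed boundary value

      def step_explicit(u):
          """Explicit update: new interior values from current-time neighbors."""
          v = u.copy()
          v[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
          return v

      def step_implicit(u):
          """Implicit update: solve a linear system at the new time level."""
          A = np.diag(np.full(n, 1.0 + 2.0 * r))
          A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
          A[0, :] = 0.0;  A[0, 0] = 1.0    # hold boundary values fixed
          A[-1, :] = 0.0; A[-1, -1] = 1.0
          return np.linalg.solve(A, u)

      for _ in range(100):
          u = step_implicit(u)             # or step_explicit(u)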

SNL Software Requirements (NP 19-1) require that the following seven primary documents be developed, reviewed, and maintained for the DRSPALL software:  the Software QA Plan, a Requirements Document (RD), Verification and Validation Plan (VVP), User's Manual (UM), Design Document (DD), Implementation Document (ID), and the Validation Document (VD).  Configuration control is maintained through completion of Installation & Checkout (I&C) documentation for all changes made to DRSPALL, and system software and/or system hardware.  In addition, Change Control (CC) and Software Problem Report (SPR) documents are completed, as appropriate.  

DRSPALL was originally developed in Digital Visual FORTRAN Version 6 and was designed to run under Microsoft Windows(TM).  However, for implementation in WIPP and other similar PAs, the code has been ported to the WIPP Alpha-Cluster running OpenVMS.  DRSPALL Version 1.00 was built in September 2003 and was therefore not used in the CCA; the validation of DRSPALL 1.00 was conducted on a Compaq ES40 platform and documented in the Verification and Validation Plan and Validation Document for DRSPALL 1.00 [1, 4].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1.  In September 2004, the Agency concluded that DRSPALL 1.00 met the acceptance criteria specified in the RD/VVP, and thus was validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [7, 13].  DRSPALL 1.00 was used to support the 2004 CRA.

In January 2004, modifications to DRSPALL 1.00 were made that included cosmetic changes, bypassing the bounds checking, and changing the upper bound on the far-field stress from 15E6 to 18E6 to accommodate future initial conditions [8].  DRSPALL was subsequently revised to Version 1.10 [10, 11].  In March 2006, the Agency completed a report documenting the Agency's approval of DRSPALL 1.10 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [12].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [14].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [14].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for DRSPALL 1.10 to ensure that it continued to function correctly [15].

In 2013, SNL migrated the PA software from the VMS Alpha platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [18].  In this case, SNL did not use regression testing, but determined that a full validation of DRSPALL 1.21 was necessary.  SNL noted in their Validation Document ([17], Page 8), "Regression testing is not possible because of small differences in the results.  Therefore, each test will be validated using the acceptance criteria documented in the DRSPALL 1.00 VVP/VD [4]."

The discussion below documents the test methodology, validation test results, and the Agency's conclusions with respect to DRSPALL 1.21 running on the Solaris Blade computers with SunOS 5.11.

Test Methodology

The test set for DRSPALL consists of four test cases that are designed to address the requirements established in Section 2 of the VVP\VD [4].  The test cases are numbered 1, 2, 4, and 5 (i.e., there is no Test Case 3).  Functional testing was performed by running the test cases with the production executable for DRSPALL. (The production executable is used to perform the CRA PA calculations.)  The production executable is generated as described in the DRSPALL Implementation Document [3].  All files used in functional testing are stored in class QE0100 of the DRS library of the Software Configuration Management System (SCMS) accessible from the WIPP Alpha Cluster.  The files include the DRSPALL input and output files, all procedure files to execute DRSPALL, and output files from other numerical solutions used for comparisons.  A single test case requires that DRSPALL be executed one or more times.  Each execution is referred to by DOE as a "case" or "subcase" or "run."  For example, Test Case 5 has six subcases, labeled case 5.1 through 5.7 (5.4 is not defined), and the files for the test case are distinguished by "TC51" through "TC57" in their names. 

In 2006, the initial tests for DRSPALL Version 1.10 consisted of the two sub-cases of Test Case 4 described in the Verification and Validation Plan and Validation Document for DRSPALL 1.00 (VVP/VD) [4].  Test Case 4 was identified in the VVP/VD as suitable for future regression testing.  Validation test results from DRSPALL 1.10 run on the ES40 with OpenVMS 7.3-1 were then compared to results from the regression tests of DRSPALL 1.10 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2.

Once the testing of Test Case 4 was complete, the entire test suite for DRSPALL was executed with DRSPALL Version 1.10 on the Compaq ES40 platform with OpenVMS 7.3-1 [9].  The test suite consists of Test Case 1 (1.1, 1.2), Test Case 2 (2.1, 2.2, 2.3, 2.4, 2.5, 2.6), Test Case 4 (4.1, 4.2) and Test Case 5 (5.1, 5.2, 5.3, 5.5, 5.6, 5.7), where each number in parentheses represents a subcase.  (As noted above there is no Test Case 3 defined for DRSPALL.)  Each subcase is a separate execution of DRSPALL and results in output files that must be included in the regression test.  

The output CAMDAT file is a binary file and cannot be compared with the VMS DIFFERENCE command directly.  The GROPECDB utility [4] is used to write portions of the CAMDAT file as text so that they can be compared.  The output diagnostics file is not part of the code's functionality and is not used in verification, so it is usually not compared.  

In 2013, SNL validated DRSPALL 1.21 [17] using the Functional Requirements and External Interface Requirements listed in the DRSPALL 1.00 Requirements Document [1], just as was done in 2003 on the VMS computers, discussed above.  DRSPALL's validation consists of four test cases (1, 2, 4, and 5) that validate DRSPALL 1.21's ability to adequately perform calculations on the Solaris.  The External Interface Requirements also include three requirements (#R7, #R8, and #R9) verifying that DRSPALL 1.21 can read input and write output results correctly on the Solaris Blade.  Each test case is evaluated with respect to the acceptance criteria listed in the DRSPALL 1.00 VVP/VD [4].

Validation of DRSPALL on the Solaris is performed using the same methodology described above in this section with the exception that file naming conventions and directory formats are modified to comply with the characteristics of the SunOS 5.11 operating system [17]. 

Test Results

DRSPALL reads its run parameters from an input control file (file extension .DRS).  The DRSPALL User's Manual [5] provides instructions on constructing and interpreting the input control file.  Each subcase of the four test cases has its own input control file.  The input control file contains the test subcase number (as "Validation Test Case").  DRSPALL responds to the test case number by creating special output files that contain information used for validation, by initializing conditions (e.g., boundary conditions) specific to the test case, and by limiting the processing to that necessary for validation.  The Design Document for DRSPALL [2] describes any non-standard processing that is dependent on the test case.  Each execution of DRSPALL generates an output CAMDAT file (.CDB) and an output diagnostics file (.DBG).  The DRSPALL User's Manual [5] describes the variables output on the CAMDAT file.  Variables on a CAMDAT file may be extracted in tabular form using the GROPECDB utility or plotted using the BLOTCDB utility.  In addition to the standard output files, a particular test case may generate additional files to be used for validation only.  These validation files are described under the relevant test case section. 

Most test cases compare the results of the DRSPALL execution with those generated by analytical and other numerical solutions.  These solutions are described in detail in the relevant test case section in the VVP\VD [4]. 

The DRSPALL test cases were run with a set of procedure script files.  Each test case has its own procedure file, and each subcase has a procedure file.  The procedure file for the test case (e.g., DRS_TC5.COM) executes all subcases.  It creates a subdirectory for the subcase, fetches the subcase procedure file from the SCMS (software configuration management system), and executes the subcase procedure file, usually by submitting a job to a batch queue.  The procedure file for the subcase (e.g., DRS_TC51.COM) fetches the DRSPALL input file(s), and executes DRSPALL with the appropriate input and output file designations.  The subcase procedure file may also do some simple post-processing on the CAMDAT file, but most post-processing is done manually by the tester.  The test cases are designed to meet the requirements coverage presented in Section 6 of the VVP\VD. 
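
The actual drivers are OpenVMS DCL procedure files that fetch inputs from the SCMS and submit batch jobs; the Python sketch below only mirrors that structure (one working directory per subcase, inputs fetched from a store, the production executable run in place).  The executable path, input store, and file naming are hypothetical.

      import shutil
      import subprocess
      from pathlib import Path

      EXE = Path("./drspall").resolve()          # assumed path to the production executable
      INPUT_STORE = Path("./test_inputs")        # stands in for the SCMS class holding inputs

      def run_subcase(test_case, subcase):
          """Create a subcase directory, fetch its control file, and run the code there."""
          tag = f"TC{test_case}{subcase}"
          workdir = Path(f"tc{test_case}") / f"case_{test_case}.{subcase}"
          workdir.mkdir(parents=True, exist_ok=True)
          shutil.copy(INPUT_STORE / f"{tag}.DRS", workdir)
          subprocess.run([str(EXE), f"{tag}.DRS"], cwd=workdir, check=True)

      # Test Case 5 has subcases 5.1 through 5.7, with 5.4 not defined.
      for sub in (1, 2, 3, 5, 6, 7):
          run_subcase(5, sub)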

All subcases of the four test cases for DRSPALL Version 1.10 were executed on the Compaq ES40 with OpenVMS 7.3-1.  Outputs from the test cases were compared to the corresponding output files from the regression runs of DRSPALL Version 1.10 on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2.  The comparison found that the differences in the output files were limited to code run dates and times, file and directory names, platform names, and execution statistics.

In 2013, SNL executed all subcases, discussed above, of the four test cases for DRSPALL 1.21 on the Solaris Blade with SunOS 5.11.  These results were evaluated against the acceptance criteria to qualify DRSPALL 1.21 on the Solaris platform.  SNL found that DRSPALL 1.21 running on the Solaris Blade with SunOS 5.11 meets the acceptance criteria and requirements listed in the DRSPALL 1.00 Requirements Document [1, 17].

The Agency's Conclusions

The Agency closely examined SNL's qualification of DRSPALL 1.21 to verify that its approach is the same as that used in 2003 and continues to be based on the DRSPALL RD [1] and VVP/VD [4] functional requirements and acceptance criteria.  EPA also evaluated the test cases and subcases executed on the Solaris Blade computers to verify that SNL executed the cases correctly and evaluated the results adequately.  The Agency found the process used to qualify the DRSPALL 1.21 PA code to be adequate.   

The Agency concludes that DRSPALL Version 1.21 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 2003.  "Requirements Document for DRSPALL Version 1.00 (document Version 1.20)."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #531278. 
 WIPP PA (Performance Assessment) 2003.  "Design Document for DRSPALL Version 1.00 (document Version 1.10)."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #529878. 
 WIPP PA (Performance Assessment) 2003.  "Implementation Document for DRSPALL Version 1.00 (document Version 1.20)."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #524781. 
 WIPP PA (Performance Assessment) 2003.  "Verification and Validation Plan and Validation Document for DRSPALL Version 1.00 (document Version 1.10)."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #524782. 
 WIPP PA (Performance Assessment) 2003.  "User's Manual for DRSPALL Version 1.00 (document Version 1.10)."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #524780. 
 Hansen, F.D., Pfeifle, T.W., Lord, D.L. 2003.  "Parameter Justification Report for DRSPALL."  ERMS #531057.  Carlsbad, New Mexico: Sandia National Laboratories. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004.
 WIPP PA (Performance Assessment) 2004.  "Change Control Form for DRSPALL, Version 1.0."  WIPP Central Files. WPO #533161.  
 WIPP PA (Performance Assessment) 2004.  "Installation and Checkout Form for DRSPALL 1.10" (ES40).  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #534209.
 WIPP PA (Performance Assessment) 2003.  "Verification and Validation Plan and Validation Document for DRSPALL Version 1.10."  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #524782. 
 WIPP PA (Performance Assessment) 2004.  "Installation and Checkout Form for DRSPALL 1.10" (ES40 & 8400).  Sandia National Laboratories.  Sandia WIPP Central Files. ERMS #534209.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files. WPO #542680.
 WIPP PA  -  "Regression Testing Report of DRSPALL Version 1.10 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 17, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543773.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files. ERMS# 561457.
 WIPP PA  -  "Validation Document for DRSPALL Version 1.21 dated May 28, 2013."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS# 560046.
 WIPP PA  -  "Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."   Sandia National Laboratories.  Sandia WIPP Central Files. ERMS# 557765.


DTRKMF

This section presents the validation and verification results for DTRKMF 1.00 [14] and the regression testing of DTRKMF 1.01 [11].  DTRKMF (Double precision TRacKing with MODFLOW-2000 [1] file input) is used to help visualize the flow fields computed as part of the PA process.  This visualization is accomplished by abstracting the two-dimensional (2-D) or three-dimensional (3-D) flow fields into one-dimensional (1-D) particle tracks and then mapping simplified transport solutions onto these tracks.  This mapping approach greatly reduces the cost of computing transport solutions and also produces solutions with considerably less numerical dispersion.

Introduction

DTRKMF is used for estimating the migration paths of neutrally buoyant particles through a known porous medium fluid velocity field.  As output, the program provides the spatial location of the particle over time, until the particle reaches a user-defined boundary.  The flow field that is input to DTRKMF is a discretized velocity field - values of velocities for discrete locations within a computational domain.  The DTRKMF software uses linear assumptions to develop a semi-analytical technique to solve a system of ordinary differential equations (ODEs) representing fluid flow in a porous medium.  DTRKMF was originally developed using the Lahey/Fujitsu Fortran 95 Compiler on an i686 PC running the Red Hat Linux release 7.2 Operating System.  DTRKMF was not used for the CCA and DTRKMF 1.00 was used to support the 2004 CRA.
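
For illustration, a scheme of this kind (often called Pollock's method) can be sketched for a single coordinate direction within one cell: with the velocity assumed to vary linearly between the two cell faces, the particle ODE has a closed-form (semi-analytical) solution.  The sketch below is only that generic textbook case, not DTRKMF's actual implementation, and the example values are arbitrary.

      import math

      def advance_along_axis(x1, x2, v1, v2, xp, dt):
          """Semi-analytical position update along one axis of one cell.

          Assumes velocity varies linearly from v1 at face x1 to v2 at face x2
          (Pollock-type scheme); illustrative only.
          """
          A = (v2 - v1) / (x2 - x1)        # velocity gradient within the cell
          vp = v1 + A * (xp - x1)          # particle velocity at its current position
          if abs(A) < 1.0e-12:             # essentially uniform velocity in this cell
              return xp + vp * dt
          return x1 + (vp * math.exp(A * dt) - v1) / A

      # Particle at x = 25 m in a cell spanning 0 - 100 m with face velocities 1 and 2 m/s.
      print(advance_along_axis(0.0, 100.0, 1.0, 2.0, 25.0, 10.0))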

The DTRKMF 1.00 and DTRKMF 1.01 codes are documented as specified in NQA-2a-1990 in the Design Document, User's Manual, the Requirements Document & Verification and Validation Plan, the Validation Document, and the Implementation Document [2 - 5 and 10-13].

In 2014, SNL migrated the WIPP PA code DTRKMF 1.00 from the Linux computer system, "Alice", to DTRKMF 1.01 on the Solaris Blade platform with SunOS 5.11 operating system [10, 11]. SNL used regression testing to verify that DTRKMF 1.01 continues to do performance assessment calculations correctly [12].

Test Methodology

The validation of DTRKMF 1.00 involved two test cases that were designed to test and verify that the DTRKMF 1.00 code correctly tracks particle motion under the following specific conditions:

 A two-dimensional, discretized steady-state velocity field in which the flow directions vary from point to point, and in which over the domain of interest, the magnitudes vary in a non-linear fashion.
 The positions of the origins of each velocity vector correspond to a finite-difference grid that has non-uniform spacing of columns and rows.

In 2014, regression testing of DTRKMF 1.01 on the Solaris Blade platform with SunOS 5.11 was performed using the two test cases previously qualified, noted above, on the Linux "Alice" system [11].  SNL executed the two test cases on the Solaris computer.  The results of the regression calculations were compared using the UNIX "diff" command.  Because of differences in the computer architecture of the Linux and Solaris computers, such as 32-bit words versus 64-bit words, SNL expected minor differences in the results of the regression testing.  SNL states that "For DTRKMF, double precision numeric values must match to six significant digits (with rounding). Any other differences must be explained" [11].
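
Taken literally, that criterion can be checked by rounding each value to six significant digits and comparing the results; the short Python sketch below does exactly that and is offered only as an illustration of the stated criterion, not as a reproduction of SNL's comparison scripts.

      def six_sig(x):
          """Format a value to six significant digits, with rounding."""
          return f"{float(x):.6g}"

      def values_match(a, b):
          """Acceptance check as stated: values agree when rounded to six significant digits."""
          return six_sig(a) == six_sig(b)

      print(values_match(621.81818, 621.81817))   # True: differs only in the eighth digit
      print(values_match(1.00001, 1.00002))       # False: differs in the sixth digit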

SNL also tested a correction to the DTRKMF code made under a Software Problem Report (SPR 14-001) [11].

Test Results

For the two test cases in 2002, the domain of interest is a square 1,000 m x 1,000 m.  The grid contains 400 cells -- 20 rows and 20 columns.  The cell dimensions vary from 100 m x 100 m at the lower left-hand corner to 20 m x 20 m at the upper right-hand corner.  Three particles were released at coordinates (600, 950), (650, 950), and (8.5, 17.5).  The coordinates for the third particle were specified by an i-j index indicating the center of the cell located in the 8th row and the 17th column, which corresponds to x-y coordinates of (621.8181, 946.1717).  The functional requirements were verified by a series of hand calculations, visual inspections of the data, and spreadsheets in the 2002 validation testing.

For the regression testing in 2014, results from the previous validation were compared to the 2014 cases run on the Solaris computer using the UNIX "diff" command.  SNL found expected, acceptable differences, such as file names and other text information.  A few numeric values differed in the eighth significant digit, which was determined to be acceptable under SNL's acceptance criteria [11].  SNL also determined, from the results of the test documented in Section 6.0 of the installation and checkout procedure, that the SPR had been corrected [11].



The Agency's Conclusions

The Agency found that all differences in output are acceptable, namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names and expected minor numerical differences due to the difference in computer architecture. The Agency closely examined SNL's discussion and agrees with their assessment.

The Agency concludes that DTRKMF 1.01 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with the SunOS 5.11 operating system.

References 

 Harbaugh, A.W., R.E. Banta, M.C. Hill, and M.G. McDonald, 2000.  MODFLOW-2000, The U.S. Geological Survey Modular Ground-Water Model - User Guide to Modularization Concepts and the Ground-Water Process.  OFR 00-92, U.S. Geological Survey, Reston, Virginia.
 WIPP PA, 2002.  WIPP PA Design Document for DTRKMF Version 1.00, Document Version 1.00; ERMS #523244, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002a.  WIPP PA Requirements Document For DTRKMF Version 1.00, Document Version 1.00; ERMS #523242, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002b.  WIPP PA Verification & Validation Plan For DTRKMF Version 1.00; Document Version 1.00; ERMS #523243, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002c.  WIPP PA User's Manual For DTRKMF (Version 1.00); Document Version 1.00; ERMS #523246, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2000, WIPP PA Requirements Document For DTRKCDB (Version 1.00), Document Version 1.00; ERMS#515806, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002a.  WIPP PA Requirements Document For DTRKMF Version 1.00, Document Version 1.00; ERMS #523242, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002b, WIPP PA User's Manual For DTRKMF Version 1.00, Document Version 1.00; ERMS #523246, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA, 2002c, WIPP PA Verification and Validation Plan For DTRKCDB (Version 1.00), Document Version 1.00; ERMS #515089, Sandia National Laboratories, Albuquerque, New Mexico.
 WIPP PA  -  "AP-168 - Analysis Plan for Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster Revision 0, dated April 22, 2014."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561953.
  "Installation and Checkout for DTRKMF Version 1.01 Regression Testing for the Solaris Blade, dated August 5, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #562206.
 WIPP PA  -  "Implementation Document for DTRKMF Version 1.01, dated July 14, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #562205.
 WIPP PA  -  "Validation Document for DTRKMF Version 1.00, dated April 2003." Sandia National Laboratories. Sandia WIPP Central Files ERMS #523247.
 "Summary Report on the Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, AP-168, dated December 12, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #563103.

EPAUNI

This section presents the regression test results for EPAUNI.  EPAUNI calculates the number of Environmental Protection Agency (EPA) units per unit volume and the associated volumetric weighting for each contact-handled (CH) transuranic (TRU) waste stream.  An EPA unit is defined as the inventory of a given isotope in curies divided by the EPA release limit for that isotope in curies, as specified in 40 CFR 191, Appendix A, Table I.  EPAUNI is also used to calculate the WIPP-scale average EPA units per unit volume for remotely handled (RH) TRU waste streams destined for disposal at the WIPP facility.  EPA units are calculated only for the key radionuclides that are responsible for 99% of the activity in the waste.  The dominant radionuclides in the CH waste are Am-241, Pu-238, Pu-239, Pu-240, and U-234.  Two parent radionuclides (Pu-241 and Cm-244), which produce Am-241 and Pu-240, respectively, are also accounted for in the CH waste calculations.  The calculations for RH waste include three additional radionuclides: Cs-137, Sr-90, and U-233.
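
The arithmetic behind these quantities is straightforward, and the short Python sketch below illustrates it for a single hypothetical CH waste stream.  The inventories, release limits, and volume shown are made-up illustration values, not actual WIPP inventory data or 40 CFR 191 limits, and the sketch is not the EPAUNI code.

      # Hypothetical inventory (curies), release limits (curies), and stream volume (m^3).
      inventory_ci = {"Am-241": 120.0, "Pu-238": 300.0, "Pu-239": 80.0}
      release_limit_ci = {"Am-241": 100.0, "Pu-238": 100.0, "Pu-239": 100.0}
      stream_volume_m3 = 50.0

      # EPA units: each isotope's inventory in curies divided by its release limit in curies,
      # summed over the key radionuclides in the stream.
      epa_units = sum(inventory_ci[nuc] / release_limit_ci[nuc] for nuc in inventory_ci)
      epa_units_per_m3 = epa_units / stream_volume_m3

      print(f"EPA units = {epa_units:.2f}; per unit volume = {epa_units_per_m3:.3f} per m^3")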

Introduction

EPAUNI 1.14 was used in the WIPP CCA PA.  The code was validated in June 1997 on a DEC Alpha 2100 with OpenVMS 6.1.  Validation of Version 1.14 was accomplished by demonstrating that the results of five test cases met the acceptance criteria defined in the VVP for EPAUNI 1.14 [1].  In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from EPAUNI 1.14 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of EPAUNI 1.14 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of EPAUNI 1.14 on those operating systems [2].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [3, 4].  In March 2003, the code underwent further regression testing to verify its operation on the ES45 platform [8].  The code version changed to Version 1.15 in May 2003 to allow more user control on input and to create logical output names [5, 6].  In July 2003, EPAUNI was updated to Version 1.15A.  In March 2004, the Agency completed a report documenting the Agency's approval of EPAUNI 1.15A on the Compaq Alpha ES45 and 8400 that were both running OpenVMS 7.3-1 [9, 10].  EPAUNI 1.15A was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [11].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [11].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for EPAUNI 1.15A to ensure that it continued to function correctly.

In 2013, the SNL WIPP PA software was migrated from the VMS Alpha platform on the Compaq cluster with the OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel-based processors running the SunOS 5.11 operating system [13, 15].  SNL performed regression testing to verify that the EPAUNI 1.16 software continues to perform PA calculations properly [14].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to EPAUNI 1.16 running on the Solaris Blade computers with SunOS 5.11. 

Test Methodology

The tests for this code comprise the five test cases described in the Verification and Validation Plan for EPAUNI (VVP) [1].  SNL modified Test Cases 1 and 2 (called #1_DB and #2_DB ([14] Section 5.2)) to validate the following new External Interface Requirements: R.6 (the parameters BOREHOLE:WUF and REFCON:YRSEC can be successfully read from the Parameter Database), R.7 (proper database information is provided to EPAUNI), and R.8 (data entered into and retrieved from the database are logged properly).  These new requirements are evaluated by examining the output files from Test Cases 1 and 2 [14].  The value of REFCON:YRSEC in the Parameter Database differs slightly from the value hardcoded in EPAUNI; therefore, SNL expected calculations that use this value to differ slightly from those of EPAUNI 1.15C [14].

Regression test results from EPAUNI 1.16 run on the Solaris Blade with SunOS 5.11 were compared to results from the validation tests of EPAUNI 1.15C run on a Compaq cluster with OpenVMS 8.2 [14].  The regression test methodology uses the UNIX diff command to compare output from the simulations.  

Test Results

For the five original test cases described above, only very minor differences (e.g., spacing, version number) were found.  The modified test cases show minor value changes in the output files [14].  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names for the original test cases, and minor value changes in the modified cases.

The Agency's Conclusions

The Agency found that all differences in output for the original test cases are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  SNL reviewed and reasonably explained the minor value differences for the modified test cases.  The Agency concludes that EPAUNI 1.16 meets the acceptance criteria in the RD/VVP, and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1997.  "Verification and Validation Plan for EPAUNI Version 1.14."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #44889. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA (Performance Assessment) 2003.  "Design Document for EPAUNI Version 1.15."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #529567.
 WIPP PA (Performance Assessment) 2003.  "User's Manual for EPAUNI Version 1.15."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #529570.
 WIPP PA (Performance Assessment) 2003.  "Validation Document for EPAUNI Version 1.15."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #529568.
 WIPP PA (Performance Assessment) 2003.  "Addenda to Validation Document for EPAUNI Version 1.15."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #530204
 WIPP PA (Performance Assessment) 2003.  "Analysis Report for EPAUNI Version 1.15A Regression Testing for the Compaq ES45 and 8400 Platforms."  Sandia National Laboratories. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of EPAUNI Version 1.15A on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 19, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543779.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162 dated December 23, 2013."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457
 WIPP PA  -  "Addendum to Requirements Document, User's Manual, Verification and Validation Plan and Validation Document for EPAUNI Version 1.16 dated March 28, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559075
 WIPP PA  -  "Analysis Plan for Migration of the Performance Assessment Codes to the Sun Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.


EQ3/6

This section presents the qualification results for the EQ3/6 Version 8.0a geochemical modeling computer code package.  EQ3/6 Version 8.0a extends the qualification of the code so that it can be used to replace the FMT code in the future.  For a detailed description of EQ3/6 and its qualification, see Section 1.0 of the VVP/VD [1].  

Introduction

EQ3/6 is a software package for modeling geochemical problems involving fluid-mineral interactions and/or solution-mineral equilibria in aqueous systems.  The software package has a speciation-solubility code, EQ3NR, and a reaction path modeling code, EQ6.  Supporting software includes the data file preprocessor EQPT and the conversion programs XCON3 and XCON6.  Supporting databases include a number of thermodynamic data files that use the Davies and B-dot equations or Pitzer equations for activity coefficient models [1].

The qualification of EQ3/6 Version 8.0a extends the qualification of EQ3/6 so that it can be used in place of FMT in future WIPP PA applications.  FMT has three limitations [1]: first, FMT lacks a proper front end for initiating calculations; second, the code has a phase selection algorithm that is prone to failure; and third, the FMT supporting data file is inordinately complex and difficult to safely modify. 

Numerous changes were made in EQ3/6 Version 8.0a ([1] pages 13 to 18) and its supporting files and databases to migrate it to perform WIPP actinide chemistry calculations (for details of this process, please see the VVP/VD [1]) and to replace FMT.  The focus of this code review is to verify that EQ3/6 is adequately qualified.  EPA will document its detailed technical review of the EQ3/6 code in a separate report.  At this time EPA does not approve this code for PA calculations.  

Test Methodology

Testing started by first executing the chemical codes and then using the output files to assemble the Excel files for result comparison.  Nineteen test cases for EQ3/6 were executed on a Dell Precision (T5400) PC with the Windows operating system [1].  As noted in Section 5.0 of the VVP/VD [1], "All of the unit test problems have some degree of WIPP relevance."  Section 5.0 also notes that many of the tests were taken from previous FMT runs.  These nineteen test cases are used to verify that EQ3/6 Version 8.0a satisfies nine functional requirements, eight external interface requirements that require a number of input and output files for the code to function correctly, and a new requirement testing the interfaces with a translated FMT database, as discussed in Section 2.1 of the VVP/VD [1].

Regression tests were used where possible, and Microsoft Excel was used for these comparisons [1].  SNL established acceptance criteria for these test cases.  The relative percent difference between the EQ3/6 and FMT output values was calculated, and the absolute difference was used in specific cases ([1] Section 1.0).  The VVP/VD specifies a first acceptance criterion of 1% for "linear" quantities and 0.004 for logarithmic quantities, with 0.01 specifically for pH. 

However, as noted in the text ([1] Section 1.0), these criteria had to be modified because of differences in EQ3/6 and FMT.  Test Cases 1 to 14 compared results between FMT and EQ3/6 Version 8.0a calculations and used these acceptance criteria.

Test Cases 15 to 19 compared results from EQ3/6 8.0 and EQ3/6 8.0a.  The comparisons used a second set of acceptance criteria of < 0.005% for linear values and <0.001% for logarithmic values.
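
Read together, the two sets of criteria amount to a small set of threshold checks, sketched below in Python.  The thresholds are taken from the text above; the decision of which quantities are compared by relative percent difference and which by absolute difference follows a plausible reading of the VVP/VD and is an assumption here, as are the example values, and the sketch does not reflect the later modifications to the criteria discussed above.

      def rel_pct_diff(a, b):
          """Relative percent difference of a with respect to b."""
          return abs(a - b) / abs(b) * 100.0

      def passes_first_set(eq36, fmt, kind):
          """First set (FMT vs. EQ3/6 8.0a): 1% relative for linear quantities,
          0.004 absolute for logarithmic quantities, 0.01 absolute for pH."""
          if kind == "linear":
              return rel_pct_diff(eq36, fmt) <= 1.0
          if kind == "pH":
              return abs(eq36 - fmt) <= 0.01
          return abs(eq36 - fmt) <= 0.004       # other logarithmic quantities

      def passes_second_set(v80a, v80, kind):
          """Second set (EQ3/6 8.0 vs. 8.0a): <0.005% linear, <0.001% logarithmic."""
          limit = 0.005 if kind == "linear" else 0.001
          return rel_pct_diff(v80a, v80) < limit

      print(passes_first_set(1.005, 1.000, "linear"))   # True:  0.5% difference
      print(passes_first_set(8.515, 8.500, "pH"))       # False: 0.015 exceeds 0.01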

The test cases fall into the three types [1]: 

   Type 1, where the initial solution is pure water, and is by definition charge balanced.
   
   Type 2, where the initial aqueous solution composition is defined in a manner that guarantees charge balance, and 
   
   Type 3, where the initial aqueous solution composition is not charge balanced.

Test Results

Test results are varied, as discussed in the VVP/VD [1].  Type 1 Test Cases 1, 3, 6, 7, 8, and 9 showed excellent results when compared against the first set of acceptance criteria.  Type 2 Test Cases 1 and 2 showed excellent results when compared against the acceptance criteria.  However, Test Case 5A showed improved results after EQ3/6 was rerun using the J(x) approximation.  Test Case 5B did not show as good a result.  SNL explained this difference as caused by the "extra water in the FMT run."  

Type 3 problems include Test Cases 4, 10, 11, 12, 13, and 14.  The text notes that these cases are most affected by the FMT front-end issues.  These test cases showed not-very-good to fair ([1] page 173) comparability between the codes.  The qualification of these aspects of the FMT-to-EQ3/6 Version 8.0a migration appears less convincing, and SNL's explanations are insufficient; the adequacy of this part of the qualification is therefore questioned.  It appears that the input data and procedure were manipulated to make the two sets of results compare better ([1] page 173). 

Test Cases 15, 16, 17, 18, and 19 compared EQ3/6 Version 8.0 and EQ3/6 Version 8.0a; the results were adequately comparable to the second set of acceptance criteria. 

The Agency's Conclusion

Based on the results of the Type 3 problems (Test Cases 4, 10, 11, 12, 13, and 14), EPA cannot approve the use of EQ3/6 Version 8.0a for WIPP PA calculations.  EPA further suggests that EQ3/6 Version 8.0a be qualified for the WIPP PA using its own requirements and acceptance criteria specific to the code's usage to support performance assessment calculations.

References

[1]  WIPP PA  -  "Verification and Validation Plan/Validation Document for EQ3/6 Version 8.0a for Actinide Chemistry, Revision 1, dated May 12, 2011." Sandia National Laboratories. Sandia WIPP Central Files ERMS #555358.

[2]  "Software User's Manual EQ3/6, Version 8.0, dated January 2003." By Thomas Wolery and Russell Jarek. Sandia National Laboratories. Sandia WIPP Central Files ERMS #548926.

[3]  WIPP PA  -  "Requirements Document for EQ3/6 Version 8.0a for Actinide Chemistry, dated July 2009." Sandia National Laboratories. Sandia WIPP Central Files ERMS #550238.


FMT  

Please note: FMT is being replaced by EQ3/6. DOE migrated FMT to the Solaris platform to sustain chemical computation capability until EQ3/6 is adequately qualified.  

This section presents the regression test results for FMT.  FMT calculates the chemical equilibrium in high-ionic-strength geochemical systems at 25°C.  FMT also predicts the solubility behavior of Am(III), Th(IV), and Np(V) in brines such as those found in the Castile, Rustler, and Salado Formations near the WIPP. 

Introduction

FMT 2.00 was validated in November 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of nine test cases met the acceptance criteria defined in the RD/VVP for FMT 2.00 [3, 4].  FMT was not used to support the CCA.

In August 1996, FMT was revised to Version 2.10 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [5].  Test cases identical to the nine test cases for the validation of FMT 2.00 were run.  The acceptance criteria for these test cases were satisfied through regression testing, which showed that the output from FMT 2.10 was identical to the output of the FMT 2.00 validation tests. 

In September 1996, FMT was revised to Version 2.20 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [6].  It was determined at that time that the only test cases needed for validation were Test Cases 1, 2, 6, and 7.  Test cases identical to these four test cases for the validation of FMT 2.10 were run.  The acceptance criteria for these test cases were satisfied through regression testing, which found that the output from FMT 2.20 was identical to the output of the FMT 2.10 validation tests.  Test Case 1 also underwent some additional evaluation to ensure it met the acceptance criteria defined in the RD/VVP for FMT 2.20 [7]. 

In January 1997, FMT was revised to Version 2.30 and was validated on a DEC Alpha 2100 with OpenVMS 7.1 [8].  The four test cases previously identified (Test Cases 1, 2, 6, and 7) were re-named as Test Cases 1 through 4, and three additional test cases (labeled Test Cases 5, 6, and 7) were generated.  Test Cases 1 through 4, identical to the four test cases for the validation of FMT 2.20, were run.  The acceptance criteria for Test Cases 1 through 4 were satisfied through regression testing, which found that the output from FMT 2.30 was identical to the output of the FMT 2.20 validation tests.  Test Cases 5, 6, and 7 were validated by demonstrating that the results of the three test cases met the acceptance criteria defined in the RD/VVP for FMT 2.30 [9]. 

In October 1998, FMT was revised to Version 2.40 and was validated on a DEC Alpha 2100 with OpenVMS 7.2 [2].  In addition to the seven test cases from the previous validation, one additional test case was added (Test Case 8).  The code was validated by demonstrating the output of the eight FMT 2.40 test cases met the acceptance criteria defined in the RD/VVP for FMT 2.40 [1]. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from FMT 2.40 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of FMT 2.40 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of FMT 2.40 on those operating systems [11].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [10, 12, 13].  In March 2004, the Agency completed a report documenting the Agency's approval of FMT 2.40 on the Compaq Alpha ES45 and 8400 running OpenVMS 7.3-1 [14].  FMT 2.40 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [15].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [15].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for FMT 2.40 to ensure that it continued to function correctly.

In 2013, WIPP PA software was migrated from the VMS Alpha platform on the Compaq cluster with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [17, 19].  SNL performed regression testing to verify that the FMT 2.41 software continues to perform PA calculations properly [18].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to FMT 2.41 running on the Solaris Blade with SunOS 5.11. 

Test Methodology

The tests for this code comprised the eight test cases described in the Requirements Document & Verification and Validation Plan for FMT Version 2.40 (RD/VVP) [1].  Regression test results from FMT 2.41, run on the Solaris Blade with SunOS 5.11, were compared to results from the validation tests of FMT 2.40, run on the Compaq cluster with OpenVMS 8.2 [18].  FMT 2.40 results were transferred to the Solaris, converted as needed, and compared to the FMT 2.41 results using the Python utility code Regr_Diff.py ([18] Section 4.0). SNL commented that differences are expected [18]: "AP-162 states that numeric differences are expected, due to the change to double precision floating point variables in the code and differences in the two platforms."

All eight test cases had some significant numeric differences that required further review and explanation [1]. SNL noted: "The differences in very small numbers may be due to numerical differences between VMS and Solaris. Differences in the Descriptor will be deemed acceptable if both the VMS and Solaris values have an absolute value less than 1E-6. The maximum absolute value is 2E-12 for the Descriptor values, so the Descriptor difference in this test case is acceptable" [18].
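
The quoted rule can be illustrated with the short Python sketch below.  This is not SNL's Regr_Diff.py utility; the function name and values are illustrative assumptions based only on the acceptance rule quoted above.

    # Illustrative check of the quoted rule: a Descriptor difference is
    # acceptable when both the VMS and Solaris values have absolute values
    # below 1E-6.
    def descriptor_difference_acceptable(vms_value, solaris_value, limit=1e-6):
        return abs(vms_value) < limit and abs(solaris_value) < limit

    # The maximum absolute Descriptor value reported was 2E-12, well below
    # the 1E-6 limit, so a difference of this size is acceptable.
    print(descriptor_difference_acceptable(2e-12, -1.5e-12))   # True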

Test Results

The results of the tests described above show that only very minor differences (e.g., spacing, version numbers, and some numeric differences) were found for the eight test cases.  The differences in the output are limited to code run date and time, platform names, system version numbers, the directory, file names, and expected numeric differences.  All test cases for FMT 2.41 regression testing showed some numeric differences; these differences are within the acceptance criteria set for the regression testing of the FMT code and were reasonably explained by SNL.  The comparison therefore found that all differences in the output are acceptable. 

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and some numeric differences.  The Agency concludes that FMT 2.41 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

[1]  WIPP PA (Performance Assessment) 1998.  "Requirements Document & Verification and Validation Plan for FMT Version 2.40."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #51305. 
[2]  WIPP PA (Performance Assessment) 1998.  "Validation Document for FMT Version 2.40."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #51587. 
[3]  WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for FMT Version 2.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28118. 
[4]  WIPP PA (Performance Assessment) 1995.  "Validation Document for FMT Version 2.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28121. 
[5]  WIPP PA (Performance Assessment) 1996.  "FMT 2.1 Regression Testing Results."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #41011.
[6]  WIPP PA (Performance Assessment) 1996.  "Validation Document for FMT Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #41521. 
[7]  WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for FMT Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42044. 
[8]  WIPP PA (Performance Assessment) 1997.  "Validation Document for FMT Version 2.30."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43038. 
[9]  WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for FMT Version 2.30."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43035. 
[10]  Memorandum dated February 4, 2003.  "Additional Evaluation of FMT Test Case 1."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #525279. 
[11]  EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
[12]  WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
[13]  WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
[14]  EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
[15]  WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
[16]  WIPP PA  -  "Regression Testing Report of FMT Version 2.40 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 19, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543779.
[17]  WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
[18]  WIPP PA  -  "Installation and Checkout for FMT Version 2.41 Regression Test for the Solaris Blade dated November 12, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #560969. 
[19]  WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

GENMESH

This section presents the regression test results for GENMESH.  GENMESH 6.08 constructs a right-hand, Cartesian, rectangular finite-difference grid in one, two, or three dimensions as defined by a user input file.  In addition to establishing mesh connectivity and node coordinates, the program also sets material regions, geometry flags for node or element boundary conditions, and element attributes associated with the cell size.  In the WIPP PA application, GENMESH is the first code module run for setting up a computational model.  GENMESH is used to establish the computational grid or mesh containing nodes, elements, and material property information.  The output from GENMESH is the preliminary CAMDAT (.CDB) binary file.  These CAMDAT files are the essence of the WIPP PA system, because all PA codes read from and write to these CAMDAT files. 

Introduction

GENMESH Version 6.07ZO was validated in August 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of nine test cases met the acceptance criteria defined in the RD/VVP for GENMESH 6.07ZO [2].  In January 1996, GENMESH was revised to Version 6.08 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [3, 4].  Test cases identical to the test cases for the validation of GENMESH 6.07ZO were run.  The acceptance criteria for these test cases were satisfied by showing that the output from GENMESH 6.08 was identical to the output of the GENMESH 6.07ZO validation tests.  GENMESH 6.08 was used in the CCA.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from GENMESH 6.08 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of GENMESH 6.08 run on a DEC Alpha 2100 with OpenVMS.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of GENMESH 6.08 on those operating systems [6].

In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [7, 8].  In March 2004, the Agency completed a report documenting the Agency's approval of GENMESH 6.08 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [9].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [10].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [10].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for GENMESH 6.08 to ensure that it continued to function correctly.  GENMESH 6.08 was used to support the 2004 CRA.

In 2013, the WIPP PA software was migrated from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [12, 14].  

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to GENMESH 6.09 running on the Solaris Blade with SunOS 5.11 [13]. 

Test Methodology

The tests for this code comprised the nine test cases described in the Requirements Document & Verification and Validation Plan for GENMESH Version 6.08 (RD/VVP) [1].  The 2012 regression test results from GENMESH 6.09 run on the Solaris Blade with SunOS 5.11 were compared to results from the validation tests of GENMESH 6.08 run on a Compaq ES40, ES45, and ES47 with OpenVMS 8.2 [13].  GENMESH 6.08 results were transferred to the Solaris, converted as needed, and compared to the GENMESH 6.09 results using the UNIX diff command [13].

Test Results

All test cases showed no differences except Test Cases 5 and 9 [13], which showed minor numerical differences.  The comparison found that all differences in the output are acceptable; namely, the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numerical differences.  
 
The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numerical differences. The Agency concludes that GENMESH 6.09 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.
 
References 

[1]  WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for GENMESH Version 6.08."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #40688. 
[2]  WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for GENMESH Version 6.07ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23334. 
[3]  WIPP PA (Performance Assessment) 1996.  "GENMESH Version 6.08, Software Installation and Checkout Form."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30696. 
[4]  WIPP PA (Performance Assessment) 1997.  "Inspection of GENMESH, Version 6.08, [(w/att) Attachment 1 & 2]."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45466. 
[5]  WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497.
[6]  EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
[7]  WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
[8]  WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
[9]  EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
[10]  WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
[11]  WIPP PA  -  "Regression Testing Report of GENMESH 6.08 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543597.
[12]  WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
[13]  WIPP PA  -  "Installation and Checkout for GENMESH Version 6.09 Regression Testing for Solaris Blade, dated October 23, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557800.
[14]  WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

GROPECDB

This section presents the regression test results for GROPECDB.  GROPECDB allows a user to interactively look at the contents of a CAMDAT database (CDB) file.  The user enters commands either interactively from the keyboard or from an input command file.  The output can go either to the screen or to a specified file.  GROPECDB was used to convert binary CAMDAT database files to ASCII as part of the validation process for several WIPP PA codes at the time of the CCA. 

Introduction

GROPECDB 2.12 was validated in June 1996, running on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of seven test cases met the acceptance criteria defined in the RD/VVP for GROPECDB 2.12 [1, 2].  GROPECDB 2.12 was validated for the CCA and has not been revised since this validation.  In July 1997, a comparison of GROPECDB 2.10 output results to GROPECDB 2.12 (validated in June 1996) output results was performed [3].  DOE's evaluation concluded that the results were the same, with the exception of run time information (run date, directory names, file version numbers, and history comments).

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from GROPECDB 2.12 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of GROPECDB 2.12 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of GROPECDB 2.12 on those operating systems [4].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [5, 6].  In March 2004, the Agency completed a report documenting the Agency's approval of GROPECDB 2.12 on the Compaq Alpha ES45 and 8400 running OpenVMS 7.3-1 [7].  GROPECDB 2.12 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [8].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [8].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for GROPECDB 2.12 to ensure that it continued to function correctly.

In 2013, the WIPP PA software was migrated from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [10, 12].  

The discussion below documents the test methodology, 2012 validation test results, and the Agency's conclusions with respect to GROPECDB 2.13 running on the Solaris Blade with SunOS 5.11 [11].
 
Test Methodology

In 2012, SNL performed complete validation testing against requirements on eight test cases to verify that GROPECDB 2.13 performs the WIPP PA calculations properly on the Solaris Blade with SunOS 5.11 [11].  Each test case lists requirements tested and acceptance criteria in Section 4.0 of the VVP/VD [11]. The approach used by SNL is similar to the approach used in 1996 to validate the code originally.

Test Results

Based on the results of the validation tests described above, SNL verified that GROPECDB 2.13 satisfies the acceptance criteria and that all requirements were adequately tested.

The Agency's Conclusions

The Agency closely evaluated SNL's code validation process in Section 4.0 of the VVP/VD and found GROPECDB 2.13 is adequately validated [11].  The Agency concludes that GROPECDB 2.13 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

[1]  WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37494. 
[2]  WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
[3]  CMS Inspection of GROPECDB, Version 2.12, WPO #46352, July 8, 1997. 
[4]  EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
[5]  WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
[6]  WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
[7]  EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
[8]  WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
[9]  WIPP PA  -  "Regression Testing Report of GROPECDB 2.2 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 30, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543461.
[10]  WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
[11]  WIPP PA  -  "Verification and Validation Plan/Validation Documents for GROPECDB Version 2.13, dated August 28, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557790.
[12]  WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

ICSET

This section presents the regression test results for ICSET.  ICSET is a program that sets initial conditions in a Performance Assessment Computational Data Base (CDB) file in 1-D, 2-D, or 3-D.  The ICSET array variables are history, global, nodal, and element variable values at the first time step (NSTEP=1) in a .CDB file.  Both analysis array names and values are obtained from a user input file.  In addition, any nodal or element variable (existing or new) can be linearly interpolated by specifying interpolation tables in the ICSET input text file. 

Introduction

ICSET 2.21, running on the OpenVMS 6.1 operating system, was validated in September 1995 [1, 2].  ICSET 2.21 was used to support the CCA and has not been revised since this validation, but in 1996, a Change Control Form [3] was approved, revising the software version from 2.21 to 2.22 when new libraries were linked.  ICSET 2.22 remains the current version of this software module.  In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from ICSET 2.22 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of ICSET 2.22 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of ICSET 2.22 on those operating systems [5].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [6, 7].  In March 2004, the Agency completed a report documenting the Agency's approval of the ICSET code on the Compaq Alpha ES45 and 8400, which were both running OpenVMS 7.3-1.  ICSET 2.22 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [9].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [9].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for ICSET 2.22 code to ensure that it continued to function correctly.

In 2013, the WIPP PA software was migrated from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [11, 13].

The discussion below documents the test methodology, 2012 regression test results, and the Agency's conclusions with respect to ICSET 2.23 code running on the Solaris Blade with SunOS 5.11.

Test Methodology

The tests for this code comprised the six test cases described in the Requirements Document & Verification and Validation Plan for ICSET Version 2.21 (RD/VVP) [2].  Regression test results from ICSET 2.22 run on the Compaq ES40 with OpenVMS 7.3-1 were compared using the VMS DIFFERENCE command to results from the validation tests of ICSET 2.22 run on a Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 [10].

CAMDAT database files (CDB) are produced in each of the six ICSET test cases.  The output CDB files are converted from a binary, CDB, file to an ASCII file for comparison during the validation process.  In the previous ICSET 2.22 validation, the CDB files were converted using GROPE 2.10.  GROPE has since been revised to Version 2.12.  GROPE 2.12 was validated in June 1996 on a DEC Alpha 2100 with OpenVMS 6.1 [4].  GROPE 2.12 has been validated on a Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 as part of the hardware regression test (see Section 5.11).  GROPE 2.12 is used to convert the CDB output files from RELATE 1.43 (see Section 5.26) in OpenVMS 8.2. 

In 2012, SNL used ICSET 2.23 to run six test cases on the Solaris Blade with SunOS 5.11, which were compared to VMS results for ICSET 2.22 run on the Compaq ES47 with OpenVMS 8.2, as described in Section 1.0 of the installation and checkout procedures [12]. ICSET 2.22 result files were transferred to the Solaris and converted to Solaris file format as required in Section 4.0.  The UNIX diff command was used to compare test results.  SNL noted that numeric differences were expected due to the change to double precision and platform differences.

Test Cases 3 and 4 showed minor numerical changes in one variable.  SNL performed a review and reasonably explained these differences in Sections 5.3 and 5.4 of the installation and checkout procedures [12].

Test Results

The results of the tests described above are that only very minor differences (e.g., spacing, version number, numeric differences) were found for the six test cases.  The comparison showed that all differences in the output were generally limited to code run date and time, platform names, system version numbers, the directory, and file names.  Two cases exhibited differences which were adequately explained [12].

The Agency's Conclusions

The Agency concludes that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and a few minor numerical differences.  The Agency concludes that ICSET 2.23 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

[1]  WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for ICSET Version 2.21ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23623. 
[2]  WIPP PA (Performance Assessment) 1995.  "Validation Document for ICSET Version 2.21ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23620. 
[3]  WIPP PA (Performance Assessment) 1995.  "Change Control Form for ICSET 2.22."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #36482. 
[4]  WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
[5]  EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
[6]  WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
[7]  WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
[8]  EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
[9]  WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
[10]  WIPP PA  -  "Regression Testing Report of ICSET 2.22 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 30, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543593.
[11]  WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
[12]  WIPP PA  -  "Installation and Checkout for ICSET Version 2.23 Regression Test for the Solaris Blade, dated October 24, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557805.
[13]  WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.
      

JAS3D

This section presents the qualification results for the JAS3D Version 2.4.C-WIPP finite element computer code, which performs the structural calculations used for the WIPP performance assessment. JAS3D replaces the SANTOS computer code. See the VVP/VD, Section 1.0 [1], for a more detailed discussion of the JAS3D code.   

Introduction 

JAS3D is a three-dimensional finite element program designed to solve large quasi-static nonlinear mechanics problems [1]. JAS3D was developed at SNL as part of the SEACAS environment. SEACAS is a modular system based on a common binary data file format, called EXODUS II, which stores the finite element mesh description and computed results. JAS3D runs on the HP Proliant DL360 G5 computer server with the Red Hat Enterprise Linux 5.3 operating system [1]. 

JAS3D has the capability to apply a variety of mechanical time-dependent loads to a model.  Arbitrary contacting surfaces between bodies can be tied together, slide with or without friction, and be allowed to open and close.  JAS3D has mechanisms to calculate coupled response in conjunction with other physics codes or subroutines.  These mechanisms include heat generation, thermal-hydrological coupling, gas generation, gravity effects, and pore pressure. JAS3D also has many other characteristics discussed in more detail in the VVP/VD [1] and its User's Manual [3].

JAS3D qualification testing also includes WIPP-specific characteristics [1].  These include the creep behavior of rock salt and a model describing the deformation of crushed salt.

JAS3D is supported by a suite of SEACAS utility codes that use the EXODUS binary file format including the following [1]:

FASTQ	generates a 2D mesh,
GEN3DII	generates a 3D mesh,
GREPOS	repositions a 3D mesh,
GJOIN2	combines two or more 3D meshes into a single 3D mesh,
GROPE	post-processing code used to examine EXODUS files,
ALGEBRA2	post-processing code used to manipulate EXODUS files,
BLOTII2	post-processing graphics program used to plot EXODUS files,
APREPRO	used to simplify the preparation of parameterized input files,
NUMBERS	reads and stores data from EXODUS files for additional analysis.

Each of the utility codes has been separately qualified by SNL.  Also, these codes are implicitly verified by their usage in the validation testing of JAS3D.

Test Methodology 

The VVP/VD, Section 2.1 lists 25 functional requirements and 6 external interface requirements that were tested [1].  These requirements are tested using 13 test cases (Section 6 of the VVP/VD). SNL noted in the text: "A traditional regression test is impractical since many small numerical changes are expected" [1]. Therefore, JAS3D is verified by several techniques, which include comparing results to analytical solutions (Test Cases 2 to 11); comparisons with independent codes with similar capabilities, such as FLAC3D (Test Cases 1, 2, 12, and 13); and comparisons with other WIPP-specific results (Test Case 12 is compared with SANTOS and SPECTROM-32 results, and Test Case 13 is compared with SPECTROM-32 results) (see Table 6.0-3 of the VVP/VD) [1]. Please see the VVP/VD and User's Manual [1, 3] for more complete details of JAS3D testing.

Acceptance criteria are presented for each of the thirteen test cases in Section 6 of the VVP/VD [1]. Many of the results are examined by visual inspection of graphical output; therefore, some judgment is applied. EPA closely examined the acceptance criteria and how they are applied for each test case and found the criteria and approach to be reasonable.

Test Results 

Test Cases 2 through 11 were compared to analytical solutions and showed excellent results ([1] Sections 6.2 to 6.11). Test Case 1 is a comparison to results from the FLAC3D code and showed very good to excellent agreement ([1] Section 6.1).  Test Case 12 compares JAS3D results to SANTOS and SPECTROM-32 results for a WIPP-like clay seam G problem; in spite of the differences in characteristics of the three codes, the results are very good ([1] Section 6.12). For Test Case 13, which compares JAS3D and SPECTROM-32 modeling results for a WIPP-like crushed salt problem, the results are excellent ([1] Section 6.13).
   
The Agency's Conclusion 

EPA closely examined the results of the JAS3D qualification testing and found the results to be generally excellent.  Therefore, the Agency concludes that JAS3D 2.4.C-WIPP meets the acceptance criteria of the RD [4] and VVP/VD [1] and is validated for WIPP PA use on the HP Proliant DL360 G5 with Red Hat Enterprise Linux Server V5 operating system. 

References 

[1]  WIPP PA  -  "Verification and Validation Plan/Validation Document for JAS3D Version 2.4.C-WIPP, dated December 3, 2009." Sandia National Laboratories. Sandia WIPP Central Files ERMS #545606.

[2]  WIPP PA  -  "Implementation Document for JAS3D Version 2.4.C-WIPP, dated December 3, 2009." Sandia National Laboratories. Sandia WIPP Central Files ERMS #545608.

[3]  WIPP PA  -  "User's Manual for JAS3D Version 2.4.C-WIPP, dated February 11, 2010." Sandia National Laboratories. Sandia WIPP Central Files ERMS #545609.

[4]  WIPP PA  -  "Requirements Document for JAS3D Version 2.4.C-WIPP, dated November 23, 2009." Sandia National Laboratories. Sandia WIPP Central Files ERMS #552414.

[5]  WIPP PA  -  "Software Quality Assurance Plan for JAS3D Version 2.4.C-WIPP, dated September, 2009." Sandia National Laboratories. Sandia WIPP Central Files ERMS #545607.





LHS

This section presents the regression test results for LHS.  The LHS program samples distributions of input parameters using either normal Monte Carlo sampling or the more efficient Latin Hypercube Sampling.  LHS permits correlations (restricted pairings) between parameters.  Latin Hypercube Sampling reduces the minimum number of sample vectors (sv) required to about 4/3 * na, where na is the number of varying parameters.  Only Latin Hypercube Sampling is used for WIPP PA parameters.
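
As a minimal, illustrative sketch of the sampling idea described above (not the LHS code itself), the Python fragment below stratifies each parameter's unit range into equal-probability bins, draws one value per bin, and shuffles the bins independently for each parameter.  The function name, seed, and distribution used here are assumptions for the example only.

    import numpy as np
    from scipy.stats import norm

    def latin_hypercube(n_vectors, n_params, seed=None):
        """Return an n_vectors x n_params Latin hypercube sample on (0, 1)."""
        rng = np.random.default_rng(seed)
        # one random point inside each of the n_vectors equal-probability strata
        u = (np.arange(n_vectors)[:, None] + rng.random((n_vectors, n_params))) / n_vectors
        # shuffle the strata independently for each parameter (column)
        for j in range(n_params):
            rng.shuffle(u[:, j])
        return u

    # Example: na = 3 varying parameters sampled with about 4/3 * na = 4 vectors,
    # then mapped to a standard normal distribution via the inverse CDF.
    na = 3
    n_sv = int(np.ceil(4 * na / 3))
    values = norm.ppf(latin_hypercube(n_sv, na, seed=42))
    print(values)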

Introduction

LHS Version 2.32ZO was validated in August 1996 on a DEC Alpha 2100 with OpenVMS 6.1 using 10 test cases by demonstrating that the results of each test case met the acceptance criteria defined in the RD/VVP for LHS 2.32ZO [2, 3]. LHS 2.32ZO was used to support the CCA.

In March 1996, LHS was revised to Version 2.41 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [5].  Test cases identical to the test cases for the validation of LHS 2.32ZO were run.  The acceptance criteria for these test cases were satisfied by showing that the output from LHS 2.41 was identical to the output of the LHS 2.32ZO validation tests [1, 4].  LHS 2.41 was used in the WIPP CCA.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from LHS 2.41 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of LHS 2.41 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's approval with respect to the migration and verification of LHS 2.41 on those operating systems [6].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [7, 8].  In 2004, the Agency concluded that LHS 2.41 met the acceptance criteria specified in the RD/VVP [1], and thus was considered validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [9, 13]. LHS 2.41 was used to support the 2004 CRA.

In January 2005, the LHS code was revised in order to accurately describe the normal, lognormal, student, and logstudent distributions [10].  Version 2.42 of the code was subsequently validated to run on the Compaq ES40 with OpenVMS 7.3-1 [11].  Following the validation of the code on the ES40, it was regression tested to run on the ES45 [12].  The discussion below documents the test methodology and results, and the Agency's conclusions with respect to LHS 2.42.  In March 2006, the Agency completed a report documenting the Agency's approval of LHS 2.42 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [14].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [15].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [15].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for LHS 2.42 to ensure that it continued to function correctly [16].

In 2013, SNL migrated WIPP PA software from the VMS platform on the Compaq cluster of computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [17, 19].  SNL performed qualification testing and regression testing to ensure that LHS 2.43 continues to perform PA calculations correctly [18].

The discussion below documents the test methodology, regression test results, qualification test results, and the Agency's conclusions with respect to LHS 2.43 running on the Solaris Blade with SunOS 5.11.

Test Methodology

The tests for this code comprised the 11 test cases described in the Requirements Document & Verification and Validation Plan for LHS Version 2.41 (RD/VVP) [1].  All of the test cases were run on the ES40 and the results compared to the evaluation criteria.  Previous versions of LHS were validated using a now-retired code called PLOTLHS.  This code constructed a cumulative distribution function (CDF) plot for a sampled distribution and overlaid that plot onto the plot of the theoretical CDF.  However, it was determined that PLOTLHS incorrectly handled truncated distributions, specifically truncated normal and lognormal distributions.  Thus, it is not a feasible method to validate the current version of LHS.

To verify that LHS meets the acceptance criteria for Test Cases 1 through 4, Microsoft Excel was used to construct CDF plots from the sampled distributions.  These plots were overlaid on the plots of the expected CDFs of the distributions.  (These expected CDFs were also calculated using Excel.)
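
The comparison itself was graphical and performed in Excel, but the underlying check can be sketched in Python as below.  The distribution, its parameters, and the reported quantity are illustrative assumptions, not values taken from the validation.

    import numpy as np
    from scipy.stats import norm

    def empirical_cdf(samples):
        """Return sorted sample values and their empirical CDF ordinates."""
        x = np.sort(samples)
        p = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # plotting positions
        return x, p

    # Hypothetical example: overlaying the empirical CDF of a sampled normal
    # distribution on its theoretical CDF and reporting the largest gap.
    samples = norm.rvs(loc=2.0, scale=0.5, size=100, random_state=1)
    x, p_emp = empirical_cdf(samples)
    p_theory = norm.cdf(x, loc=2.0, scale=0.5)
    print("maximum CDF deviation:", np.abs(p_emp - p_theory).max())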

In 2005, LHS Version 2.42 was verified on the ES40 with OpenVMS 7.3-1, and regression test results from LHS 2.42 run on the ES45 with OpenVMS 7.3-1 were compared to results from the validation tests of LHS 2.42 run on the ES40 with OpenVMS 7.3-1 [12].  

In 2006, all 11 of the tests described in the VD were performed to compare output from LHS 2.42 run on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 to results from the validation tests of LHS 2.42 run on the ES40 with OpenVMS 7.3-1 [16]. 

In 2013, SNL performed qualification and regression testing of LHS 2.43 on the Solaris Blade with SunOS 5.11 using thirteen test cases to verify that the code performs PA calculations correctly [18].  These tests verify that LHS 2.43 fulfills the acceptance criteria of the computational Functional Requirements (R.1 to R.5) and the External Interface Requirements (R.6 through R.9), which require that the code properly inputs and outputs data, writes to the PA database correctly, and captures input errors correctly. An Additional Requirement (A.1) also tests that the code captures invalid input ([18] Section 2.0).  Test Cases 1 to 11 were used for LHS 2.42, and Test Cases 12 and 13 were introduced for LHS 2.43 to test new functionality ([18] Section 4.0). 

Modifications were made to LHS 2.43 that impact validation ([18] Section 4.0).  The LHS 2.43 calculations were not modified, but the code was converted to double precision. The code's output structure was modified to output results directly to the PA results database. The Edit command was added to implement a conditional relationship between pairs of variables. Two new distributions, StudentMSD and LogStudentGGD, were added; therefore, SNL notes: "...standard regression testing is not practical" ([18] page 10).  SNL was not able to use the standard UNIX diff command to compare LHS 2.42 and 2.43 files because of the effects of double precision and numerous changes to how output files are formatted.  SNL developed a manually populated Excel spreadsheet to perform the data comparison and to calculate the relative percent difference (RPD).  SNL compared these RPD values to acceptance criteria to validate LHS 2.43 performance where necessary. 

The VVP/VD, Section 4.1.6 describes the review process used by SNL [18].  Sandia manually populated the Excel spreadsheet to compare LHS 2.42 and LHS 2.43 sampled values and rank correlation values.  SNL used an RPD threshold of 5E-4 for the sampled values and an RPD threshold of 1E-4 for the rank correlation values [18].  Any data values above these RPD limits require further review and explanation.  In most cases the values recorded in the spreadsheet matched within the RPD threshold.  Test Cases 1, 2, 4, 12, and 13 produced some values outside the limits that required additional review and explanation [18]. 
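
A minimal sketch of such a screening is shown below, assuming one common definition of the relative percent difference (the absolute difference divided by the mean of the two absolute values).  The exact formula SNL used in its spreadsheet is documented in [18]; the function name and the arrays here are hypothetical.

    import numpy as np

    def relative_percent_difference(old, new):
        """RPD assumed here as |new - old| / mean(|old|, |new|); a pair of
        zeros is treated as zero difference."""
        old = np.asarray(old, dtype=float)
        new = np.asarray(new, dtype=float)
        denom = (np.abs(old) + np.abs(new)) / 2.0
        safe = np.where(denom == 0.0, 1.0, denom)
        return np.where(denom == 0.0, 0.0, np.abs(new - old) / safe)

    # Hypothetical screening of sampled values against the 5E-4 threshold;
    # rank correlation values would be screened against 1E-4 instead.
    lhs242 = np.array([1.2345e-3, 7.8901e+2, 0.0])
    lhs243 = np.array([1.2346e-3, 7.8902e+2, 0.0])
    flagged = relative_percent_difference(lhs242, lhs243) > 5e-4
    print("indices needing further review:", np.flatnonzero(flagged))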

Test Results

The 13 test cases for LHS 2.43 were executed on the Solaris Blade with SunOS 5.11 ([18] Section 4.0).  Output files from the test cases were compared to the corresponding output files from the validation of LHS 2.42 on the Compaq cluster with OpenVMS 8.2 using the SNL-developed Excel spreadsheet ([18] Sections 4.1 to 4.11).  Generally, the differences were textual, or the data differences were less than the RPD limits.  Most numeric differences were caused by the conversion to the double precision numeric format used on the Solaris. SNL also tested and verified the new LHS 2.43 functionality against acceptance criteria to qualify code performance ([18] Sections 4.12 and 4.13).  

The Agency's Conclusions

The Agency closely examined SNL's LHS 2.43 validation process and found that it adequately complied with the approach described in the VVP/VD [18] and satisfied the acceptance criteria.  EPA found that numerical differences were generally the result of the conversion to double precision numeric format on the Solaris and that the new functionality of the code was tested and reviewed reasonably.  The Agency concludes that LHS 2.43 meets the acceptance criteria in the VVP/VD [18] and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11. 

References 

[1]  WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for LHS Version 2.41."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30731. 
[2]  WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for LHS Version 2.32ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23533.  
[3]  WIPP PA (Performance Assessment) 1995.  "Validation Document for LHS Version 2.32ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23536. 
[4]  WIPP PA (Performance Assessment) 1996.  "Validation Document for LHS Version 2.41."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30734. 
[5]  OpenVMS 6.1 Release Notes, Section 5.8.  Digital Equipment Corporation, Maynard, Massachusetts, November 1996.  Order number AA-QSBTB-TE. 
[6]  EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
[7]  WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290. 
[8]  WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
[9]  USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  -  Version 2."  September 2004. 
[10]  WIPP PA (Performance Assessment) 2004.  "Change Control Form for LHS, Version 2.41."  WIPP Central Files WPO #538375.  
[11]  WIPP PA (Performance Assessment) 2005.  "Verification and Validation Plan & Validation Document for LHS (Version 2.42)."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #538370. 
[12]  WIPP PA (Performance Assessment) 2005.  "Analysis Report for the ES45 Regression Test of LHS Version 2.42."  Sandia National Laboratories.
[13]  USEPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
[14]  USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
[15]  WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
[16]  WIPP PA  -  "Regression Testing Report of LHS 2.42 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 19, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543786.
[17]  WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
[18]  WIPP PA  -  "Verification and Validation Plan / Validation Document for LHS Version 2.43, dated March 11, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #559266.
[19]  WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

* Note:  Discrepancies exist within the Software Quality Assurance (SQA) package for LHS Version 2.41 documentation.  Many of the documents incorrectly identify the current code as Version 2.40, as stated in the memo entitled, "Correct Version Number for LHS," WPO #38837. 

MATSET

This section presents the qualification and regression test results for MATSET.  In WIPP PA applications, MATSET is executed after mesh generation (e.g., after running GENMESH).  MATSET is used to set material property and attribute values used in the computational model.  Property and attribute values are obtained from either the Performance Assessment Parameter Database (PAPDB) or directly from the MATSET input control file.  The output from MATSET is written to a CAMDAT binary file.

Introduction

Since the CCA, the MATSET code has undergone a series of revisions.  MATSET 9.0 was used in the WIPP CCA.  MATSET 9.0 was validated in February 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of 10 test cases met the acceptance criteria defined in the RD/VVP for MATSET 9.0 [2, 3].

In November 2001, MATSET was revised to Version 9.10 and was validated on a DEC Alpha 2100 with OpenVMS 7.2-1 [1].  MATSET 9.10 accesses the new procedure-based PAPDB; it cannot read the databases accessed by previous versions of MATSET.  Therefore, three new test cases (Test Cases 13 through 15) were developed to verify that MATSET satisfies all of the requirements and additional functionality specified in Sections 2 and 3 of the VVP/VD [1].  Note that these test cases replace the test cases that were used to test previous versions of the code. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from MATSET 9.10 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of MATSET 9.10 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of MATSET 9.10 on those operating systems [5].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [6, 7].  In March 2004, the Agency completed a report documenting the Agency's approval of MATSET 9.10 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [8]. MATSET 9.10 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [9].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [9].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for MATSET 9.10 to ensure that it continued to function correctly.

In 2012, SNL moved the WIPP PA parameter database to a new software system, MySQL.  MATSET was updated to MATSET 9.20 on the VMS computers to accommodate access to the new parameter database, PAPDB 2.0 [14].  This database was used in the 2014 CRA PA calculations.  

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on the Compaq cluster with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [11, 13].

The discussion below documents the test methodology, validation testing, regression test results, and the Agency's conclusions with respect to MATSET 9.20 running on the VMS platform and MATSET 9.21 running on the Solaris Blade with SunOS 5.11.

Test Methodology

The tests for this code comprised the three test cases described in the Verification and Validation Plan/ Validation Document for MATSET Version 9.10 (VVP/VD) [1].  Regression test results from MATSET 9.10 run on the Compaq ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of MATSET 9.10 run on a Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 [10].

CAMDAT database files (CDB) were produced in MATSET Test Cases 13 and 14.  The output CDB files were converted from binary CDB files to ASCII files for comparison during the validation process.  In the previous MATSET 9.10 validation, the CDB files were converted using GROPE 2.12.  GROPE 2.12 was validated in June 1996 on a DEC Alpha 2100 with OpenVMS 6.1 [4], and it has since been validated on a Compaq ES45 and 8400 with OpenVMS 7.3-1 and on a Compaq ES40, ES45, and ES47 with OpenVMS 8.2 as part of the hardware regression testing (see Section 5.10).  For this regression test, GROPE 2.12 is used to convert the CDB output files from MATSET 9.10 in OpenVMS 8.2. 

In 2012, SNL developed three new test cases, Test Cases 16, 17, and 18, to qualify MATSET 9.20 for use with MySQL database software running on the VMS platform as described in Section 6.0 of the VVP/VD [14].  Results of MATSET 9.20 testing were verified against the 10 functional requirements, 6 external interface requirements, and one additional functionality.

In 2012, SNL used four test cases to validate MATSET 9.21 on the Solaris.  Three regression test cases (16, 17, and 18) are compared to previous validation test of MATSET 9.20 on the Compaq ES47 [12].  New Test Case 19 verifies that MATSET 9.21 can successfully access the parameter database on the Solaris [12].

Test Results

Test Cases 16, 17, and 18 verify that MATSET 9.20, running on the VMS platform, satisfies the requirements listed in the VVP/VD [14]. SNL found that MATSET 9.20 correctly accesses the new MySQL parameter database, reads input files properly, and reports input errors properly.

For the Solaris regression tests, only very minor differences (e.g., spacing, version numbers) were found for Test Cases 16 and 17.  The comparison shows that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.  Test Case 18 showed changes in the error message generated by MATSET 9.21 because of code changes; these changes were expected by SNL.

Test Case 19 verified that MATSET 9.21 can access the parameter database on the Solaris. See Section 3.4.3 of the RD/VVP/VD addendum [12] for a complete discussion and a list of acceptance criteria.

The Agency's Conclusions

The Agency found that the three test cases adequately evaluate the various requirements and verify that MATSET 9.20 properly accesses the new parameter database.  The Agency concludes that MATSET 9.20 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the VMS cluster with OpenVMS 8.2 to perform 2014 CRA PA calculations.

The Agency found that all differences in the regression testing output are acceptable; namely, the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names, or are expected changes resulting from code modifications.  The comparison found no differences in the numerical output of MATSET 9.21.  Database validation using Test Case 19 adequately verifies that MATSET 9.21 properly accesses the parameter database.  The Agency concludes that MATSET 9.21 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1995.  "Verification and Validation Plan/ Validation Document for MATSET Version 9.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #519734. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for MATSET Version 9.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30690. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for MATSET Version 9.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30687. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of MATSET 9.10 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543598.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Addendum to Requirements Document and Verification and Validation Plan/Validation Document for MATSET Version 9.21 dated December 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557809. 
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
 WIPP PA  -  "Verification and Validation Plan/Validation Document for MATSET Version 9.20, dated March 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files #556595.

MODFLOW2000

This section presents DOE's verification and validation of MODFLOW2000.  The MODFLOW2000 code is an acquired code that solves both steady state and transient groundwater flow problems.  The MODFLOW groundwater software has been developed by the U.S. Geological Survey and has been continually upgraded since the first version, MODFLOW88, was released in 1988. 


Introduction

MODFLOW is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium by using a finite-difference method.  MODFLOW is designed to be modular in that different functionalities, such as wells, rivers, evapo-transpiration, etc., can be added as modules to the basic groundwater flow solutions.  Although MODFLOW was designed to be easily enhanced, the design was oriented toward additions to the ground-water flow equation.  Frequently, there is a need to solve additional equations; for example, transport equations and equations for estimating parameter values that produce the closest match between model-calculated heads and flows and measured values.  The version of MODFLOW used by DOE, MODFLOW2000 (MF2K) is designed to simulate more complex boundary conditions [1, 2, 3].  The user's manual for MODFLOW2000 [4] contains an overview of the old and added design concepts, documents one new package, and contains input instructions for using the model to solve the ground-water flow equation.  For transient and steady-state, single-phase, ground-water flow problems, the MODFLOW2000 software is executed with the prescribed boundary and initial conditions.  MODFLOW was not used for the CCA.
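
For reference, the relation that MODFLOW solves numerically is the standard three-dimensional ground-water flow equation described in the USGS documentation [2, 3].  The conventional form is

    \[
    \frac{\partial}{\partial x}\left(K_{xx}\frac{\partial h}{\partial x}\right)
    + \frac{\partial}{\partial y}\left(K_{yy}\frac{\partial h}{\partial y}\right)
    + \frac{\partial}{\partial z}\left(K_{zz}\frac{\partial h}{\partial z}\right)
    + W = S_s\,\frac{\partial h}{\partial t}
    \]

where K_xx, K_yy, and K_zz are hydraulic conductivities along the coordinate axes, h is the potentiometric head, W is a volumetric flux per unit volume representing sources and sinks of water (positive for flow into the system), S_s is the specific storage of the porous material, and t is time.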

SNL procedure NP 19-1, Software Requirements, requires that the following seven primary documents be developed, reviewed, and maintained for the MODFLOW software:  the Software QA Plan, a Requirements Document (RD), Verification and Validation Plan (VVP), User's Manual (UM), Design Document (DD), Implementation Document (ID), and the Validation Document (VD).  DOE reviewed the preexisting documentation available for MODFLOW2000 from the US Geological Survey and found it to provide the necessary information that is usually within the RD, DD, UM, and VVP.  Therefore, the only additional documents that were produced by DOE are the Software QA Plan [12], the ID [11], the VD [10], and the Installation and Checkout forms [13].  DOE notes that documentation for Version 1.6 will remain as the base document for any future versions of the software, with addenda for each of the documents defining the additional scope of the revised software.  Configuration control is maintained through completion of Installation & Checkout (I&C) documentation for all changes made to MODFLOW2000, system software, and/or system hardware.  In addition, Change Control (CC) and Software Problem Report (SPR) documents are completed, as appropriate.

The construction of newer clusters of Linux-based computers has required the testing of certain codes that have been previously qualified on older hardware. 

In 2003, MODFLOW2000 Version 1.6 was qualified for use on the PC-based Linux cluster [10]. The Agency reviewed DOE's qualification and accepted the verification of  MODFLOW2000 Version 1.6 on the Linux platform [14].  DOE used these EPA approved software and hardware configurations to support CRA-2004 and PABC-2004.

The Linux-based cluster was upgraded in 2006 (new processors and other hardware) and is now called the "Geo-Hydro Linux Cluster" [6].  This cluster comprises three different hardware groups, each with a group name: (1) eleionomae, (2) pegaeae, and (3) crinaeae.  The computers are connected to a job control server, "tethys.sandia.gov", which is not used for execution of codes.  Because the hardware is new but the software codes are unchanged and were not recompiled, DOE only conducted regression testing to validate that the codes perform correctly on the new systems.  For both CRA-2009 and PABC-2009, DOE used MODFLOW2000 Version 1.6 in conjunction with the three hardware groups associated with the "Geo-Hydro Linux Cluster" mentioned above.

In 2014, SNL migrated MODFLOW2000 from the PC Linux computer system, "Alice", to the Solaris Blade platform with SunOS 5.11 operating system [16, 19]. Regression testing was used to verify that MODFLOW2000 Version 1.07 continues to perform WIPP PA related calculations correctly [17, 18].

The approach, results and Agency findings pertaining to this code migration are discussed below.

Test Methodology

The DOE designed eight test cases to verify the functional requirements necessary for the verification/validation of the computer code for WIPP.  The input files and corresponding output files are provided with the installation package.  Listings of these files are included in Appendix A to the VD corresponding to the test number and test name.  Validation testing consisted of running all test cases and checking the resulting output for consistency with documented results.  The test cases were run with the production executable (i.e., the executable version used for PA compliance calculations) for MF2K.  The production executable was created on the target platform by the code sponsor and stored using CVS (i.e., Concurrent Versions System) version control on the target platform (CVSROOT - /h/WIPPcvs, repository - src/mf2k).  The executable, source code, and test problems were also stored in SCMS on the WIPP Compaq Alpha cluster (Library- MF2K, class- VER_0160).

The MF2K production executable and input and output test files were obtained from configuration management and placed in the test directories on the target platform.  All of the input files were used unmodified from the source code package, except for the *.nam file, where the file pathnames were modified to reflect the different syntax between the Windows and Linux operating systems.  The MF2K output listing files, *.lst, created during testing were compared to the output listing files obtained from the MF2K installation package, and differences were noted and addressed.  The listing file is the primary ASCII text file created by MF2K and contains an input echo, solver performance information, calculated head and a budget summary.  This same procedure was used for all the tests, with the exception of Test Case 8, the algebraic multi-grid (AMG) test.  The intent of Test Case 8 is to verify the Linked algebraic Multi-Grid solver (LMG) package that was not included in the MODFLOW2000 test suite.  A test identical to Test Case 1, BCF2SS, was chosen, except that the solver has been switched from the Strongly Implicit Procedure (SIP) to the LMG solver.  The results of Test Case 8 were compared to the results of Test Case 1.

After the code was verified, it was regression-tested against the verification results [7, 8, 9].  The run-control for these tests was done using the csh script RunReadScript and the Python programs ReadScript.py and Format.py.  RunReadScript was used to run ReadScript.py (for processing the list of files to be checked out, checked in, executed and compared), run Format.py (for formatting the output of ReadScript.py into a Word file), and then check the log and Word files into the repository.  The specific input script and the locations within the CVS repository where the input script and log file can be found are presented in the regression test documentation [7, 8, 9].

The UNIX diff (i.e., difference) command was used to compare the output to the original data.  The diff command does a character-by-character comparison of two ASCII files (binary files cannot be compared).  Any differences are reported by listing the line number in the first file, the type of change (`a' for addition, `c' for change, `d' for deletion), and then the line numbers in the second file.
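
The diff-based comparison described above can be illustrated with a short sketch that drives the UNIX diff command from Python; this is not DOE's run-control script, and the file names are hypothetical:

    import subprocess

    def compare_listings(baseline: str, candidate: str) -> str:
        """Run the UNIX diff command on two ASCII listing files.

        diff exits with status 0 when the files match and 1 when they differ;
        its report gives the affected line numbers in the first file, the type
        of change ('a' addition, 'c' change, 'd' deletion), and the line
        numbers in the second file.
        """
        result = subprocess.run(["diff", baseline, candidate],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return "files are identical"
        return result.stdout

    # Hypothetical listing-file names for illustration only.
    # print(compare_listings("tc01_baseline.lst", "tc01_solaris.lst"))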

The test was considered successful if the MODFLOW2000 output listing file was the same as the documented listing file, within reasonable accuracy and accounting for date and filename changes.  Reasonable accuracy was defined as numerically equal, except in the last printed digit for numbers printed with six or fewer digits, or in the digits beyond the sixth for numbers printed with more than six digits.  Original output files are listed in the appendix, while the output files generated during testing were stored in CVS on the target platform and in SCMS accessible from the WIPP VMS Alpha cluster.  The same criteria were used for all the test cases.
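
A minimal sketch of this "reasonable accuracy" rule is given below, assuming the printed values have already been extracted from the listing files; the helper names and the digit-counting details are illustrative assumptions, not the procedure actually used by SNL:

    from math import floor, log10

    def sig_digits(printed: str) -> int:
        """Count the significant digits in a printed number (mantissa only)."""
        mantissa = printed.strip().split("e")[0].split("E")[0]
        digits = "".join(c for c in mantissa if c.isdigit()).lstrip("0")
        return max(len(digits), 1)

    def round_sig(value: float, n: int) -> float:
        """Round a value to n significant digits."""
        if value == 0.0:
            return 0.0
        return round(value, n - 1 - floor(log10(abs(value))))

    def within_reasonable_accuracy(printed_a: str, printed_b: str) -> bool:
        """Agreement through the next-to-last digit for numbers printed with
        six or fewer digits, and through the sixth digit otherwise."""
        n = min(sig_digits(printed_a), sig_digits(printed_b))
        keep = max(n - 1 if n <= 6 else 6, 1)
        return round_sig(float(printed_a), keep) == round_sig(float(printed_b), keep)

    # Hypothetical values: a last-digit difference passes, an earlier one fails.
    assert within_reasonable_accuracy("1.23456", "1.23457")
    assert not within_reasonable_accuracy("1.23456", "1.23556")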

In 2014, regression testing of MODFLOW2000 1.07 on the Solaris Blade platform with SunOS 5.11 was performed on all nine test cases as shown in Tables 4.1-1 and 4.1-2 of the VD [17]. Test results for the nine test cases were compared to the results obtained executing MODFLOW2000 1.6 on the PC, "Alice", with Linux operating system.  Because of the differences in the computer architecture of the PC Linux versus the Solaris, SNL/DOE expected minor differences in results of the regression testing. SNL used the UNIX diff command to compare results.

All nine test cases showed minor differences in results.  SNL developed acceptance criteria [17] to evaluate these differences.  Test Case 8 had problems converging on the Solaris, so the "budget closure criteria" was changed from 1E-10 to 1E-7 [18, 17].  SNL closely examined the results of Test Case 8 and found that the "...agreement between the files is very good."  SNL performed a thorough review and determined that the change in the convergence criterion was minor.

Test Results

The regression testing performed on the Solaris Blade shows that MODFLOW2000 Version 1.07 is working adequately and produces results comparable to those from the code on the PC Linux platform [17].

The Agency's Conclusions

The Agency found that differences in output files are acceptable and that minor numerical differences were due mainly to the difference in computer characteristics [17].  The Agency closely examined the test results and SNL's explanations and agrees with their assessment.

The Agency concludes that MODFLOW2000 1.07 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.
 
References 

 WIPP PA 2002.  Code Classification and Review of Pre-Existing Documentation for MODFLOW.  Memo from Sean McKenna to Mario Chavez, Sept. 30, 2002, Sandia National Laboratories, Albuquerque, New Mexico.  ERMS #523942.
 McDonald, M.G., Harbaugh, A.W., 1988.  A Modular Three-Dimensional Finite-Difference Ground-water Flow Model, TWI 6-A1, 588 p.  U.S. Geological Survey, Reston, Virginia.  ERMS #522202.
 Harbaugh, A.W., Banta, E.R., Hill, M.C., and McDonald, M.G., 2000.  MODFLOW-2000, The U.S. Geological Survey Modular Ground-Water Model  -  User Guide To Modularization Concepts And The Ground-Water Flow Process: U.S. Geological Survey Open-File Report 00-92. #522197.
 MODFLOW-2000.  The U.S. Geological Survey Modular Ground-Water Model  -  User Guide to the LINK-AMG (LMG) Package for Solving MATRIX Equations Using an Algebraic Multigrid Solver: U.S. Geological Survey Open-File Report 00-92. #52220. http://water.usgs.gov/nrp/gwsoftware/modflow2000/modflow2000.html ERMS #522195
 Documentation of a computer program to simulate horizontal flow barriers using the U.S. Geological Survey Modular three-dimensional finite-difference groundwater flow model: USGS Open-File Report 92-477.  ERMS #525469.
 WIPP PA (Performance Assessment) 2008.  "Change Control Form for MODFLOW 2000, Version 1.6."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #550152.
 WIPP PA  -  "Regression Testing Document for MODFLOW-2000 Version 1.6 on the "eleionomae" subset of the GeoHydro cluster dated February 25, 2009."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #550911.
 WIPP PA  -  "Regression Testing Document for MODFLOW-2000 Version 1.6 on the "pegaeae" subset of the GeoHydro cluster dated February 25, 2009."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #550915.
 WIPP PA  -  "Regression Testing Document for MODFLOW-2000 Version 1.6 on the "crinaeae" subset of the GeoHydro cluster dated February 25, 2009."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #550916.
 WIPP PA  -  "Validation Document for MODFLOW-2000 Version 1.6 dated March 2003."  Sandia WIPP Central Files. ERMS#523867
 WIPP PA  -  "Implementation Document for MODFLOW-2000 Version 1.6 dated March 2003."  Sandia WIPP Central Files. ERMS#523868.
 WIPP PA  -  "QA Plan for MODFLOW-2000 Version 1.6 dated March 2003." Sandia WIPP Central Files.  ERMS#523869.
 WIPP PA  -  "Installation and Checkout for MODFLOW-2000 Version 1.6 dated March 5, 2003."  Sandia WIPP Central Files.  ERMS#523943.
 USEPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 WIPP PA  -  "AP-168 Analysis Plan for Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, Revision 0 dated April 21, 2014." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561953.
 WIPP PA  -  "Validation Document for MODFLOW2000 Version 1.07, dated July 7, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #562213.
 WIPP-PA  -  "Addendum to Verification and Validation Plan for MODFLOW2000 Version 1.07, dated July 2014." Sandia National Laboratories. Sandia WIPP Central Records ERMS #562212.
 "Summary Report on the Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, AP-168, dated December 12, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #563103.
      
MWT3D

This section presents the regression testing results of the MWT3D Version 2.51 computer code. MWT3D 2.50 was migrated in 2014 from the "Alice" Linux Cluster to the Solaris Blade platform.  
Introduction 

MWT3D (Moving Water Table in 3 Dimensions) performs steady-state or transient simulations of saturated flow in a porous medium subject to free surface and seepage face boundary conditions on the upper surface of the computational domain.  The governing groundwater flow equations are discretized with centered differences on a finite-volume staggered mesh.  The formulation is fully implicit.  In order to maintain a fully saturated domain, the computational mesh deforms such that its upper surface is at the moving water table [1].
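
The discretization approach named above can be illustrated, in a deliberately reduced form, by a one-dimensional steady-state confined-flow sketch; this is not the MWT3D formulation (there is no free surface, deforming mesh, or transient term), only an illustration of assembling a centered-difference scheme and solving it implicitly as one linear system.  All names and values are hypothetical.

    import numpy as np

    def steady_confined_head_1d(n_cells: int, length: float,
                                transmissivity: float,
                                h_left: float, h_right: float) -> np.ndarray:
        """Solve T * d2h/dx2 = 0 on a 1-D grid with fixed heads at both ends,
        assembled with centered differences and solved as one linear system."""
        dx = length / (n_cells - 1)
        A = np.zeros((n_cells, n_cells))
        b = np.zeros(n_cells)
        A[0, 0] = 1.0
        b[0] = h_left          # fixed-head boundary cell
        A[-1, -1] = 1.0
        b[-1] = h_right        # fixed-head boundary cell
        coef = transmissivity / dx**2
        for i in range(1, n_cells - 1):     # interior cells: centered difference
            A[i, i - 1], A[i, i], A[i, i + 1] = coef, -2.0 * coef, coef
        return np.linalg.solve(A, b)

    # Heads vary linearly between the two fixed boundary values.
    print(steady_confined_head_1d(11, 1000.0, 5.0e-4, 10.0, 8.0))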

Test Methodology 

MWT3D Version 2.50 was validated on the Linux "Alice" platform in 2007 [1].  MWT3D Version 2.50 source files and input and output files were migrated unchanged from the Linux "Alice" platform to the Solaris Blade with SunOS 5.11 in 2014.  Nine test cases were run on the Solaris using the same input files as Version 2.50 to validate MWT3D Version 2.51.  These nine test cases exercise twenty-two functional requirements, two external interface requirements, and three additional functionality requirements of the code [2].  Output files from the Linux "Alice" validation testing were regression tested, that is, compared with output files from the Solaris runs.

Acceptable differences between the compared output files include text changes, for example, names, version numbers, run dates, and user names.  Because of differences in computer characteristics, such as 32-bit versus 64-bit word length, insignificant numeric differences were expected.  A Python utility, Regr_Diff.py, was developed to compare the output files from the two computer platforms.  The Python utility calculates the relative percentage difference (RPD).  A difference of less than 1E-6 is considered insignificant; otherwise the difference must be explained [1].
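
The VD does not reproduce the Regr_Diff.py source, so the sketch below shows one common form of a relative-percentage-difference check against the 1E-6 threshold; the exact formula and the function names used by SNL are assumptions:

    def relative_percentage_difference(baseline: float, migrated: float) -> float:
        """One common form of the RPD: the absolute difference normalized by
        the mean magnitude of the two values, expressed as a percentage."""
        if baseline == migrated:
            return 0.0
        return 100.0 * abs(baseline - migrated) / ((abs(baseline) + abs(migrated)) / 2.0)

    def is_insignificant(baseline: float, migrated: float,
                         threshold: float = 1.0e-6) -> bool:
        """Differences below the documented threshold are treated as
        insignificant; anything larger must be explained by the code sponsor."""
        return relative_percentage_difference(baseline, migrated) < threshold

    # Hypothetical values for illustration only.
    print(is_insignificant(3.141592653589793, 3.141592653589812))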

Test Results

Test Cases 1, 2, 4, 5, 6, 8, and 9 showed the expected textual differences and some numeric differences below the 1E-6 RPD value calculated by the Python comparison utility.  Test Case 3 showed three values, and Test Case 7 one value, above the RPD threshold.  SNL reasonably explained these differences [1].

The Agency's Conclusions

The Agency closely reviewed SNL's evaluation of the results of each test case and verified that the evaluation is adequate and that the conclusions are reasonable.  The Agency concludes that MWT3D Version 2.51 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References

   [1]  "Installation and Checkout for MWT3D Version 2.51 Regression Testing for the Solaris Blade, dated July 21, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #562209.
  [2]  WIPP PA  -  "Verification and Validation Plan/Validation Document for MWT3D Version 2.50, dated December 18, 2007." Sandia National Laboratories. Sandia WIPP Central Files ERMS #546408.
  [3]  WIPP PA  -  "Design Document for MWT3D Version 2.50, dated October 2007." Sandia National Laboratories. Sandia WIPP Central Files ERMS #546406.
  [4]  WIPP PA  -  "Requirements Document for MWT3D Version 2.50, dated February 2008." Sandia National Laboratories. Sandia WIPP Central Files ERMS #548053.
  [5]  WIPP PA  -  "Software Quality Assurance Plan for MWT3D Version 2.50, dated July 23, 2007."  Sandia National Laboratories. Sandia WIPP Central Files ERMS #546404.
  [6]  WIPP PA  -  "User's Manual for MWT3D Version 2.50, dated December 2007." Sandia National Laboratories. Sandia WIPP Central Files ERMS #546409.
  [7]  WIPP PA  -  "AP-168 Analysis Plan for Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, Revision 0 dated April 21, 2014." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561953.
  [8]  "Summary Report on the Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, AP-168, dated December 12, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #563103.


NONLIN

The purpose of NONLIN is to compute Pitzer parameters and standard chemical potentials for chemical species in concentrated electrolyte systems (brines).

Introduction

NONLIN 2.01 was validated in 2008 on the VMS and Linux platforms ([1] Section 1.0).  In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade with SunOS 5.11. NONLIN 2.02 was regression tested against NONLIN 2.01 VMS results ([1] Section 1.0).
Test Methodology

Nine test cases were executed on the Solaris Blade ([1] Section 4.0).  Test Cases #1 and #2 test the functionality of the NONLIN code, while Test Cases #3 to #7 verify the code's error-handling abilities ([1] Sections 4.0 and 5.0).

NONLIN 2.01 VMS results were moved to the Solaris, converted as needed, and compared to the Solaris results.  A Python utility, Regr_Diff.py, was used to compare the test case results.  Regr_Diff.py compares the files line by line and calculates the relative percentage difference (RPD) between values.  RPDs less than 1E-4 are considered acceptable ([1] Section 4.0).
Test Results

Nine test cases for NONLIN 2.02 were executed on the Solaris Blade with SunOS 5.11 ([1] Section 5.0).  Output files from the qualification of NONLIN 2.01 on the Compaq ES47 with OpenVMS 8.2 were compared to the Solaris output files using the Python comparison utility Regr_Diff.py.  The comparison found that all differences in output are limited to code run date and time, platform names, system version numbers, the directory and file names, and very minor numeric differences.  SNL found that "...only acceptable differences were detected between the results of NONLIN 2.02 on a Solaris Blade and NONLIN 2.01 on the Compaq ES47 and since each error test generated and appropriate message and error status..." ([1] page 8).

The Agency's Conclusions

The Agency thoroughly reviewed SNL's evaluation of the results of each test case and verified that the evaluation is adequate and that the conclusions are reasonable.  The Agency concludes that NONLIN 2.02 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.
References
   
   [1] 	WIPP PA  -  "Installation and Checkout for NONLIN Version 2.02 Regression Test for Solaris Blade dated November 20, 2013."  Sandia National Laboratories. Sandia WIPP Central Files ERMS #561136.
  

NUTS

This section presents the regression test results for NUTS.  NUTS is a multidimensional, multi-component radioactive material contaminant transport, single-porosity (SP), dual-porosity (DP), and dual-permeability (DPM) finite-difference simulator.  The model simulates first-order radioactive chain decay during radioactive material transport.  However, the simulator is not limited to radioactive material transport, and any non-radioactive material can be included.  Three types of sorption isotherms are considered to represent ion exchange between the solute and the surrounding formation: linear, Freundlich, and Langmuir equilibrium isotherms.  Hydrodynamic dispersion is modeled with the assumption that the off-diagonal dispersivities are all zero.  The solubility limits of the waste components and their precipitation during migration are included in NUTS.  The precipitate is allowed to undergo decay, and to redissolve in the brine if the concentration drops below the solubility limit.  Multi-radioactive-site representations are also possible, in which case the contribution from each site to the component concentration and precipitation in each computational node can be found.  A similar technique is used to handle the daughters generated from the decay of different parents.  Many options for transport equation(s) discretization are included.  In the implicit solution, the system of partial differential equations is solved sequentially to determine the contribution from parent radioactive material decay to the immediate daughter.  In the sequential method, the solution proceeds progressively from the top of each radioactive material chain.  Therefore, the contribution to any daughter from parent decay will be available.  In addition, NUTS also accounts for thermal dependency of some properties.
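
The three equilibrium sorption isotherms named above have the following standard forms (textbook notation; these symbols are not the WIPP parameter names), where C is the dissolved concentration and S is the sorbed concentration:

    \[
    S = K_d\,C \quad\text{(linear)},\qquad
    S = K_F\,C^{\,n} \quad\text{(Freundlich)},\qquad
    S = \frac{S_{max}\,b\,C}{1 + b\,C} \quad\text{(Langmuir)}
    \]

with K_d, K_F, n, S_max, and b empirical sorption coefficients.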

Introduction

For the WIPP PA, DOE uses NUTS for isothermal transport in the rock matrix.  Consequently, the validation test demonstrated a subset of the capabilities of the NUTS code.  For further details on NUTS features used in the CCA calculations, refer to Table 1 in NUTS User's Manual, Version 2.02 [5].  

Since the CCA, the NUTS code has undergone a series of revisions.  NUTS Version 2.02 was used in the WIPP CCA [11 - 18].  During the CCA, an error was found in NUTS 2.02; correction of this error resulted in NUTS Version 2.03 [6].  NUTS Version 2.05 was developed from NUTS 2.03 by adding the capability to calculate solubility limits with an implicit precipitation model [7]. NUTS Version 2.05A was developed from NUTS 2.05 to enable NUTS to run in OpenVMS 7.1 and subsequent operating systems [1, 2, 8].  NUTS 2.05A differs from NUTS 2.05 only in one subroutine that writes information records to the headers of output files [8].  Consequently, the RD/VVP for NUTS 2.05 [3] and the Validation Document (with addendum) for NUTS 2.05 [4, 9, 10] are used for NUTS 2.05A. 

The validation of NUTS 2.05A in OpenVMS 7.2-1 was established by a sequence of regression tests.  The results of the sequence of regression tests, from NUTS 2.02 in OpenVMS 6.1 to NUTS 2.05A in OpenVMS 7.2-1 are detailed in Annex A of the VD [9].  AP-089 [9], the planning document for this regression testing, incorrectly identified SPR 99-001 [10] as an active problem report relating to NUTS 2.05A.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from NUTS 2.05A run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of NUTS 2.05A run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of NUTS 2.05A on those operating systems [19].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [20, 21].  In March 2004, the Agency completed a report documenting the Agency's approval of NUTS 2.05A on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [21].

As indicated above, NUTS 2.05A was validated on OpenVMS 7.3-1.  When NUTS 2.05A was run on OpenVMS 8.2, however, it aborted because the time argument of DATE_AND_TIME was too short [22].  Therefore, the CAMSUPES_LIB routines EXDATE and EXTIME were substituted for the DATE_AND_TIME call and the date was expanded to 10 characters, while the time remains at 8 characters.  The only change in output is the format of the date and time.  Since there is already a NUTS 2.05B, this change to NUTS 2.05A resulted in NUTS 2.05C.  (NUTS 2.05B was qualified on the Compaq Alpha ES45 and 8400 that were both running OpenVMS 7.3-1, but was never used for any analyses.)  NUTS 2.05A was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [23].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [23].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for NUTS 2.05C to ensure that it continued to function correctly.

In 2013, regression testing of NUTS 2.06 on the Solaris Blade platform with SunOS 5.11 was performed on Test Cases 1 through 12 and Test Case 14 (13 was never defined).  Test results for these thirteen test cases were compared to the results obtained executing NUTS 2.05C on the Compaq ES47 with OpenVMS 8.2 [27]. SNL expected numeric differences "due to the change to double precision floating point variables in the code and differences in the two platforms" [27].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to NUTS 2.06 running on the Solaris Blade platform with SunOS 5.11.

Test Methodology

SNL converted the VMS text files to UNIX format for comparison on the Solaris platform.  Binary files on the VMS platform were converted to text format, transferred to the Solaris, and converted back to binary as needed for comparison of the test case results.  File naming conventions differ on the Solaris platform; therefore, file names were converted to make the files compatible with Solaris requirements [27].  File comparisons of the VMS and Solaris results were considered acceptable if text differences were as expected, such as file names and executable names.  Numerical differences were considered acceptable if the relative percent difference (RPD) was less than 1E-4.  Numerical differences above this RPD threshold were examined more closely, and a complete explanation was provided by SNL staff.
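
The file-format and file-name conversions described above can be sketched as follows; the helper names, the renaming rule, and the file names are illustrative assumptions rather than the actual SNL migration scripts:

    def vms_text_to_unix(src_path: str, dst_path: str) -> None:
        """Normalize carriage-return/line-feed line endings from a text file
        transferred off the VMS platform to the line feeds expected on UNIX."""
        with open(src_path, "rb") as src:
            data = src.read().replace(b"\r\n", b"\n").replace(b"\r", b"\n")
        with open(dst_path, "wb") as dst:
            dst.write(data)

    def solaris_file_name(vms_name: str) -> str:
        """Map a VMS-style name (e.g., NUTS_TC01.OUT;3) to one acceptable on
        the Solaris file system by dropping the version number and
        lower-casing the result (an assumed rule, for illustration only)."""
        return vms_name.split(";")[0].lower()

    # Hypothetical usage:
    # vms_text_to_unix("NUTS_TC01.LIS", "nuts_tc01.lis")
    print(solaris_file_name("NUTS_TC01.CDB;2"))   # -> nuts_tc01.cdb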

The tests for NUTS 2.06 comprised all 13 test cases described in the Requirements Document & Verification and Validation Plan for NUTS Version 2.05 (RD/VVP) [3].  The NUTS 2.06 regression test methodology uses a Python utility, Regr_Nuts.py [27], developed by SNL to compare files and identify differences above the RPD threshold.  This comparison was performed between output from NUTS 2.06 on the Solaris Blade with SunOS 5.11 and output from the validation tests of NUTS 2.05C on the Compaq ES47 with OpenVMS 8.2 [27].

Test Results

Based on the results of the tests described above, only very minor differences (e.g., spacing, version number, numerical differences below the RPD threshold) were found for ten of the test cases.  Three test cases (7, 12, and 14) showed differences above the RPD threshold and required additional explanation [27].  SNL concluded that the differences in these cases were insignificant and easily explained [27], and that NUTS 2.06 is validated for WIPP PA use on a Solaris Blade with SunOS 5.11.

The Agency's Conclusions

The Agency found that differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names and minor numerical differences for ten of the test cases.  EPA closely examined Test Cases 7, 12 and 14 to verify that SNL's explanations are complete and reasonable.   The Agency concludes that NUTS 2.06 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References

 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for NUTS Version 2.05."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45999. 
 WIPP PA (Performance Assessment) 2001.  "Installation and Checkout Form (NUTS 2.05A)."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #516515. 
 WIPP PA (Performance Assessment) 1996.  "User's Manual for NUTS Version 2.02."   Sandia National Laboratories.  Sandia WIPP Central Files WPO #37927. 
 WIPP PA (Performance Assessment) 1997.  "Change Control Form for NUTS, Version 2.03."  Memorandum to Distribution, J.J. Loukota, Sandia National Laboratories.  Sandia WIPP Central Files WPO #43730. 
 WIPP PA (Performance Assessment) 1997.  "Change Control Form for NUTS, Version 2.05."  Memorandum to Distribution, J.J. Loukota, Sandia National Laboratories.  Sandia WIPP Central Files WPO #46624. 
 WIPP PA (Performance Assessment) 2001.  "Change Control Form for NUTS, Version 2.05A."  Memorandum to Distribution, P. Painter, Sandia National Laboratories.  Sandia WIPP Central Files ERMS #515790. 
 Analysis Plan (AP-089) 2002.  "Upgrade of Operating System to OpenVMS 7.3-1 and Hardware to HP Alpha ES45."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #523491. 
 WIPP PA (Performance Assessment) 1999.  "Software Problem Report 99-001 for NUTS Version 2.05."  Memorandum to Distribution, J.J. Loukota, Sandia National Laboratories.  Sandia WIPP Central Files ERMS #504773. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for NUTS Version 2.05."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #46003. 
 WIPP PA (Performance Assessment) 1999.  "Addendum to NUTS Version 2.05 Validation Document  -  Analytical Solution Test Results for Part 1 of Test Case #14 and Additional Test Problem Results."  Memorandum to Distribution, A. Treadway and M. Lord, Sandia National Laboratories.  Sandia WIPP Central Files ERMS #503096. 
 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for NUTS Version 2.02."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37924. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for NUTS Version 2.02."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37929. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for NUTS Version 2.03."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42618. 
 WIPP PA (Performance Assessment) 1996.  "Software Problem Report 96-012 for NUTS Version 2.02."  Memorandum to Distribution, J.J. Loukota, Sandia National Laboratories.  Sandia WIPP Central Files WPO #41769. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for NUTS Version 2.03."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42619. 
 Digital Equipment Corporation. 1996.  "OpenVMS 7.1 Release Notes, Section 5.8."  Digital Equipment Corporation, Maynard Massachusetts.  Order number AA-QSBTB-TE. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2006.  "Change Control Form for NUTS, Version 2.05A."  Sandia WIPP Central Files WPO #43730.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of NUTS 2.05A on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543598.
 WIPP PA - "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for NUTS Version 2.06, Regression Testing for the Solaris Blade, dated June 19, 2013." Sandia National Laboratories, Sandia WIPP Central Files ERMS #559596.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765. 

PANEL

This section presents the regression and validation test results for PANEL.  PANEL takes the source term data and computes the solubility of the elements needed.  PANEL also takes brine flow and repository volume data from a CAMDAT database (CDB) file and computes the amount of mobilized radioisotopes that leave the repository. 

Introduction

PANEL 3.50ZO was initially validated in September 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of two test cases met the acceptance criteria defined in the RD/VVP for PANEL 3.50ZO [4, 5]. 

In May 1996, PANEL was revised to Version 3.60 and was validated on a DEC Alpha 2100 with OpenVMS 6.1.  Test cases identical to the two test cases for the validation of PANEL 3.50ZO were run.  The acceptance criteria for these test cases were satisfied by showing that the output from PANEL 3.60 was identical to the output of the PANEL 3.50ZO validation tests [6, 7].  PANEL 3.60 was used in the CCA.  In June 1998, PANEL was revised to Version 4.00 and was validated on a DEC Alpha 2100 with OpenVMS 7.1 [8].  In addition to the two test cases from the previous validation, five more test cases were added to the RD/VVP for Version 4.00 [3].  The acceptance criteria for Test Cases 1 and 2 were satisfied by showing that the output from PANEL 4.00 was identical to the output of the PANEL 3.60 validation tests [7].  Test Cases 3 - 7 were validated by demonstrating that the output from PANEL 4.00 met the acceptance criteria defined in the RD/VVP for PANEL 4.00 [3]. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from PANEL 4.00 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of PANEL 4.00 run on a DEC Alpha 2100 with OpenVMS 6.1 [1].  In March of 2003, several modifications were made to PANEL and the version number was changed from 4.00 to 4.02 [13].  The test set used for PANEL 4.02 consists of all nine of the test cases presented in Section 9 of the RD/VVP [14].
 
In June 2003, the Agency completed a report documenting the Agency's approval with respect to the migration and verification of PANEL 4.00 on those operating systems [10].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [11, 12, 15]. In September 2004, the Agency concluded that PANEL 4.02 met the acceptance criteria specified in the RD/VVP [3], and thus was considered validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [16]. PANEL 4.02 was used to support the 2004 CRA.

In April 2005, the PANEL code was revised to Version 4.03 in order to be able to set the default panel brine volume via MATSET [17, 18].  To ensure that this version was working properly, the DOE regression-tested Version 4.03 against Version 4.02 on the Compaq ES40 and ES45 with OpenVMS 7.3-1 [2].  In March 2006, the Agency completed a report documenting the Agency's approval of PANEL 4.03 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [19].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [21].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [21].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for PANEL 4.03 to ensure that it continued to function correctly [20, 22].

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform, a cluster of Compaq computers running the OpenVMS 8.2 operating system, to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [23, 25].  SNL did not perform a regression test as before; instead, SNL determined that a full validation of PANEL 4.04 was necessary [24].

The discussion below documents the test methodology, qualification results, and the Agency's conclusions with respect to PANEL 4.04 running on the Solaris Blade computers with SunOS 5.11.

Test Methodology

The test suite for PANEL, as described in the VVP [14], consists of Test Cases 1 through 9, with Test Case 4 requiring two executions.  

PANEL needs the following input files:  a CAMDAT file containing source term and inventory data, and an optional CAMDAT file containing brine flow and volume data.  PANEL also needs a run type option input on the command line.  All input files and options used to execute the PANEL 4.03 tests were the same files and options used in the previous validation of PANEL 4.02.  In 2004, the regression test methodology used the VMS DIFFERENCE command to compare output from PANEL 4.03 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 to the output from PANEL 4.03 on the Compaq ES40 with OpenVMS 7.3-1 [18, 16].  

Each successful execution of PANEL generates the following output files:  a binary output CAMDAT file and a debug file.  The regression test methodology uses the VMS DIFFERENCE command to compare output from the current execution to the output from a validation of PANEL.  The output CAMDAT files are binary files and cannot be compared with the VMS DIFFERENCE command.  The GROPECDB utility [4, 9] is used to write portions of the CAMDAT files as text, so that they can be compared.  Thus, the debug file and the GROPECDB output from the output CAMDAT file are differenced.  In 2006, the output from PANEL 4.03 on the Compaq ES40 with OpenVMS 7.3-1 was compared using the VMS DIFFERENCE command to the output from PANEL 4.03 on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 [22].

In 2012, SNL did not use regression testing but instead fully qualified PANEL 4.04 ([24] Section 3.0) against the functional requirements (R1 through R7) documented in the PANEL 4.02 RD/VVP [14].  PANEL's validation testing consisted of nine test cases that validate that the code adequately performs WIPP performance assessment calculations on the Solaris.  The external interface requirements comprise three requirements (R8, R9, and R10) verifying that PANEL 4.04 reads input and writes output correctly on the Solaris Blade computers.  Each test case was evaluated against the acceptance criteria documented with that test case in the PANEL 4.04 VD [24].

Test Results

SNL executed nine test cases as noted in Section 3 of the VD [24].  Each functional requirement and external interface requirement was tested and examined to verify that all acceptance criteria were satisfied for the qualification of PANEL 4.04 on the Solaris platform.

The Agency's Conclusions

The Agency examined SNL's qualification process and results for PANEL 4.04.  EPA found that each test case satisfied the acceptance criteria for each requirement tested.  The Agency also found that SNL's analysis was complete and thorough.  
  
The Agency concludes that PANEL 4.04 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.

References 

 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 1998.  "Requirements Document and Verification and Validation Plan for PANEL Version 4.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #48787. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document and Verification and Validation Plan for PANEL Version 3.50ZO" Sandia National Laboratories.  Sandia WIPP Central Files WPO #24326.  
 WIPP PA (Performance Assessment) 1995.  "Validation Document for PANEL Version 3.50ZO" Sandia National Laboratories.  Sandia WIPP Central Files WPO #24328. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document and Verification and Validation Plan for PANEL Version 3.60."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37358. 
 WIPP PA (Performance Assessment) 1998.  "Validation Document for PANEL Version 3.60."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37362. 
 WIPP PA (Performance Assessment) 1998.  "Validation Document for PANEL Version 4.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #48791. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA (Performance Assessment) 2003.  "Change Control Form for PANEL" Sandia National Laboratories.  Sandia WIPP Central Files WPO #526499. 
 WIPP PA (Performance Assessment) 2003.  "Requirements Document and Verification and Validation Plan for PANEL Version 4.02."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #526649.
 WIPP PA (Performance Assessment) 2003.  "Analysis Report for PANEL Version 4.02 Regression Testing for the ES45 and 8400 Platforms" Sandia National Laboratories. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004. 
 WIPP PA (Performance Assessment) 2005.  "Change Control Form for PANEL, Version 4.03."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539452. 
 WIPP PA (Performance Assessment) 2005.  "Installation and Checkout for PANEL Version 4.03 Regression Testing for the Compaq ES40 and ES45 Platforms."  Sandia National Laboratories. 
 USEPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PANEL 4.03 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 19, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543600. 
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Validation Document for PANEL (Version 4.04) on the SOLARIS System dated November 27, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557733. 
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server System Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
      
PEST

This section presents the verification and validation of PEST.  PEST is an acquired code that solves the problem of parameter estimation for any mathematical model, but with specific application to optimizing T-fields using pilot points in conjunction with the MODFLOW2000 groundwater flow model.  In the context of the Culebra T-fields, PEST is used to iteratively optimize a spatially correlated residual field that is then added to the original mean T-field to produce the final T-field. The PEST code is freely available on the web at: http://www.sspa.com/PEST/.   

Introduction

PEST is a parameter estimation program that can be used with other models to calibrate parameters quickly using a set of known observations.  Models produce numbers, and if there are field or laboratory measurements corresponding to some of these numbers, PEST can adjust model parameters and/or excitation data to reduce to a minimum the discrepancies between the pertinent model-generated numbers and the corresponding measurements.  It does this by taking control of the model and running it as many times as is necessary in order to determine the optimal set of parameters and/or excitations.  The modeler must inform PEST of where the adjustable parameters and excitations are to be found in the model input files.  Once PEST is provided with this information, it can rewrite these model input files using whatever parameters and excitations are appropriate at any stage of the optimization process.  Files are constructed so that PEST can identify those numbers on the model output files that correspond to the actual observations that have been made.  Thus, each time PEST runs the model it is able to read those model outcomes that must be matched to field or laboratory observations.  The difference that PEST calculates between the model-calculated and observed values is called the "residual error."  PEST continues to perform iterations in order to minimize the residual errors.  Once the errors are below the pre-defined error criteria, the model will stop and create a transmissivity field.
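
The quantity that PEST iteratively minimizes, a weighted sum of squared residuals, can be sketched as follows; the observation values, weights, and function name below are hypothetical:

    import numpy as np

    def weighted_sum_of_squared_residuals(observed, simulated, weights) -> float:
        """Objective function of the kind PEST minimizes: the weighted sum of
        squared residuals between field observations and the corresponding
        model-calculated values."""
        residuals = np.asarray(observed) - np.asarray(simulated)
        return float(np.sum((np.asarray(weights) * residuals) ** 2))

    # Hypothetical heads (m) at three observation wells.
    phi = weighted_sum_of_squared_residuals(
        observed=[912.4, 908.7, 905.1],
        simulated=[912.9, 908.2, 905.6],
        weights=[1.0, 1.0, 0.5],
    )
    print(phi)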

PEST was not used by DOE for the CCA; however, PEST Version 5.51 was used to create transmissivity fields for the 2004 CRA PA and the PABC-2004.  In 2004, PEST 5.51 was approved by the Agency on the Linux operating system [1].

In 2005, DOE proposed that additional stochastic inverse calibration functionality be added to PEST and that the version be updated to 9.11 [2].  This additional functionality includes:

 Truncated singular value decomposition ("truncated SVD") as a parameter regularization methodology.  This additional regularization approach complements the previously used means of enforcing regularization termed "Tikhonov" regularization through solution of a constrained minimization problem.  The truncated SVD approach allows for fewer model iterations to achieve a model calibration by using a subspace decomposition of the estimated parameters.  A disadvantage to this approach over the "Tikhonov" regularization, however, is that it can introduce numerical artifacts.  A solution to this problem is to combine components of the "Tikhonov" regularization with the truncated SVD approach into the "SVD Assist" inversion approach within PEST.

 "SVD-Assist" regularized inversion methodology.  The "SVD-assist" method combines the strengths of both of the above regularization methods.  The result is a scheme that is numerically stable, very efficient, and produces intuitively realistic parameter fields.  This approach uses parameter sensitivity information as identified in a Jacobian matrix and calculated using Tikhonov regularization.  The principal components are determined by all the parameters that are subsequently used in the truncated SVD inverse approach.

The two changes mentioned above are fundamental changes to the way the inverse problem is formulated and solved.  In addition to these two changes, several more practical changes were also implemented.

 The calibration process is now able to use pilot points to estimate more than one parameter at each pilot point.  In previous applications of the pilot point method to the Culebra transmissivity field problem, only one transmissivity had been estimated at each pilot point.  PEST has been changed to allow for the estimation of multiple parameters at each pilot point, and this multi-variate estimation also works under the more efficient SVD-assist inverse procedure.  For the transmissivity field calibrations, only two spatially varying parameters will be estimated in the Culebra: transmissivity and storage capacity.  Both are parameterized using pilot points.

 A second practical change to PEST was made to allow for the use of different regularization and spatial variation parameters within different predefined zones.  The model domain is subdivided into zones of differing geostatistical structure.  Kriging factors and regularization constraints employed in the inversion process can be defined as zone specific and reflect these different structures.  Spatial interpolation from pilot points to the model grid is strictly zone-specific.  The number of distinct zones possible within PEST is theoretically unlimited; however, in practical situations, there will only be a small number of distinct zones.

In 2005, the DOE developed a QA plan for PEST Version 9.11 [3].  The QA Plan indicated that six primary documents, in addition to the QA Plan itself, be developed, reviewed, and maintained to meet SNL NP 19-1 Software Requirements for PEST Version 9.11.  These documents are the RD [4], DD [5], UM [7], ID [8], VVP [9], and VD [10].

Documentation for PEST Version 9.11 consists of the baseline documents.  Software changes are documented in addenda for each of the documents.  Configuration control is maintained through completion of Installation & Checkout (I&C) documentation for all changes made to PEST, and system software and/or system hardware.  In addition, Change Control (CC) and Software Problem Report (SPR) documents will be developed as needed.  As defined in SNL procedure NP 19-1, an I&C, software problem reporting, change control, software configuration control, and appropriate revisions to the quality assurance documents will be prepared, reviewed, and maintained for each change during the software lifecycle.

The RD for PEST 9.11 outlines requirements that need to be tested [4].  These include requirements related to performance, attributes, external interface, and 13 functional requirements.  The DD for PEST, however, compresses requirements 10 through 13 into a single requirement pertaining to the optimization of pilot points [6].  The remaining functional requirements include capabilities related to numerical algorithms, screen input and output, user intervention, statistical calculations, regularization, pilot points, parallel processing, predictive analysis and utility programs.  

The DD and DD Addendum for PEST 9.11 describe the design considerations necessary to incorporate the functional requirements [5, 6].  The Addendum also outlines the control flow and logic describing how the parallel processing will be performed.  Although the User's Manual was developed by Watermark Computing, DOE has added a table that maps the quality assurance requirements to the relevant sections in the manual.  The PEST 9.11 Implementation Document focuses on the compilation of the source code [8].  

In 2014, SNL migrated PEST 9.12 from the "Alice" cluster of Linux computers to the Solaris Blade platform, Sun hardware with Intel-based processors running the SunOS 5.11 operating system [14, 15].  SNL performed qualification testing and migration testing to ensure that PEST 9.12 continues to perform PA calculations correctly [16].

The test methodology, results and Agency findings pertaining to the qualification of PEST 9.12 are discussed below.

Test Methodology

The purpose of the VVP is to describe the testing procedures that will be used to demonstrate that the code requirements outlined in the RD are met.  The VVP for PEST 9.11 presents 31 tests designed to validate all of the code requirements [9].  The only functional requirement for which a test is intentionally not designed is related to predictive analysis.  This requirement is described in the VVP as follows:

      PEST shall be capable of maximizing/ minimizing a key model outcome (defined as a "prediction") while maintaining the goodness of fit between all other model outcomes and corresponding field data within a user-defined tolerance.  Implementation shall not rely on an assumed linear relationship between model parameters and model outputs; PEST shall calculate the true nonlinear maximum or minimum prediction based on theory.

The VVP indicates that the predictive analysis requirement is not anticipated to be needed for the WIPP PA and therefore was not tested at this time.

In the VVP, a series of tests are presented based on calibration of a synthetic model for a single-layer confined aquifer.  In DOE's calibration exercise, PEST's SVD-assist functionality is employed to estimate values for 296 parameters, 148 of which represent storage capacity and 148 of which represent transmissivity.  All parameters pertain to the 148 pilot points deployed throughout the model domain.  Synthetic transmissivity and storage capacity fields are heterogeneous, being generated on the basis of exponential variograms.  However, the model domain is zoned, and DOE employs different variograms for field generation within each of two different zones; hydraulic properties are discontinuous at the zone boundary.  Of the 148 pilot points employed as a basis for spatial parameterization, 79 are allocated to one zone, while 69 are allocated to the other.

Use of this test case allows DOE to verify the pilot point interpolation and regularization procedures that vary between zones.  DOE notes, however, that this is not a fundamental requirement for the application of regularized inversion.  

The validation and verification tests are divided into four categories.  The "M" tests verify that the model and all of its components (including utility programs that support the use of pilot points) run correctly and that all input files for the current set of tests have been properly installed and contain no errors.  The "P" tests pertain to operations undertaken in preparation for an SVD-assisted PEST run, including the introduction of regularization constraints to a PEST control file.  They are used to verify the operation of a number of PEST utilities, and of PEST itself when using truncated singular value decomposition as a parameter estimation mechanism.  The "S" tests verify correct operation of PEST in undertaking SVD-assisted parameter estimation, and correct operation of a number of its utility programs in the post-processing of an SVD-assisted PEST run.  The "PP" tests verify correct operation of Parallel PEST in implementing SVD-based inversion.  A complete table showing the procedures and the requirements that they fulfill is presented in Chapter 7 of the VVP [9].

The VD indicates that validation tests were performed on Intel Xeon PCs running a Linux 2.6.9-11 kernel [10].  The cluster is configured so that each of the client nodes uses the same hard disk, mounted from the server.  In addition, because the individual nodes on the cluster have identical hardware and operating systems, testing need only be done once to validate the entire cluster.  DOE notes that future installations and checkouts will be regression tested against the results obtained on this original validation platform.  DOE indicates that the same requirements regarding the clustered environment will apply to future platforms; namely, each different hardware and/or operating system configuration will be tested once for a set of cloned machines.

In addition to being tested on the Intel Xeon PCs, PEST 9.11 was also tested on the "Geo-Hydro Linux Cluster."  This cluster comprises three hardware groups: (1) Eleionomae, (2) Pegaeae, and (3) Crinaeae.  These AMD Athlon 64 computers, which run a Linux kernel 2.6.18 operating system, are connected to a job control server, "tethys.sandia.gov", which is not used for execution of codes.

To qualify PEST 9.11 on these platforms, all of the test problems described in the VD [10] were first performed on the "Eleionomae" group [11].  The same suite of test problems was subsequently run on the "Pegaeae" and "Crinaeae" systems, and the UNIX diff command was used to compare the results to the validation results for "Eleionomae" [12, 13].
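
The cross-platform comparison described above can be pictured with a brief sketch.  The script below is illustrative only: the directory and file names are hypothetical, and it simply flags which test output files differ between the validated reference group and the group being regression tested, in the spirit of the UNIX diff comparison SNL performed.

      # Illustrative regression comparison of test outputs from two hardware
      # groups.  Directory and file names are hypothetical; SNL's actual
      # comparison used the UNIX diff command on the PEST test suite outputs.
      import filecmp
      from pathlib import Path

      baseline = Path("eleionomae_results")   # validated reference group
      candidate = Path("pegaeae_results")     # group being regression tested

      for ref_file in sorted(baseline.glob("*.out")):
          test_file = candidate / ref_file.name
          if not test_file.exists():
              print(f"MISSING on candidate platform: {ref_file.name}")
          elif filecmp.cmp(ref_file, test_file, shallow=False):
              print(f"IDENTICAL: {ref_file.name}")
          else:
              # The files differ; an analyst would inspect the diff to confirm
              # the differences are limited to run date/time and platform names.
              print(f"DIFFERS (needs review): {ref_file.name}")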

In 2014, SNL performed qualification of PEST 9.12 on the Solaris Blade with SunOS 5.11.  The validation document [16] notes that the same test suite used to qualify PEST 9.11 was used for PEST 9.12; the detailed description of the testing provided above therefore still applies.

Test Results

The acceptance criteria for each of the tests are detailed in Section 3 of the VD [10].  All the results from testing on the Intel Xeon PCs were within the expected limits of accuracy and uniqueness and met all specified criteria.  The results show that PEST performed as expected on the Intel Xeon PCs and on the "Eleionomae" group.

The regression testing of the "Pegaeae" and "Crinaeae" systems against the results obtained from the "Eleionomae" group indicates that all differences in output are limited to code run date and time, file and platform names.

The validation of PEST 9.12 verifies that the code satisfies the acceptance criteria on the Solaris Blade with SunOS 5.11 [16].

The Agency's Conclusions

The Agency closely examined the testing of PEST 9.12 and found that all test results from runs on the Solaris Blade with the SunOS 5.11 operating system meet the functional requirements of the acceptance criteria specified in Section 2.0 of the VD [16].

The Agency concludes that PEST Version 9.12 is verified for use on the Solaris Blade with SunOS 5.11.

References 

 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2005.  "Change Control Form for PEST, Version 5.51."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539290.
 WIPP PA (Performance Assessment) 2005.  Software QA Plan for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539280. 
 WIPP PA (Performance Assessment) 2008.  Requirements Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #548334. 
 WIPP PA (Performance Assessment) 2005.  Design Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539283.
 WIPP PA (Performance Assessment) 2008.  Addendum to Design Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #548335.
 Watermark Computing. 2005.  User's Manual for PEST.  Sandia WIPP Central Files WPO #539287.
 WIPP PA (Performance Assessment) 2008.  Implementation Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539286.
 WIPP PA (Performance Assessment) 2008.  Verification and Validation Plan/Validation Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539282. 
 WIPP PA (Performance Assessment) 2008.  Validation Document for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #539284.
 WIPP PA (Performance Assessment) 2009.  Validation Document for PEST on the "Eleionomae" nodes of the GeoHydro Linux Cluster for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #550914. 
 WIPP PA (Performance Assessment) 2009.  Validation Document for PEST on the "Pegaeae" nodes of the GeoHydro Linux Cluster for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #550915. 
 WIPP PA (Performance Assessment) 2009.  Validation Document for PEST on the "Crinaeae" nodes of the GeoHydro Linux Cluster for PEST Version 9.11.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #550916.
 WIPP PA  -  "Summary Report on the Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, AP-168, dated December 12, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #563103.
 WIPP PA  -  "AP-168 Analysis Plan for Migration of VMS files from the HP Alpha Cluster to the Sun/Solaris Cluster and Qualification of Codes from the Alice Linux Cluster on the Sun Solaris Cluster, Revision 0 dated April 21, 2014." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561953.
 WIPP PA  -  "Validation Document for PEST Version 9.12, dated August 15, 2014." Sandia National Laboratories. Sandia WIPP Central Files ERMS #562466.

POSTBRAG

This section describes the regression test results for POSTBRAG.  POSTBRAG is a utility code that takes the binary output file generated by BRAGFLO and puts it into the CAMDAT (CDB) output file format.

Introduction

For WIPP PA, POSTBRAG is used to create CAMDAT files, which are examined with BLOTCDB and/or GROPECDB [5].  CAMDAT database files may also be referred to as CDB files.  POSTBRAG 4.00 was validated in February 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of two test cases met the acceptance criteria defined in the RD/VVP [1] for POSTBRAG 4.00.  POSTBRAG 4.00 was used to support the CCA.  The code has not been revised since this validation.  Prior to this version, POSTBRAG Version 3.05ZO had a single test case validated to the acceptance criteria defined in the RD/VVP [2, 3, 4] for POSTBRAG 3.05ZO.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from POSTBRAG 4.00 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of POSTBRAG 4.00 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of POSTBRAG 4.00 on those operating systems [6].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [7, 8].  In March 2004, the Agency completed a report documenting the Agency's approval of POSTBRAG 4.00 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [9]. POSTBRAG 4.00 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [10].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [10].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for POSTBRAG 4.00 to ensure that it continued to function correctly.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [12, 14].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to POSTBRAG 4.02 running on the Solaris Blade with SunOS 5.11 [13].

Test Methodology

The tests for this code comprised the two test cases described in the Requirements Document & Verification and Validation Plan (RD/VVP) for POSTBRAG Version 4.00 [2, 3].  Regression test results from POSTBRAG 4.02 run on the Solaris Blade with SunOS 5.11 were compared to results from the validation tests of POSTBRAG 4.00A run on a Compaq ES47 with OpenVMS 8.2 [11].  The UNIX diff command was used to compare test results.  SNL noted that numeric differences were expected due to the change to double precision and platform differences.  The two test cases show very minor numeric differences of one digit in a few time steps [13].

Test Results

The results of the tests described above show that only very minor differences (e.g., spacing, version number, very minor numeric differences) were found for the two test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, file names, and very minor numeric differences.

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and very minor numeric differences.  The Agency concludes that POSTBRAG 4.02 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for POSTBRAG Version 4.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30681. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for POSTBRAG Version 4.00", Sandia National Laboratories.  Sandia WIPP Central Files WPO #30685. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for POSTBRAG Version 3.05ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23603. 
 WIPP PA (Performance Assessment) 1995.  "Validation Document for POSTBRAG Version 3.05ZO", Sandia National Laboratories.  Sandia WIPP Central Files WPO #23604. 
 WIPP PA (Performance Assessment) 2002.  "Software Problem Report" (SPR) for BLOTCDB Version 1.37.  SPR #02-004, Sandia National Laboratories.  Sandia WIPP Central Files ERMS #525354.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of POSTBRAG 4.00 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated July 5, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543802.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for POSTBRAG Version 4.02 Regression Testing for the Solaris Blade, dated January 31, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557831.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

POSTLHS

This section presents the regression test results for POSTLHS.  POSTLHS reads the parameter values sampled by LHS and transfers them, for each vector, to the output CAMDAT (CDB) files used by the other WIPP PA codes.

Introduction

POSTLHS Version 4.06ZO was validated in October 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of two test cases met the acceptance criteria defined in the RD/VVP for POSTLHS 4.06ZO [1].  In February 1996, POSTLHS was revised to Version 4.07 and was validated on a DEC Alpha 2100 with OpenVMS 6.1.  Test cases identical to the test cases for the validation of POSTLHS 4.06ZO were run.  The acceptance criteria for these test cases were satisfied by showing that the output from POSTLHS 4.07 was identical to the output of the POSTLHS 4.06ZO validation tests [2].  POSTLHS 4.07 was used in the CCA. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from POSTLHS 4.07 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of POSTLHS 4.07 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's approval with respect to the migration and verification of POSTLHS 4.07 on those operating systems [4].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [5, 6].  POSTLHS 4.07 was used to support the 2004 CRA.

In September 2004, the Agency concluded that POSTLHS 4.07 met the acceptance criteria specified in the RD/VVP [3], and thus is considered as validated on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [7].  In April 2005, the POSTLHS code was revised to Version 4.07A in order to re-index the CAMDAT output files [8].  To ensure that this version was working properly, the DOE regression tested Version 4.07 against Version 4.07A on the COMPAQ ES40 and ES45 with OpenVMS 7.3-1 [9].  In March 2006, the Agency completed a report documenting the Agency's approval of POSTLHS 4.07A on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [10].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [11].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [11].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for POSTLHS 4.07A to ensure that it continued to function correctly [12].

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [14, 16].

The discussion below documents the test methodology, validation testing, and the Agency's conclusions with respect to POSTLHS 4.08 running on the Solaris Blade with SunOS 5.11 [15].

Test Methodology

The test suite for POSTLHS, as described in the VVP [1], consists of Test Cases 1 and 2.  The entire test suite for POSTLHS Version 4.07A was executed on the Compaq ES40 platform with OpenVMS 7.3-1, and then executed again on the Compaq ES40, ES45, and the ES47 with OpenVMS 8.2 [13].

POSTLHS needs the following input files: an LHS output file, an input control file, and an input CAMDAT file.  All input files used to execute the POSTLHS 4.07A tests were the same files used in the previous validation of POSTLHS 4.07A.

Each successful execution of POSTLHS generates the following output files: a set of binary output CAMDAT files and a debug file.  The regression test methodology uses the VMS DIFFERENCE command to compare output from the respective platforms.

The output CAMDAT files are binary files and cannot be compared with the VMS DIFFERENCE command.  The GROPECDB utility is used to write portions of the CAMDAT files as text, so that they can be compared.  Thus, the debug file and the GROPECDB output from each output CAMDAT file are differenced. 
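
The comparison step can be sketched as follows.  This is an illustration of the sequence described above rather than SNL's scripts: the directory and file names are hypothetical, and the UNIX diff command stands in for the VMS DIFFERENCE command used on the Alpha cluster.

      # Illustrative sketch of the comparison step.  The debug file and the text
      # dumps of each output CAMDAT file (produced beforehand with GROPECDB,
      # since the binary CDB files cannot be differenced directly) are compared
      # between the reference platform and the platform under test.
      import subprocess
      from pathlib import Path

      reference = Path("es40_vms73_results")   # previously validated platform
      candidate = Path("es47_vms82_results")   # platform being regression tested

      files_to_compare = [
          "test1_postlhs_dbg.txt",     # debug file written by POSTLHS
          "test1_cdb1_grope.txt",      # GROPECDB text dump of output CAMDAT file 1
          "test1_cdb2_grope.txt",      # GROPECDB text dump of output CAMDAT file 2
      ]

      for name in files_to_compare:
          result = subprocess.run(
              ["diff", str(reference / name), str(candidate / name)],
              capture_output=True, text=True)
          status = "identical" if result.returncode == 0 else "differs - review diff output"
          print(f"{name}: {status}")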

In 2013, SNL validated POSTLHS 4.08 on the Solaris Blade using SunOS 5.11.  SNL notes in the VVP/VD that: "Regression testing would be difficult because the input now comes from a database and many cosmetic changes have been made to the text debug file to make it more readable and to make verification easier. Test Cases 1 and 2 have been removed from the test suite. A new test case, Test Case 3, has been designed to better reflect realistic LHS input" [15].

Test Results

SNL documents their review process in [15] Section 4.1.6.  The evaluation verifies that POSTLHS:

 correctly reads the LHS data from the database, 
 identifies corresponding CAMDAT material properties, and 
 transfers the LHS sampled parameter values to the output CAMDAT file for each vector. 

Test Case 3 verifies that POSTLHS 4.08 satisfies the functional, external interface, and additional requirements on the Solaris. 

The Agency's Conclusions

The Agency examined SNL's qualification process and results for POSTLHS 4.08.  EPA found that Test Case 3 satisfies the acceptance criteria for each requirement tested.  The Agency also found that SNL's analysis was complete and thorough.

The Agency concludes that POSTLHS 4.08 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References

 WIPP PA (Performance Assessment) 1995.  "A Requirements Document & Verification and Validation Plan for POSTLHS Version 4.06ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23552. 
 WIPP PA (Performance Assessment) 1996.  "Software Installation and Checkout Form, POSTLHS Version 4.07."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #230717. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004. 
 WIPP PA (Performance Assessment) 2005.  "Change Control Form for POSTLHS, Version 4.07."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 2005.  "Installation and Checkout for POSTLHS Version 4.07A Regression Testing for the Compaq ES40 and ES45 Platforms."  Sandia National Laboratories. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 WIPP PA (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of POSTLHS 4.07A on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 19, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543782.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Verification and Validation Plan/Validation Document for POSTLHS Version 4.08 dated March 5, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559272.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

POSTSECOTP2D

This section presents the regression test results for POSTSECOTP2D.  POSTSECOTP2D creates a new CAMDAT database (the WIPP PA computational database) from the output of the SECOTP2D computer program and the previous CAMDAT file.  The program appends the computational database with ANALYSIS information output from the SECOTP2D code.  Specifically, for each time step of SECOTP2D output, POSTSECOTP2D writes values to the CAMDAT file.  These values are written to the "Analysis Results" section of the CAMDAT file: TIME, HIFLAG(=0), and ELEMENT variables (Species Concentrations and Darcy flow velocities).
	
Introduction

Since the CCA, the POSTSECOTP2D code has undergone a series of revisions.  POSTSECOTP2D Version 1.02, which was used in the WIPP CCA, was validated in June 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of a test case met the acceptance criteria defined in the RD/VVP for POSTSECOTP2D 1.02 [3, 4]. 

Validation was accomplished by demonstrating that the data read by POSTSECOTP2D 1.02 from the SECOTP2D binary output file are the same as the data written to the output CAMDAT database file.  The program ST2D3_VERIFY_RES (compiled and linked from ST2D3_VERIFY_RES.FOR) was executed to extract data corresponding to the data extracted from the output CAMDAT database file, ST2D3_SECOTP_TEST.CDB; the two data sets were compared and showed that POSTSECOTP2D correctly transfers data from the binary output file to the CAMDAT database.  Only selected portions of each array written to the database were compared by DOE.  DOE points out that comparing the entire contents of the database to the results in the binary output file would require manual inspection of tens of thousands of numbers, a task of overwhelming magnitude.

In June 1997, POSTSECOTP2D was revised to Version 1.04 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [1, 2].  Validation was accomplished by demonstrating that the results of the two test cases met the acceptance criteria defined in the RD/VVP for POSTSECOTP2D 1.04.  Both test cases differed from those used in the previous validation.  Otherwise the methodology was the same as described above for Version 1.02.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from POSTSECOTP2D 1.04 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of POSTSECOTP2D 1.04 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of POSTSECOTP2D 1.04 on those operating systems [6].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP: a Compaq ES45 and a Compaq Alpha 8400, both running OpenVMS 7.3-1 [7, 8].  POSTSECOTP2D 1.04 was used to support the 2004 CRA.

In March 2004, the Agency completed a report documenting the Agency's approval of POSTSECOTP2D 1.04 on the Compaq Alpha ES45 and 8400 which were both running OpenVMS 7.3-1 [9].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [10].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [10].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for POSTSECOTP2D 1.04 to ensure that it continued to function correctly.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [12, 14].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to POSTSECOTP2D 1.05 running on the Solaris Blade with SunOS 5.11 [13].

Test Methodology

The tests for this code comprised the two test cases described in the Requirements Document & Verification and Validation Plan (RD/VVP) for POSTSECOTP2D Version 1.04 [1].  Regression test results from POSTSECOTP2D 1.04 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 were compared to results from the previous validation tests of POSTSECOTP2D 1.04 run on the ES40 with OpenVMS 7.3-1 [11].  The regression test methodology uses the VMS DIFFERENCE command to compare output from the respective platforms.

CAMDAT database files (CDB) are produced in each of the two POSTSECOTP2D test cases.  The output CDB files are converted from a binary, CDB, file to an ASCII file for comparison during the validation process.  In the previous POSTSECOTP2D 1.04 validation, the CDB files were converted using GROPE 2.10.  GROPE has since been revised to Version 2.12.  GROPE 2.12 was validated in June 1996 on a DEC Alpha 2100 with OpenVMS 6.1 [5].  GROPE 2.12 has been validated on a Compaq ES40, ES45, and the ES47 with OpenVMS 8.2, as part of the hardware regression test (see Section 5.10).  For this regression test, GROPE 2.12 is used to convert the CDB output files from POSTSECOTP2D 1.04 in OpenVMS 8.2.

In 2013, SNL used two test cases, Test Cases 1 and 2, to regression test POSTSECOTP2D 1.05 on the Solaris Blade with SunOS 5.11, comparing the results to POSTSECOTP2D 1.04 run on the Compaq ES47 with OpenVMS 8.2 ([13] Section 1.0).  The VMS results were transferred to the Solaris and converted as needed.

The UNIX diff command was used to compare test results.  SNL notes: "For POSTSECOTP2D, double precision numeric values may differ by one in the final digit (with rounding)" ([13] page 7).  The two test cases show very minor numeric differences of one in the final digit ([13] Section 5).
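
The stated tolerance can be made concrete with a small worked example.  The check below is an illustration of the idea rather than SNL's comparison code: two values written with a fixed number of digits in scientific notation are treated as acceptably close if they differ by no more than one unit in the last written digit.

      # Illustration of the "one in the final digit" tolerance for double
      # precision values written in scientific notation with a fixed number of
      # digits after the decimal point.
      from decimal import Decimal

      def within_one_final_digit(a: float, b: float, digits: int = 15) -> bool:
          """Return True if a and b, written with `digits` digits after the
          decimal point in scientific notation, differ by at most one unit in
          the last written digit."""
          da = Decimal(f"{a:.{digits}e}")
          db = Decimal(f"{b:.{digits}e}")
          last_place = max(da.adjusted(), db.adjusted()) - digits
          return abs(da - db) <= Decimal(1).scaleb(last_place)

      # A one-unit difference in the final digit is acceptable...
      print(within_one_final_digit(1.234567890123456, 1.234567890123457))  # True
      # ...but a difference in an earlier digit is not.
      print(within_one_final_digit(1.2345, 1.2346))                        # False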

Test Results

The results of the tests described above show that only very minor differences (e.g., spacing, version number) and minor differences in the final digit were found for the two test cases ([13] Sections 1.0 and 5.0).  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences ([13] Section 5.0).

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences.  The Agency concludes that POSTSECOTP2D 1.05 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1997.  "Requirements Document and Verification and Validation Plan for POSTSECOTP2D Version 1.04."  Sandia National Laboratories. Sandia WIPP Central Files WPO #45696. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for POSTSECOTP2D Version 1.04."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45699. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document and Verification and Validation Plan for POSTSECOTP2D Version 1.02."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37304. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for POSTSECOTP2D Version 1.02."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37370. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of POSTSECOTP2D 1.04 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543782.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for POSTSECOTP2D Version 1.05 Regression Testing for the Solaris Blade dated July 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559789.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

PREBRAG

This section presents the qualification and regression test results for PREBRAG.  PREBRAG is used to create BRAGFLO input files.  PREBRAG reads specific data from an input CAMDAT file and, through instructions supplied in an ASCII input file, generates an ASCII BRAGFLO input file.

Introduction

Prior to the CCA, the PREBRAG code had undergone a single revision.  PREBRAG 5.05ZO was validated [3] in September 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by acceptance testing a single test case, the output of which met the acceptance criteria defined in the RD/VVP for PREBRAG 5.05ZO [4]. 

In February 1996, PREBRAG was revised to Version 6.00 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [1, 2].  PREBRAG 6.00 was used in the WIPP CCA.  The validation test included the original test case defined for Version 5.05ZO, and two additional test cases. 

Acceptance criteria for the added Test Cases 2 and 3, described in the RD/VVP for Version 6.00 [1], were satisfied as follows: the criteria for Test Case 2 were satisfied by comparing its output to the output of Test Case 1, while the criteria for Test Case 3 were satisfied by comparing its output to that of Test Case 2.

PREBRAG 6.00 has one open problem report [5].  PREBRAG 6.00 uses an outdated list-directed I/O format that allows space-padded fields.  The output files from the PREBRAG 6.00 validation and the OpenVMS 7.3-1 test include space-padded fields and cannot be read by BRAGFLO 4.10.  There is no requirement for test output of PREBRAG to be read as input to BRAGFLO.  To allow BRAGFLO 4.10 to read input files created by PREBRAG 6.00, a conversion script, EVAL_BF2_CONVERT_INPUT.COM, removes extraneous spaces from the input file.  Use of this conversion script is not necessary for the regression testing of PREBRAG 6.00.  In March 2003, several modifications were made to PREBRAG 6.0, primarily to remove the "hardwiring of parameter values," and the code was updated to PREBRAG 7.0 [11].

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from PREBRAG 6.0 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of PREBRAG 6.0 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of PREBRAG 6.0 on those operating systems [6].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [7, 8, 12].  Testing of PREBRAG 7.0 consisted of conducting the three functional test cases described in Section 6 of the VD [9].  These tests were conducted on the Compaq ES45 and 8400 platforms with OpenVMS 7.3-1.  Output files from these test cases were compared to the corresponding output files from the validation of PREBRAG 7.0 on the Compaq ES45 with OpenVMS 7.3-1.  In March 2004, the Agency completed a report documenting the Agency's approval of PREBRAG 7.0 on the Compaq Alpha ES45 and 8400 that were both running OpenVMS 7.3-1 [13].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [14].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [14].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for PREBRAG 7.0 to ensure that it continued to function correctly.

The approach used to validate PREBRAG Version 7.00 involved three test cases to satisfy the 17 functional requirements presented in Section 9 of the RD/VVP [10].  Testing of PREBRAG 7.00 consisted of conducting the three functional test cases described in Section 6 of the VD [9].  Regression test results from PREBRAG 7.00 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 were compared, using the VMS DIFFERENCE command, to results from the previous validation tests of PREBRAG 7.00 run on the Compaq ES45 with OpenVMS 7.3-1 [15].  The results of these tests showed only very minor differences (e.g., spacing, version number) for the three test cases.  All differences found in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of PREBRAG 7.00.  The Agency concluded that PREBRAG 7.00 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.  PREBRAG 7.00 was used to support the 2004 CRA.

In 2007, PREBRAG 7.0 was updated to PREBRAG 8.0 to provide new input needed for BRAGFLO 6.0.  This new input involves new keywords in the input control file and new data written to the BRAGFLO input file.  The routine WASTE_POROS, which calculated the initial conditions of iron, CH2O, and saturated brine in the old waste areas and the reaction rates RKCOR and RKBIO, was removed.  The old waste area values are no longer set in PREBRAG.  The reaction rates are now input directly with a PREBRAG command.

In 2012, PREBRAG 8.00 was modified to PREBRAG 8.02 to support the 2014 recertification, in particular to calculate CRA14 Case14-0, which includes water balance computations [21].  The BRAGFLO input file was modified to accommodate additional input and control parameters.  PREBRAG was modified to build the BRAGFLO input file with the correct data.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [17, 19].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to PREBRAG 8.02 running on the Compaq ES45 and ES47 with OpenVMS 8.2 and PREBRAG 8.03 running on the Solaris Blade with SunOS 5.11 [18].


Test Methodology

SNL performed regression testing using three test cases, Test Cases 1, 3, and 4, to verify that PREBRAG 8.02 performed as expected and that the code's handling of the new input variables was correct [20].  SNL used the VMS DIFFERENCE command to compare the PREBRAG 8.02 output files to the PREBRAG 8.00 output files on the Compaq ES45 and ES47 machines to qualify the code.  SNL also examined the output files to verify that the new input data and control commands were correctly included in the BRAGFLO input files.

Testing of PREBRAG 8.03 consisted of conducting three test cases, Test Cases 1, 3, and 4.  Regression test results from PREBRAG 8.03 run on the Solaris Blade with SunOS 5.11 were compared, using the Python difference utility Regr_Diff.py (which compares results line by line and calculates the relative percentage difference, or RPD), to results from the validation tests of PREBRAG 8.02 run on a Compaq ES47 with OpenVMS 8.2 [18].  If the calculated RPD value is less than 1E-4, SNL considers the numeric differences acceptable.
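
The RPD screening can be illustrated with a short sketch.  This is not the Regr_Diff.py source, which is part of the SNL software baseline; it shows one common, symmetric way a relative percentage difference could be computed and compared against the 1E-4 threshold described above, with the exact formula used by SNL taken as an assumption.

      # Illustrative relative percentage difference (RPD) check, patterned on
      # the description of Regr_Diff.py above.  The symmetric formula below is
      # an assumption; SNL's exact definition is not reproduced here.
      RPD_THRESHOLD = 1.0e-4   # differences below this value are treated as acceptable

      def rpd(a: float, b: float) -> float:
          """Relative percentage difference between two values (symmetric form)."""
          if a == b:
              return 0.0
          return abs(a - b) / ((abs(a) + abs(b)) / 2.0) * 100.0

      def values_match(a: float, b: float) -> bool:
          return rpd(a, b) < RPD_THRESHOLD

      # A platform-induced difference in the trailing digits passes the screen,
      # while a genuine change in value does not.
      print(values_match(6.02214076e23, 6.02214076000001e23))   # True
      print(values_match(1.00, 1.01))                           # False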

Test Results

PREBRAG 8.02 testing shows only minor differences in the VMS DIFFERENCE output files [20].  The expected input data and control commands were also verified to be included in the BRAGFLO input files.

The PREBRAG 8.03 test results described above show that only very minor differences (e.g., spacing, version number, and some numeric differences) were found for the three test cases.  All differences found in the output are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences [18].

The Agency's Conclusions

The Agency found the testing of PREBRAG 8.02 to be adequate and to verify that the code performs properly.  The Agency concludes that PREBRAG 8.02 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Compaq ES45 and ES47 computers with OpenVMS 8.2 for the 2014 CRA PA calculations.

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences below the RPD threshold [18].  The Agency concludes that PREBRAG 8.03 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for PREBRAG Version 6.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30676. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for PREBRAG Version 6.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30679. 
 WIPP PA (Performance Assessment) 1995.  "Validation Document for PREBRAG Version 5.05ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23596. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for PREBRAG Version 5.05ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23594. 
 WIPP PA (Performance Assessment) 2001.  "Software Problem Report 01-002 for PREBRAG 6.00 and BRAGFLO 4.10."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519714. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA (Performance Assessment) 2003.  "Validation Document for PREBRAG Version 7.00."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #526627.
 WIPP PA (Performance Assessment) 2003.  "Requirements Document & Verification and Validation Plan for PREBRAG Version 7.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #526625. 
 WIPP PA (Performance Assessment) 2003.  "Change Control Form Version 7.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #526257. 
 WIPP PA (Performance Assessment) 2004.  "Analysis Report for PREBRAG Version 7.00 Regression Testing for the Compaq ES45 and 8400 Platforms."  Sandia National Laboratories. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PREBRAG 7.00 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated July 5, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543801. 
 WIPP PA (Performance Assessment) 2007.  "Change Control Form Version 8.0."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #545263. 
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for PREBRAG Version 8.03 Regression Testing for the Solaris Blade dated March 18, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557818.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
 WIPP PA  -  "Addendum to User's Manual and Verification and Validation Plan/Validation Document for PREBRAG Version 8.02 dated January 14, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #558655. 
 WIPP PA  -  "AP-164 Analysis Plan for the 2014 WIPP Compliance Recertification Application Performance Assessment, dated January 31, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #559198.

PRECCDFGF

This section presents the regression test results for PRECCDFGF.  PRECCDFGF collates output from all other WIPP PA codes and formats this output into the RELTAB input file for CCDFGF.

Introduction

Version 1.0 of PRECCDFGF was used to support calculations during the CCA.  In 2003, Version 1.0 was upgraded to Version 1.0A, and this new version of the code was tested by following the procedures outlined for the single test included in the Validation Plan on the COMPAQ ES40 and 8400 with OpenVMS 7.3-1 [1].  In September of 2003, minor changes were made in the code's output reporting and the version number was changed from 1.0A to 1.0B.  To ensure that this version was working properly, DOE regression tested Version 1.0B against Version 1.0A on the COMPAQ ES40 and 8400 with OpenVMS 7.3-1 [2].  In September 2004, the Agency concluded that PRECCDFGF 1.00B met the acceptance criteria specified in the VVP [1], and thus is considered validated on the COMPAQ ES45 and 8400 platforms with OpenVMS 7.3-1 [4].  PRECCDFGF 1.00B was used to support the 2004 CRA.

In July 2005, PRECCDFGF was updated to Version 1.01 with three significant modifications [5].

First, PRECCDFGF 1.01 reads the text output file of release tables written directly by CUTTINGS_S 6.00 or higher.  This file replaces a set of files that were created by SUMMARIZE from the output of the previous versions of CUTTINGS_S.  Second, when reading the release tables written by SUMMARIZE from the other codes and the CUTTINGS_S text output file, PRECCDFGF 1.01 reads the header records at the start of the files to ensure that the fields in the files correspond to the values expected by the READ statements in the code itself.  Failure of the header to match the expected text will cause PRECCDFGF to abort its execution with appropriate error messages being logged.  Finally, whereas PRECCDFGF Version 1.00B read a text output file from LHS to get data about the sampled parameter GLOBAL:PBRINE, Version 1.01 reads a set of CAMDAT files generated by POSTLHS to obtain these data.  Thus, PRECCDFGF is no longer required to read output files from LHS.
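
The header check described in the second modification can be pictured with a brief sketch.  The field names, file name, and error handling below are hypothetical; the sketch only illustrates the idea of aborting with a logged error message when a release-table header does not match the fields the code expects.

      # Illustrative header check in the spirit of the PRECCDFGF 1.01 behavior
      # described above: the header record of an input release table is compared
      # against the expected fields, and execution aborts on a mismatch rather
      # than reading data into the wrong variables.  Field and file names are
      # hypothetical.
      import sys

      EXPECTED_HEADER = ["VECTOR", "TIME", "RELEASE"]

      def read_release_table(path: str):
          with open(path) as f:
              header = f.readline().split()
              if header != EXPECTED_HEADER:
                  sys.exit(f"ERROR: unexpected header in {path}: {header!r}; "
                           f"expected {EXPECTED_HEADER!r}")
              return [line.split() for line in f if line.strip()]

      # records = read_release_table("cuttings_release_table.txt")   # hypothetical file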

In 2003, Version 1.01 of the code was validated to run on the Compaq ES40 with OpenVMS 7.3-1 [5].  In September 2004, the Agency concluded that PRECCDFGF 1.01 met the acceptance criteria specified in the VVP [1, 2, 3], and thus was validated on the Compaq ES40 with OpenVMS 7.3-1 [7].  Following the validation of the code on the ES40, it was regression tested in 2005 to run on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [6].  In March 2006, the Agency completed a report documenting the Agency's approval of PRECCDFGF 1.01 on the Compaq ES45 and 8400 with OpenVMS 7.3-1 [8].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [9].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [9].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for PRECCDFGF 1.01 to ensure that it continued to function correctly [10].

In 2010, SNL modified the PRECCDFGF code for use in the 2014 CRA PA; as stated in the validation document, "The capability to assemble direct brine release data resulting from single or multiple brine volumes has been implemented in PRECCDFGF version 2.0" [14].  SNL also modified the code to correct an error message issued when one of the SUMMARIZE input files has an invalid header.  SNL developed two test cases, Test Cases 1 and 2, to validate the changes included in PRECCDFGF 2.0.  The discussion below documents the qualification of PRECCDFGF 2.0 for use on the Compaq cluster using OpenVMS 8.2.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [11, 13]. The discussion below also documents the test methodology, regression test results, and the Agency's conclusions with respect to PRECCDFGF 1.06/2.01 running on the Solaris Blade with SunOS 5.11 [12].

Test Methodology

The validation of PRECCDFGF 2.0 uses two test cases to evaluate the changes made to the code and to verify that the code implements them properly.

The regression tests for PRECCDFGF Version 1.06/2.01 comprise the two test cases described in the VVP [1].  Test results from PRECCDFGF 1.06/2.01 run on the Solaris Blade with SunOS 5.11 were compared using the UNIX diff command to results from the validation tests of PRECCDFGF 1.01 run on the Compaq ES47 with OpenVMS 8.2.  VMS results were moved to the Solaris and converted as needed [12].

Test Results

The results of the tests for PRECCDFGF 2.0 on the Compaq cluster verify that the code satisfies the criteria in the VVP [15]. 

The results of the tests for PRECCDFGF 1.06/2.01 on the Solaris described above show that only very minor differences (e.g., spacing, version number) were found for the two test cases.  The differences found in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names [12].

The Agency's Conclusions

The Agency found that PRECCDFGF 2.0 test cases verify that the code performs correctly.  The Agency concludes that PRECCDFGF 2.0 meets the acceptance criteria in the VVP and is validated for the 2014 CRA WIPP PA use on the Compaq computer cluster with OpenVMS 8.2.

The Agency also found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of PRECCDFGF 1.06/2.01. The Agency concludes that PRECCDFGF 1.06/2.01 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 2003.  "Verification and Validation Plan for PRECCDFGF Version 1.00A Document Version 1.02."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #530467. 
 WIPP PA (Performance Assessment) 2003.  "Regression testing of PRECCDFGF 1.00B (Addendum to Validation Document for PRECCDFGF 1.00A)."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #531450. 
 WIPP PA (Performance Assessment) 2004.  "Analysis Report for PRECCDFGF Version 1.00B Regression Testing for the ES45 and 8400 Platforms."  Sandia National Laboratories.
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004.
 WIPP PA (Performance Assessment) 2003.  "Validation Document for PRECCDFGF Version 1.01 Document Version 1.05."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #539296.
 WIPP PA (Performance Assessment) 2005.  "Installation and Checkout for PRECCDFGF Version 1.01 Regression Testing for the Compaq ES45 Platforms."  Sandia National Laboratories. 
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 WIPP PA (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PRECCDFGF 1.01 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 17, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543451.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for PRECCDFGF Version 1.06/2.01 Regression Testing for the Solaris Blade dated November 19, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557822.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
 WIPP PA  -  "Validation Document for PRECCDFGF Version 2.0, dated June 2010." Sandia National Laboratories. Sandia WIPP Central Files ERMS #552577.
 WIPP PA  -  "Verification and Validation Plan for PRECCDFGF Version 2.0, dated May 2010."  Sandia National Laboratories.  Sandia WIPP Central Files #552575.

PRELHS

This section presents the validation and regression test results for PRELHS.  The PRELHS program extracts parameter distribution data requested by the user from the PAPDB and sets up the LHS (Latin Hypercube Sampling) input control file. 

Introduction

Since the CCA, the PRELHS code has undergone a series of revisions.  PRELHS Version 2.10 was used in the WIPP CCA.  PRELHS 2.10 was validated in February 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of eight test cases met the acceptance criteria defined in the VVP/VD for PRELHS 2.10 [4, 5]. 

In August 1997, PRELHS was revised to Version 2.20 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of the eight test cases met the acceptance criteria defined in the VVP/VD [1, 6, 7].

In August 2001, PRELHS was revised to Version 2.24 and was validated on a DEC Alpha 2100 with OpenVMS 7.2-1 [2, 8, 9].  The validation test included three new test cases defined for Version 2.24.  Previous versions of PRELHS accessed the old view-based Parameters Database.  PRELHS 2.24 accesses the new procedure-based Parameters Database.  The two databases are not compatible (i.e., PRELHS 2.24 cannot read a view-based Parameters Database), and the parameter entries that were created for testing the previous versions of PRELHS do not exist in the procedure-based Parameters Database.  Therefore, the test cases used to test previous versions of PRELHS (Test Cases 1 through 8) were discarded, and three new test cases (Test Cases 9 through 11) were used to test PRELHS 2.24. 

In November 2001, PRELHS was revised to Version 2.30 and was validated on a DEC Alpha 2100 with OpenVMS 7.2-1 [3, 10].  PRELHS 2.30 accesses the new procedure-based PAPDB.  It cannot read the databases accessed by previous versions of PRELHS.  The primary difference between the PAPDB and the old database is the manner in which parameter entries are identified.  In the old database, a parameter entry was uniquely identified by its material and property, and its compliance type and calculation.  Each parameter entry in the PAPDB is uniquely identified by its material and property, and the associated analysis, computational code, and retrieval number.  Therefore, Test Cases 9 through 11 were discarded by DOE and three new test cases (Test Cases 12 through 14) were designed to verify that PRELHS satisfies all of the requirements and additional functionality specified in the VVP/VD [3].
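
The change in how a parameter entry is uniquely identified can be illustrated with a small sketch; the field values shown are hypothetical.

      # Illustration of the two identification schemes described above.
      # Field values are hypothetical.

      # Old database key: material, property, compliance type, calculation.
      old_key = ("S_HALITE", "PRMX_LOG", "CRA", "PA")

      # PAPDB key: material, property, analysis, computational code, retrieval number.
      new_key = ("S_HALITE", "PRMX_LOG", "CRA14", "LHS", 1)

      # A lookup keyed on the PAPDB identifiers.
      parameter_entries = {new_key: {"distribution": "NORMAL", "mean": -21.0}}
      print(parameter_entries[new_key])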

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from PRELHS 2.3 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of PRELHS 2.3 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of PRELHS 2.3 on those operating systems [11].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP: a Compaq ES45 and a Compaq Alpha 8400, both running OpenVMS 7.3-1 [12, 13, 14].  In March 2004, the Agency completed a report documenting the Agency's approval of PRELHS 2.3 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [15].  PRELHS 2.3 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [16].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [16].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for PRELHS 2.3 to ensure that it continued to function correctly.

In 2012, SNL moved the WIPP PA parameter database to new database software, MySQL [21].  Because PRELHS uses the parameter database, it required qualification to verify that the code satisfies the three functional requirements, five external interface requirements, and one additional functionality requirement listed in the VVP/VD.  SNL developed three new test cases, Test Cases 15, 16, and 17, to validate PRELHS 2.40 for use in CRA 2014. 

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [18, 20].

The discussion below documents the test methodology, validation testing, regression test results, and the Agency's conclusions with respect to PRELHS 2.41 running on the Solaris Blade with SunOS 5.11 [19].

Test Methodology

In 2012, SNL validated PRELHS 2.40 on the Compaq computer cluster with OpenVMS 8.2 [21]. PRELHS 2.40 was validated against the criteria discussed in the VVP/VD.  SNL developed three new test cases to verify that the code can properly access the new parameter database (PAPDB 2.0) using the MySQL database software. 

In 2013, SNL validated PRELHS 2.41 on the Solaris Blade with SunOS 5.11 [19].  The validation includes three test cases, Test Cases 15, 16, and 17 ([19] Section 4.0), that verify that the code satisfies the requirements listed in [19] Table 4-1.  PRELHS 2.41 testing consisted of one regression test, Test Case 15, which tests existing requirements, and two validation tests (Test Cases 16 and 17).  Test Case 16 was modified to verify two new requirements (R.9, parameter database debug output, and A.2, parameter retrieval logging) as well as many existing requirements.  Test Case 17 evaluates modifications to error message formatting and existing requirements.   

Test Case 15 results were compared to VMS results using the UNIX diff command and results for Test Cases 16 and 17 were evaluated against the acceptance criteria in the VVP/VD [19]. 

Test Results

SNL verified that PRELHS 2.40 properly performs the requirements listed in the VVP/VD and can adequately access PAPDB 2.0 on the Compaq cluster. 

Test Case 15 regression results show only acceptable differences, such as run date and time.  Test Case 16 validates PRELHS 2.41 by showing that PRELHS can process the distribution types listed under R.2, parameter entry retrieval logging (R.9), and debug database output (A.2).  Test Case 17 was not regression tested because the format of error messages was changed; it was therefore validated against the original acceptance criteria [19].

SNL's review verifies that PRELHS 2.41 satisfies the requirements listed in Table 4-1 of the VVP/VD [19].


The Agency's Conclusions

EPA reviewed SNL's qualification of PRELHS 2.40 and found that the code is adequately tested.  The Agency concludes that PRELHS 2.40 meets the acceptance criteria in the VVP/VD, is validated for use on the Compaq computer cluster with OpenVMS 8.2, and can be used for the 2014 CRA PA.

The Agency closely examined SNL's validation of PRELHS 2.41 and verified that the code requirements are adequately tested.  The Agency concludes that PRELHS 2.41 meets the acceptance criteria in the VVP/VD and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAQ Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 2001.  "Requirements Document & Verification and Validation Plan for PRELHS Version 2.30."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #519721. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for PRELHS Version 2.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30712. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for PRELHS Version 2.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #30716 
 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for PRELHS Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43935. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for PRELHS Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #43938. 
 WIPP PA (Performance Assessment) 2001.  "Verification and Validation Plan for PRELHS Version 2.24."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #513612. 
 WIPP PA (Performance Assessment) 2001.  "Validation Document for PRELHS Version 2.24."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #518675. 
 WIPP PA (Performance Assessment) 2002.  "Verification and Validation Plan, Validation Document and Criteria Forms for PRELHS, Version 2.30."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519722. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA (Performance Assessment) 2003.  "Addenda to Verification and Validation Plan/Validation Document for PREHLHS Version 2.30."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #525224. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PRELHS 2.30 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543595.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Verification and Validation Plan/Validation Document for PRELHS Version 2.41 dated March 7, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559260.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.
 WIPP PA  -  "Verification and Validation Plan/Validation Document for PRELHS Version 2.40, dated March 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #556630.

PRESECOTP2D

This section presents the regression test results for PRESECOTP2D.  The purpose of PRESECOTP2D 1.22 is to create all the input files required to run the code SECOTP2D.  Material properties, grid information, and source term information are obtained from CAMDAT databases.  The velocity field is obtained from a transfer file written by PRESECOFL2D.  Since the CCA, the PRESECOTP2D code has undergone a series of revisions.  PRESECOTP2D Version 1.11ZO was used in the WIPP CCA.  PRESECOTP2D 1.11ZO was validated in September 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by regression testing to a validated primitive package [3].  Regression testing demonstrated that the results of two test cases (2 and 3) met the acceptance criteria defined in the RD/VVP for PRESECOTP2D 1.11ZO [4]. 

Introduction

In August 1996, PRESECOTP2D was revised to Version 1.20 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [5, 6].  Test Case 1 for the validation of PRESECOTP2D 1.20 was identical to the test case for the validation of PRESECOTP2D 1.11ZO.  The acceptance criteria for this test case were satisfied by showing that the output from PRESECOTP2D 1.20 was identical to the output of the PRESECOTP2D 1.11ZO validation tests.  Test Cases 2 and 3 were modified to test code functionality that changed from Version 1.11ZO to version 1.20.  In these test cases, the acceptance criteria were satisfied by analysis of the output of PRESECOTP2D 1.20.  PRESECOTP2D 1.20 was used to support the CCA.

In June 1997, PRESECOTP2D was revised to Version 1.22 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [1, 2].  The validation test included the three test cases defined for Version 1.20, and an additional Test Case 4.  Test Case 4 was added to verify the variable time step functionality by showing the time increments produced by the code match those produced by the algorithm in the user's manual [7].  Acceptance criteria for Test Cases 1 - 3 were satisfied by comparing output of PRESECOTP2D 1.22 to the output of PRESECOTP2D 1.20, while the acceptance criteria for Test Case 4 were satisfied by analysis of the output of PRESECOTP2D 1.22. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from PRESECOTP2D 1.22 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of PRESECOTP2D 1.22 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of PRESECOTP2D 1.22 on those operating systems [8].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [9, 10].  In March 2004, the Agency completed a report documenting the Agency's approval of PRESECOTP2D 1.22 on the Compaq Alpha ES45 and 8400 both running OpenVMS 7.3-1 [11].  PRESECOTP2D 1.22 was used to support the 2004 CRA.

In 2006 SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [12].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [12].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for PRESECOTP2D 1.22 to ensure that it continued to function correctly.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [14, 16].

The discussion below documents the test methodology, validation test results, regression test results, and the Agency's conclusions with respect to PRESECOTP2D 1.23 running on the Solaris Blade with SunOS 5.11 [15].

Test Methodology

Testing of PRESECOTP2D 1.23 uses four regression tests that verify that the code satisfies the acceptance criteria listed in the RD/VVP [15].  One additional test was run to verify that the code reads MODFLOW files correctly [15].  PRESECOTP2D 1.22 output files on the Compaq ES47 computers were converted to text as needed, moved to the Solaris Blade, and compared.  The comparison was performed with a Python utility, Regr_Diff.py, which compares text files and calculates the relative percent difference (RPD) [15].  RPD values less than 1E-4 are acceptable.
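
To make the RPD comparison concrete, a minimal sketch of a line-by-line relative percent difference check between two text output files is shown below.  It is not SNL's Regr_Diff.py; the file names, token parsing, and the exact form of the RPD calculation are assumptions made only for illustration.

    # Minimal sketch of a line-by-line relative percent difference (RPD) check.
    # Not SNL's Regr_Diff.py; file names, parsing rules, and the RPD formula are
    # illustrative assumptions.
    RPD_LIMIT = 1.0e-4

    def rpd(a, b):
        denom = (abs(a) + abs(b)) / 2.0
        return 0.0 if denom == 0.0 else abs(a - b) / denom

    def compare_files(old_path, new_path):
        flagged = []
        with open(old_path) as old, open(new_path) as new:
            for lineno, (old_line, new_line) in enumerate(zip(old, new), start=1):
                for old_tok, new_tok in zip(old_line.split(), new_line.split()):
                    try:
                        a, b = float(old_tok), float(new_tok)
                    except ValueError:
                        continue          # skip non-numeric tokens (dates, names, labels)
                    if rpd(a, b) > RPD_LIMIT:
                        flagged.append((lineno, a, b))
        return flagged

    for lineno, a, b in compare_files("vms_output.txt", "solaris_output.txt"):
        print(f"line {lineno}: {a} vs {b} exceeds the RPD threshold")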

Test Results

Test Cases 1, 2, and 3 show expected textual differences (e.g., file names, version numbers) and minor numeric differences below the RPD threshold.  However, Test Case 4 had one numeric difference above the RPD threshold.  SNL reviewed and explained this difference, showing that the difference in time step would not significantly impact computations.  All other Test Case 4 results were acceptable [15].  

Section 5.0 of the VD/VVP [15] describes the unit test that verifies that PRESECOTP2D 1.23 "...can read an actual MODFLOW file..." by manually converting binary results and comparing them using the difference utility code.  No results were above the RPD threshold.   

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences.  EPA closely examined Test Case 4, which had one RPD greater than 1E-4, and agrees with SNL's explanation.  EPA also closely reviewed the unit test and concludes that it is adequate.  The Agency concludes that PRESECOTP2D 1.23 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11. 
References 

 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for PRESECOTP2D Version 1.22."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45957. 
 WIPP PA (Performance Assessment) 1997.  "Validation Document for PRESECOTP2D Version 1.22."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45965. 
 "Primitive Data Package" for PRESECOTP2D, Version 1.11ZO, version date 8/16/93, Binder with the following information:  Abstract User Manual, On-Line Help, Verification, Review, Driver, and Source.  Sandia National Laboratories.  Sandia WIPP Central Files Record #220278. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for PRESECOTP2D Version 1.11ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #23324. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for PRESECOTP2D Version 1.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37295. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for PRESECOTP2D Version 1.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #40254. 
 WIPP PA (Performance Assessment) 1997.  "User's Manual for PRESECOTP2D Version 1.22."  Sandia National Laboratories.  Sandia Central Files WPO #45963. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PRESECOTP2D 1.04 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543591.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Validation Document for PRESECPTP2D Version 1.23 dated May 2, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559779. 
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #557765.

RELATE

This section presents the regression test results for RELATE.  RELATE transfers information from one CAMDAT database file (the "Reference" database) to another CAMDAT database file (the "Object" database) using either the relative positions of the meshes defined on the reference and object databases, or a symbolic mapping between the material and property names on the reference database and the material and property names on the object database.  CAMDAT database files are also referred to as CDB files.
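
As a rough illustration of a position-based transfer between a reference database and an object database, a minimal sketch that carries nodal values from a reference mesh onto an object mesh using a nearest-position rule is shown below.  This is a deliberately simplified stand-in under assumed data structures and values; it is not RELATE's actual mapping algorithm.

    # Simplified position-based transfer of nodal values from a reference mesh to an
    # object mesh.  Illustrative only; not RELATE's actual mapping algorithm.
    import math

    def nearest_value(point, ref_nodes, ref_values):
        # copy the value from the reference node closest to the object node
        best = min(range(len(ref_nodes)), key=lambda i: math.dist(point, ref_nodes[i]))
        return ref_values[best]

    ref_nodes  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # reference mesh coordinates
    ref_values = [10.0, 20.0, 30.0]                     # a property on the reference mesh
    obj_nodes  = [(0.1, 0.1), (0.9, 0.2)]               # object mesh coordinates

    obj_values = [nearest_value(p, ref_nodes, ref_values) for p in obj_nodes]
    print(obj_values)                                   # values carried onto the object mesh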

Introduction

RELATE Version 1.42ZO was validated in October 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of three test cases met the acceptance criteria defined in the RD/VVP for RELATE 1.42ZO [1]. 

In March 1996, RELATE was revised to Version 1.43 and was validated on a DEC Alpha 2100 with OpenVMS 6.1.  Test cases identical to the test cases for the validation of RELATE 1.42ZO were run.  The acceptance criteria for these test cases were satisfied by showing that the output from RELATE 1.43 was identical to the output of the RELATE 1.42ZO validation tests.  RELATE 1.43 was used to support the CCA. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from RELATE 1.43 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of RELATE 1.43 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's conclusions with respect to the migration and verification of RELATE 1.43 on those operating systems [3].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [4, 5].  In March 2004, the Agency completed a report documenting the Agency's approval of RELATE 1.43 on the Compaq Alpha ES45 and 8400 [6].  RELATE 1.43 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [7].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [7].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for RELATE 1.43 to ensure that it continues to function correctly.

In 2012, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [9, 11].

The discussion below documents the 2012 test methodology, regression test results, and the Agency's conclusions with respect to RELATE 1.45 running on the Solaris Blade with SunOS 5.11 [10].

Test Methodology

The tests for this code comprised the three test cases described in the Requirements Document & Verification and Validation Plan for RELATE Version 1.42ZO (RD/VVP) [1].  Regression test results from RELATE 1.43 run on the Compaq ES40 with OpenVMS 7.3-1 were compared, using the VMS DIFFERENCE command, to results from the validation tests of RELATE 1.43 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2.

CAMDAT database (CDB) files are produced in each of the three RELATE test cases.  The output CDB files are converted from binary to ASCII for comparison during the validation process.  In the previous RELATE 1.43 validation, the CDB files were converted using GROPECDB 2.10.  GROPECDB has since been revised to Version 2.12.  GROPECDB 2.12 was validated in June 1996 on a DEC Alpha 2100 with OpenVMS 6.1 [2].  GROPECDB 2.12 has also been validated on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 as part of the hardware regression test (see Section 5.11 above). 

In 2012, SNL regression tested RELATE 1.45 using an approach similar to that described above.  SNL used three test cases to perform regression testing of RELATE 1.45 on the Solaris Blade with SunOS 5.11, comparing against the results of RELATE 1.43 on the Compaq ES47 with OpenVMS 8.2.  The VMS results were transferred to the Solaris and compared using the UNIX diff command [10].  

Test Results

The results of the tests referenced above show that only very minor differences (e.g., spacing, version number) were found for the three test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of RELATE 1.45.  The Agency concludes that RELATE 1.45 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for RELATE Version 1.42ZO."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #24184. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for GROPECDB Version 2.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of RELATE 1.43 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543592.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for RELATE Version 1.45 Regression Testing for the Solaris Blade, dated October 25, 2012."  Sandia National Laboratories. Sandia WIPP Central Files ERMS #557826.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

SECOTP2D

This section presents the validation and regression test results for SECOTP2D.  SECOTP2D performs single or multiple component radionuclide transport in fractured or granular aquifers.  Fractured porous media are represented using a dual porosity model.  The code uses total variation diminishing (TVD) schemes to model the advective part of the transport equation.
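
Because the TVD treatment of advection is a distinguishing feature of SECOTP2D, a generic one-dimensional TVD advection update with a minmod slope limiter is sketched below.  It is a textbook-style illustration using an assumed constant velocity, periodic grid, and initial pulse; it is not SECOTP2D's actual discretization.

    # Generic 1-D TVD advection sketch (constant positive velocity, minmod limiter).
    # Textbook-style illustration only; not SECOTP2D's discretization.
    def minmod(a, b):
        if a * b <= 0.0:
            return 0.0
        return a if abs(a) < abs(b) else b

    def tvd_step(c, u, dx, dt):
        n = len(c)
        # limited slope in each cell (periodic boundaries via index wrap-around)
        slopes = [minmod(c[i] - c[i - 1], c[(i + 1) % n] - c[i]) for i in range(n)]
        # upwind flux through the right face of each cell, reconstructed from cell i
        flux = [u * (c[i] + 0.5 * (1.0 - u * dt / dx) * slopes[i]) for i in range(n)]
        # conservative update
        return [c[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

    # advect a square pulse on a periodic grid
    dx, dt, u = 0.01, 0.005, 1.0
    c = [1.0 if 0.2 <= i * dx <= 0.4 else 0.0 for i in range(100)]
    for _ in range(40):
        c = tvd_step(c, u, dx, dt)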

Introduction

Since the CCA, the SECOTP2D code has undergone a series of revisions.  SECOTP2D Version 1.30 was used in the WIPP CCA.  SECOTP2D 1.30 was validated in April 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of three test cases met the acceptance criteria defined in the RD/VVP for SECOTP2D 1.30 [3, 4].  SECOTP2D 1.30 was used to support the CCA.

In July 1997, SECOTP2D was revised to Version 1.41 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of six new test cases met the acceptance criteria defined in the RD/VVP for SECOTP2D 1.41 [1, 2]. 

In order to test new operating systems that were added in 2002 - 2003, regression test results from SECOTP2D 1.41 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of SECOTP2D 1.41 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the EPA completed a report documenting the Agency's conclusions with respect to the migration and verification of SECOTP2D 1.41 on those operating systems [5].  SECOTP2D 1.41 was used to support the 2004 CRA.

In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [6, 7].  Modifications were also made to SECOTP2D 1.41 and the version number was changed from 1.41 to 1.41A [8].  In March 2004, the Agency completed a report documenting the Agency's approval of SECOTP2D 1.41A on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [9].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [10].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [10].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for SECOTP2D 1.41A to ensure that it continued to function correctly.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with the OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors and the SunOS 5.11 operating system [12, 14].  SNL performed validation testing and regression testing to verify that SECOTP2D 1.43 continues to perform PA calculations correctly ([13] page 7).

The discussion below documents the test methodology, regression test results, qualification test results and the Agency's conclusions with respect to SECOTP2D 1.43 running on the Solaris Blade with SunOS 5.11.

Test Methodology

The tests for this code comprised the four test cases described in the Requirements Document & Verification and Validation Plan for SECOTP2D Version 1.41 (RD/VVP) [1].  Regression test results from SECOTP2D 1.41A run on the Compaq ES40 with OpenVMS 7.3-1 were compared, using the VMS DIFFERENCE command, to results from the validation tests of SECOTP2D 1.41A run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 [11].

In 2013, SNL performed validation and regression testing of SECOTP2D 1.43 on the Solaris Blade with SunOS 5.11 using all six test cases (1, 2, 3, 4, 6A, and 6B ([13] Section 3.0)) to verify that the code performs PA calculations correctly.  These test cases verify that SECOTP2D 1.43 satisfies the acceptance criteria of the thirteen Functional Requirements (R.1 through R.13), three External Interface Requirements (R.14 through R.16), and eight Additional Functionality Requirements (A.1 through A.8) as noted in [13] Table 3-1 and described in the RD/VVP [2].  SNL states, "Where possible, regression testing was used to verify that SECOTP2D Version 1.43 test results.  Where regression testing results in many differences [RPD above 1E-4], the SECOTP2D 1.43 test results were verified using the acceptance criteria from the SECOTP2D 1.41 RD/VVP [2]." ([13] page 7)  SNL applied this approach to specific output values as needed.

Test Results

The six test cases for SECOTP2D 1.43 were executed on the Solaris Blade with SunOS 5.11.  SNL used a combination of regression testing and acceptance-criteria testing to verify that the code continues to perform the WIPP PA correctly ([13] page 7).  Most differences were textual, or the data values were below the RPD threshold.  The remaining requirements were verified against the acceptance criteria specified in the VD ([13] page 7) to confirm continued PA performance of the code.

SNL used the Regr_Diff.py Python utility, which performs a line-by-line comparison and calculates the RPD for input files ([13] page 9).  Any difference with an RPD greater than 1E-4 must be explained; this was usually done by comparing the results to the acceptance criteria for these values ([13] Section 4.0).  Test Cases 3 and 4 had differences below the RPD limit and were considered by SNL to satisfy the acceptance criteria.  Most of the differences for Test Cases 1 and 2 were below the RPD limit; however, a few output values required further review against the acceptance criteria.  For Test Cases 6A and 6B, SNL used the acceptance criteria to validate these tests [13].

The Agency's Conclusions

The Agency closely examined SNL's code validation process and found that it adequately complies with the approach described in the VD ([13] Section 3.0) and satisfies the acceptance criteria.  EPA found that the numerical differences were the result of conversion to double precision numeric format on the Solaris and that the new functionality of the code was adequately tested and reviewed.  The Agency concludes that SECOTP2D 1.43 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade platform with SunOS 5.11.
References 

 WIPP PA (Performance Assessment) 1997.  "Requirements Document and Verification and Validation Plan for SECOTP2D Version 1.41."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45732.
 WIPP PA (Performance Assessment) 1997.  "Validation Document for SECOTP2D Version 1.41."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45735. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document and Verification and Validation Plan for SECOTP2D Version 1.30."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #36693. 
 WIPP PA (Performance Assessment) 1996.  "Validation Document for SECOTP2D Version 1.30."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #36694. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 WIPP PA  -  "Change Control Form - SECOTP2D Version 1.41 to 1.41A" 2003.  Sandia National Laboratories.  Sandia WIPP Central Files WPO #526257. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  Validation (Performance Assessment) 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of SECOTP2D 1.41A on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 1, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543596.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162, dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA _ "Validation Document for SECOTP2D Version 1.43, dated August 5, 2013. Sandia National Laboratories.  Sandia WIPP Central Files ERMS #559784.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765. 

STEPWISE

This section presents the regression test results for STEPWISE.  STEPWISE is a statistical code that evaluates variable importance by developing regression models between the observed response and input variables using either a forward, backward, or stepwise regression procedure on the raw or ranked data.  STEPWISE 2.20 was validated in November 1995 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of three test cases met the acceptance criteria defined in the RD/VVP for STEPWISE 2.20 [2].  
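
To make the stepwise procedure concrete, a minimal sketch of forward selection on rank-transformed data is shown below.  The data, the entry criterion (a minimum R-squared gain), and the function names are assumptions for illustration; this is not the STEPWISE code or its actual entry/exit statistics.

    # Minimal forward stepwise selection sketch on rank-transformed data.
    # Illustrative only; not the STEPWISE code or its selection statistics.
    import numpy as np

    def ranks(a):
        return np.argsort(np.argsort(a)).astype(float)

    def forward_stepwise(X, y, min_gain=0.02):
        Xr = np.column_stack([ranks(col) for col in X.T])   # rank-transform inputs
        yr = ranks(y)                                        # rank-transform response
        selected, r2_prev = [], 0.0
        while len(selected) < Xr.shape[1]:
            best = None
            for j in range(Xr.shape[1]):
                if j in selected:
                    continue
                cols = np.column_stack([np.ones(len(yr))] + [Xr[:, k] for k in selected + [j]])
                beta, *_ = np.linalg.lstsq(cols, yr, rcond=None)
                resid = yr - cols @ beta
                r2 = 1.0 - (resid @ resid) / ((yr - yr.mean()) @ (yr - yr.mean()))
                if best is None or r2 > best[1]:
                    best = (j, r2)
            if best is None or best[1] - r2_prev < min_gain:
                break                                        # stop when the gain is negligible
            selected.append(best[0])
            r2_prev = best[1]
        return selected, r2_prev

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(100, 4))
    y = 3.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=100)
    print(forward_stepwise(X, y))      # expected to pick variable 1 first, then 3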

Introduction

In November of 1996, STEPWISE was revised from Version 2.20 to Version 2.21.  Version 2.21 was validated on the DEC Alpha 2100 with OpenVMS 6.1 by a combination of acceptance and regression testing.  Test Cases 1 - 3 were validated through regression testing while Test Cases 4 and 5 underwent acceptance testing.  Test Cases 4 and 5 were created to illustrate the correction of errors found in Version 2.20, and were validated by comparing output results to the acceptance criteria defined in the RD/VVP for STEPWISE 2.21 [1]. STEPWISE 2.20 was used to support the CCA. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from STEPWISE 2.21 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of STEPWISE 2.21 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the EPA completed a report documenting the Agency's conclusions with respect to the migration and verification of STEPWISE 2.21 on those operating systems [3].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [4, 5].  In March 2004, the Agency completed a report documenting the Agency's approval of STEPWISE 2.21 on the Compaq Alpha ES45 and 8400, which were both running OpenVMS 7.3-1 [6].  STEPWISE 2.21 was used to support the 2004 CRA. 

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [7].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [7].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for STEPWISE 2.21 to ensure that it continued to function correctly.

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [9, 11].

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to STEPWISE 2.22 running on the Solaris Blade with SunOS 5.11 [10].
Test Methodology

The tests for this code comprised the five test cases described in the Requirements Document & Verification and Validation Plan for STEPWISE Version 2.21 (RD/VVP) [1].  Regression test results from STEPWISE 2.21 run on the Compaq ES40 with OpenVMS 7.3-1 were compared, using the VMS DIFFERENCE command, to results from the validation tests of STEPWISE 2.21 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 [8].

In 2013, SNL regression tested STEPWISE 2.22 by comparing results executed on the Solaris Blade with SunOS 5.11 against STEPWISE 2.21 results executed on the Compaq ES47 with OpenVMS 8.2 [10].  Results on VMS were transferred to the Solaris, converted to Solaris platform format as needed, compared to the Solaris results using the UNIX diff command, and plotted for visual comparison as needed [10].    

SNL noted: "Small numeric differences are expected due to slight differences in the two platforms. For STEPWISE, differences in the final digit of numeric values (with rounding) are acceptable." and "Small variations in the plots are acceptable, but any difference in the location of data on the plots must be explained" [10].

Test Results

All test cases showed only minor differences (e.g., spacing, version number) [10].  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.  Plotted results showed minor differences for all test cases.  Test Case 2 showed minor numeric differences in the final digit.  Test Case 1 showed one value whose last two digits differed; SNL provided a reasonable explanation of this difference [10]. 

The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, file names, and minor numeric differences.  EPA reviewed SNL's explanation of the insignificant numeric differences and found it to be reasonable.  The Agency concludes that STEPWISE 2.22 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.

References 

 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for STEPWISE Version 2.21."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #42250. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for STEPWISE Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #27767.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280. 
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of STEPWISE 2.21 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543589.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162 dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Validation Document for STEPWISE Version 2.22 dated July 30, 2013." Sandia National Laboratories. Sandia WIPP Central Files ERMS #560366.
 WIPP PA  -  "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

SUMMARIZE

This section presents the regression test results for the SUMMARIZE code.  SUMMARIZE reads ordered sets of sampled CAMDAT (CDB) data files.  For the current regulatory calculations, there are 100 different input CAMDAT files in each ordered set.  These files would normally constitute the principal output from one of the WIPP-PA analytical codes.  SUMMARIZE can extract a single item (or set of items) from each of its 100 input files and write the 100 item values to a single output file.  If the input data constitute time histories, and they often do, SUMMARIZE will select the one value from each input file that corresponds most closely to a user-selected time or set of times from the time history.  If requested, SUMMARIZE can interpolate input data to the exact time specified by the user.  As a result of this process, selected data that originally resided on 100 different binary CAMDAT files are rearranged and reported on a single ASCII output file.

SUMMARIZE reports its results in any of several convenient ASCII formats that can be read by commercial and WIPP-PA analysis and plotting codes.  SUMMARIZE's output format is normally selected by the user so as to be compatible with the analysis and/or plotting code that will be applied next in the run sequence.
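
As a simplified picture of the extraction and interpolation step, the sketch below pulls one value per input vector at a requested time, interpolating linearly when no record falls exactly at that time, and writes a single combined output row.  The data layout, names, and output format are assumptions for illustration; this is not the CAMDAT format or the SUMMARIZE logic.

    # Minimal sketch: extract one interpolated value per sampled vector at a requested
    # time and write one combined ASCII row.  Hypothetical layout; not SUMMARIZE itself.
    def interpolate_at(times, values, t):
        # linear interpolation, clamped to the end points of the time history
        if t <= times[0]:
            return values[0]
        if t >= times[-1]:
            return values[-1]
        for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    def summarize(histories, t_request):
        # histories: one (times, values) pair per sampled vector (e.g., 100 of them)
        return [interpolate_at(times, values, t_request) for times, values in histories]

    histories = [([0.0, 100.0, 200.0], [0.0, 1.5, 2.0]),
                 ([0.0, 100.0, 200.0], [0.0, 0.8, 1.9])]
    row = summarize(histories, 150.0)
    print(",".join(f"{v:.6g}" for v in row))    # one ASCII output row across all vectors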

Introduction

Since the CCA, the SUMMARIZE code has undergone a series of revisions.  SUMMARIZE Version 2.10 was used in the WIPP CCA.  SUMMARIZE 2.10 was validated in May 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of seven test cases met the acceptance criteria defined in the RD/VVP for SUMMARIZE 2.10 [2, 3]. 

In August 1996, SUMMARIZE was revised to Version 2.15 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [5, 6].  Test Cases 2, 3, 5, 6, and 7 for the validation of SUMMARIZE 2.15 were identical to test cases for the validation of SUMMARIZE 2.10.  The acceptance criteria for these test cases were satisfied by DOE when they demonstrated that the output from SUMMARIZE 2.15 was identical to the output of the SUMMARIZE 2.10 validation tests.  Test Cases 1 and 4 were modified to test code functionality that changed from Version 2.10 to Version 2.15.  In these test cases, the acceptance criteria were satisfied by analysis of the output of SUMMARIZE 2.15. 

In July 1997, SUMMARIZE was revised to Version 2.20 and was validated on a DEC Alpha 2100 with OpenVMS 6.1 [1, 2].  The validation test included the seven test cases defined for Version 2.15, and an additional Test Case 8.  Test Case 8 was added to verify the correction of an error found in Version 2.15.  Acceptance criteria for Test Cases 1 - 7 were satisfied by comparing output of SUMMARIZE 2.20 to the output of SUMMARIZE 2.15, while the acceptance criteria for Test Case 8 were satisfied by analysis of the output of SUMMARIZE 2.20. 
SUMMARIZE 2.20 has one current Software Problem Report [6].  The subroutine SURFER_PRINT_TWO_D_GRID prints data to a file that can be read by the SURFER plotting program.  This subroutine contains an error that causes the data to be printed incorrectly.  The error was determined by DOE (and checked by EPA) to be of no consequence, since the SURFER output capability is not used by WIPP PA.  SUMMARIZE has not been revised to correct the error.  Test Case 3 produces SURFER-formatted output as part of the test case.  Hence, DOE expected to find numerical differences in the output of Test Case 3. 

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from SUMMARIZE 2.20 run on the ES40 using OpenVMS 7.3-1 were compared to results from the validation tests of SUMMARIZE 2.20 run on a DEC Alpha 2100 using OpenVMS 6.1.  In June 2003, the Agency completed a report documenting the Agency's approval of SUMMARIZE 2.20 on those operating systems [7].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, both running OpenVMS 7.3-1 [8, 9].  In September 2004, the Agency concluded that SUMMARIZE 2.2 met the acceptance criteria specified in the VVP [1] and was considered validated on the COMPAQ ES45 and 8400 platforms using OpenVMS 7.3-1 [10].  SUMMARIZE 2.2 was used to support the 2004 CRA.

In the 2004 CRA review of the SUMMARIZE code, EPA and DOE both found errors in how the code selected input data and accumulated results when generating output summaries.  These errors were traced back to errors in the SUMMARIZE input files and not the code itself.  The errors were corrected for use with Version 3.00 of the SUMMARIZE code and verified by EPA [28].

Version 3.00 of the code was subsequently validated to run on the COMPAQ ES40 with OpenVMS 7.3-1 [12].  Following the validation of the code on the ES40, it was regression tested on the ES45 [13, 15]. 

In 2005, two Software Problem Reports were completed that identified two issues with SUMMARIZE 3.0 [18, 19].  SPR 05-002 discussed an underflow problem when interpolating very small numbers.  The error occurs only when the absolute difference between the two values to be interpolated is less than 0.29E-39 [18].  SPR 05-003 discussed a problem in the untested feature TIMES=ALL.  When TIMES=ALL is requested in the *TIMES environment, an extra record is appended to the expected output records.  This extra record contains meaningless values [19].

In 2005, a Change Control Form was completed that indicates SUMMARIZE 3.0 will be updated to SUMMARIZE 3.01 after the problems are resolved [20].  The problem identified in SPR 05-002 was corrected by converting the single-precision CAMDAT input values to double-precision and doing all real arithmetic in double-precision.  SUMMARIZE 3.00 uses the CAMSUPES_LIB method of dynamic allocation.  Where appropriate, SUMMARIZE 3.01 arrays will be declared and allocated with Fortran 90 constructs.  To rectify the problem identified in SPR 05-003, the extra record appended to the output will not appear with SUMMARIZE 3.01 [19].  

Following the provisions of the Change Control Form, a new ID [21] and VVP/VD [22] were developed for SUMMARIZE 3.01.  As indicated in the VVP/VD, SUMMARIZE Version 3.01 is a modification of SUMMARIZE Version 3.00, and code has been converted to double precision to prevent problems interpolating very small numbers.  The error that occurred when requesting all CAMDAT times has been fixed and a new test case, Test Case 14, verifies that the problems addressed in the Change Control Form [20] have been corrected.  For this test case, the tester must examine the output data file for content.

Software Problem Report 06-001 (2005) indicated that SUMMARIZE 3.01 could incorrectly determine that the attribute requested in the input control file does not exist for the element block and abort [23].  The problem is caused by an error in the DBEL2BLK routine, which should return the element block for a given element.  SUMMARIZE could also incorrectly determine that the attribute does exist for the element block, but this could only occur if the input control file was incorrect.  Since the problem causes SUMMARIZE to abort, it will not have an effect on calculation results. 

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [26].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [26].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for SUMMARIZE 3.01 to ensure that it continued to function correctly [28].

In 2013, SNL migrated WIPP PA software from the VMS Alpha platform on a cluster of Compaq computers with OpenVMS 8.2 operating system to the Solaris Blade platform on Sun hardware with Intel based processors with SunOS 5.11 operating system [29, 31].  Results on VMS were transferred to the Solaris, converted to Solaris platform format as needed, and compared to the Solaris results using the UNIX diff command and plotted for visual comparison as needed [30].

SNL notes in the installation and checkout procedure that: "AP 162 states that numeric differences are expected, due to the change to double precision floating point variables in the code and differences in the two platforms. Some of the SUMMARIZE tests show a large number of numeric differences. AP 162 states that relative percent differences (RPD) of 1E-4 or less are insignificant" [30].

The discussion below documents the 2012 test methodology, regression test results, and the Agency's conclusions with respect to SUMMARIZE 3.02 running on the Solaris Blade with SunOS 5.11 [30].

Test Methodology

The test set for SUMMARIZE Version 3.00, as documented in the VVP/VD [12], consisted of 12 test cases; Test Cases 1, 2, and 4 through 13.  (Test Case 3 was removed from the test set.)  In the revised VVP/VD for SUMMARIZE 3.01, these 12 test cases are regression-tested by running SUMMARIZE Version 3.01 on the Compaq ES40 with OpenVMS V7.3-1 and comparing its output with the output from the validation of SUMMARIZE Version 3.00 [27], which was also run on the Compaq ES40 with OpenVMS V7.3-1.

SUMMARIZE needs the following input files: an input control file and a set of CAMDAT data files.  The SUMMARIZE input control file must always be modified slightly for each test case because it contains the name of the output data file, which includes the program class.  The differences between the SUMMARIZE Version 3.01 and Version 3.00 input control files are listed with the VMS DIFFERENCE utility in the VVP/VD [22].  The CAMDAT data files were the same files used in the previous validation of SUMMARIZE Version 3.00.

A successful SUMMARIZE execution generates one or more ASCII output data files and a log file.  The log file is not part of the code's functionality and is usually not examined or compared. Each output data file is compared to the corresponding file from the previous validation of SUMMARIZE Version 3.00 using the VMS DIFFERENCE utility.  DOE did not expect any differences for test cases that do not interpolate values, i.e., those test cases that request the "nearest" time step value or request only attribute or property values, which have no associated time.  However, the DOE notes that the conversion to double precision may cause numerical differences in interpolated values.  These differences make use of the VMS DIFFERENCE utility impractical.  Therefore, the DOE developed two short FORTRAN test programs to help verify that these differences are due to the conversion from single to double precision.  COMPARE_SUM compares the Version 3.01 and Version 3.00 output files line-by-line and identifies values that have a significant relative difference.  INTERP_CDB reads values for selected global variables for all times from the input CAMDAT file, and performs the interpolation at specified times in double and single precision.  
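
A minimal sketch of the kind of check performed by the two helper programs is shown below: it carries out the same linear interpolation in single and double precision (numpy.float32 standing in for the single-precision CAMDAT values) and reports the relative difference.  The data values are assumptions for illustration; this is not the COMPARE_SUM or INTERP_CDB source.

    # Minimal sketch: interpolate the same segment in single and double precision and
    # report the relative difference.  Not COMPARE_SUM/INTERP_CDB; data are illustrative.
    import numpy as np

    def lerp(t0, v0, t1, v1, t, dtype):
        # linear interpolation with all arithmetic carried out in the given precision
        t0, v0, t1, v1, t = (dtype(x) for x in (t0, v0, t1, v1, t))
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    # a history segment whose end values differ only slightly (hypothetical numbers)
    t0, t1 = 100.0, 200.0
    v0, v1 = 7.6543211e-3, 7.6543219e-3

    v32 = lerp(t0, v0, t1, v1, 150.0, np.float32)   # single precision, as stored on the CDB
    v64 = lerp(t0, v0, t1, v1, 150.0, np.float64)   # double precision, as in SUMMARIZE 3.01
    rel = abs(float(v64) - float(v32)) / abs(float(v64))
    print(f"single={float(v32):.10e}  double={v64:.10e}  relative difference={rel:.3e}")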

Test Cases 12 and 13 check error conditions that are designed to cause SUMMARIZE to abort.  If SUMMARIZE aborts, no output data file is generated; the log file indicates that SUMMARIZE detected the error condition and aborted.  For these two test cases, DOE compares the log file to the corresponding file from the previous validation of SUMMARIZE Version 3.00 using the VMS DIFFERENCE utility.

A new test case, Test Case 14, verifies that the problems addressed in the CCF [20] have been corrected.  For this test case, the tester must examine the output data file for content.

All test cases are run using the WIPP PA run control system.  The script and script input files reside in class QB0301 of library EVAL in the SCMS.  All other files related to validation testing of SUMMARIZE Version 3.01 reside in class QB0301 of library SUM in the SCMS.  All test inputs are fetched at run time by the scripts, and test outputs/results and run logs are automatically stored by the scripts in class QB0301 of library SUM in the SCMS.  A log file that indicates the input/output files is generated by each test case execution.

Once the tests described above were completed and favorable comparisons were obtained between the results from SUMMARIZE Version 3.00 and Version 3.01 both running on the Compaq ES40 with OpenVMS V7.3-1, additional regression testing was performed.  This testing involved comparing the regression testing results for the 13 test cases described in the VVP/VD for SUMMARIZE 3.01 and run on the ES40, ES45 and the ES47 with OpenVMS 8.2 to the results from the validation tests of SUMMARIZE 3.01 run on the ES40 with OpenVMS 7.3-1. 

In 2012, SNL regression tested SUMMARIZE 3.02 for use on the Solaris Blade with SunOS 5.11 against results on the Compaq ES47 with OpenVMS 8.2.  The regression test included thirteen test cases, Test Cases 1, 2, and 4 through 14 (Test Case 3 was previously removed), which examined numerous functions of the code [30].  

SNL developed a Python utility, Regr_Sum.py, to compare results line-by-line, calculate the RPD, and report lines with differences greater than the RPD threshold.  SNL states that "Any differences with an RPD greater than 1 E-4 must be explained" ([30] page 8).  SNL compared results with differences greater than the RPD threshold to expected values to verify that the differences have a reasonable justification [30].

Test Results

Test Case 1 shows no numeric differences and acceptable textual differences ([30] Section 5.0).  Test Cases 4, 14, and 6 through 11 show acceptable textual differences (e.g., spacing, version numbers, file names) and numeric differences below the RPD threshold. 

Test Cases 2 and 5 have significant differences.  SNL notes in Sections 5.2 and 5.5 of the installation and checkout procedure [30] that these differences are "... primarily due to the change in the seconds-to-year conversion factor...".  SNL performed additional examination and explained the differences in Appendix C [30].

Test Cases 12 and 13 verify error handling for the code ([30] Sections 5.12 and 5.13) and showed acceptable differences.

The Agency's Conclusions

The Agency closely examined the regression testing of SUMMARIZE 3.02 and found that SNL's review and explanation of differences is adequate. The Agency concludes that SUMMARIZE 3.02 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the Solaris Blade with SunOS 5.11.
References 

 WIPP PA (Performance Assessment) 1997.  "Requirements Document & Verification and Validation Plan for SUMMARIZE Version 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #46449. 
 WIPP PA 1996.  "Requirements Document & Verification and Validation Plan for SUMMARIZE Version 2.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37458. 
 WIPP PA 1996.  "Validation Document for SUMMARIZE Version 2.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37461. 
 WIPP PA (Performance Assessment) 1996.  "Requirements Document & Verification and Validation Plan for SUMMARIZE Version 2.15."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #40252. 
 WIPP PA 1996.  "Validation Document for SUMMARIZE Version 2.15."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #40254. 
 WIPP PA 1997.  "Software Problem Report 97-016 for SUMMARIZE Version 2.10, 2.15, 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #246511. 
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA.  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA.  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 USEPA  -  "Review of WIPP Performance Assessment Computer Code Migration Activities  - Version 2."  September 2004. 
 WIPP PA 2005.  "Change Control Form for SUMMARIZE, Version 2.20 Sandia National Laboratories."  ERMS #540117.
 WIPP PA 2005.  "Verification and Validation Plan/Validation Document for SUMMARIZE Version 3.0."  Sandia National Laboratories.  Sandia WIPP Central Files. 
 WIPP PA 2005.  "Installation and Checkout for SUMMARIZE Version 3.0 Regression Testing for the Compaq ES45 Platform."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 2005.  "Verification of the SUMMARIZE Interface in the CRA-2004 Performance Assessment Baseline Calculation."  Sandia National Laboratories.  Sandia WIPP Central Files 540977.
 Dunagan, S. 2004.  Explanation of how SUMMARIZE and CCDFGF are checked/verified/tested for capturing the correct CDB data streams in the WIPP CRA-2004 Performance Assessment.  Technical Memorandum ERMS #536767, Sandia National Laboratories, Carlsbad, New Mexico.  
 Piper, L. 2004.  4th Response Submittal to EPA.  U.S. Department of Energy, Carlsbad, New Mexico. 
 WIPP PA (Performance Assessment) 2005.  "Requirements and Design Document for SUMMARIZE Version 3.0."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #540107. 
 WIPP PA 2005.  "Software Problem Report 05-002 for SUMMARIZE 3.0."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519714. 
 WIPP PA 2005.  "Software Problem Report 05-003 for SUMMARIZE 3.0."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #519714. 
 WIPP PA 2005.  "Change Control Form for SUMMARIZE, Version 3.0."  Sandia National Laboratories."  ERMS #541842. 
 WIPP PA 2005.  "Implementation Document for SUMMARIZE 3.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542062.
 WIPP PA 2006.  "Verification and Validation Plan/Validation Document for SUMMARIZE 3.01."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542063.
 WIPP PA 2005.  "Software Problem Report 06-001 for SUMMARIZE 3.00 and 3.01."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #542399. 
 USEPA  -  "Technical Support Document for Section 194.23:  Models and Computer Codes-PABC Codes Changes Review."  March 2006.  Docket No. A-98-49/II-B1-8.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of SUMMARIZE 3.01 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543462.
 EPA 2006.  Technical Support Document for Section 194.23:  Review of WIPP Recertification Performance Assessment Computer Codes  -  CRA Code Review Docket No. A-98-49 II-B1-7.
 WIPP PA  -  "Summary Report on the Migration of the WIPP PA Codes From VMS to Solaris, AP-162 dated December 23, 2013." Sandia National Laboratories.  Sandia WIPP Central Files ERMS #561457.
 WIPP PA  -  "Installation and Checkout for SUMMARIZE Version 3.02 Regression Testing for the Solaris Blade dated December 11, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557814.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.

LIBRARIES

SNL states in the Solaris migration plan ([15] AP-162, page 7), "Libraries do not require separate validation because their subroutines are validated when the code into which they are linked is validated."  EPA examined this statement and agrees with its concept.  Programmers develop object libraries to store common routines or subroutines that are used by multiple programs.  These routines are typically database access, input/output, memory access, or similar subroutines used frequently by numerous programs.  Library routines are linked (combined) with the main computer code, such as BRAGFLO or CCDFGF, when the main code is created (compiled).  The Agency concludes that SNL's position is appropriate and closely examined these issues when reviewing the qualification of the main programs.  EPA found that all main line codes performed adequately.  The Agency left this section of the report unchanged to preserve historical review information.

This section presents the regression testing for the libraries, specifically the CAMCON_LIB Version 2.20, CAMDAT_LIB Version 1.25, CAMSUPES_LIB 2.22 and SDBREAD_LIB 2.11.

CAMCON_LIB 

This section presents the regression test results for the CAMCON_LIB software library.  CAMCON_LIB is a collection of routines that perform Quality Assurance, File processing, Free-Field Input processing, String processing, and Finite Element Index processing.  The data manipulations to be performed are expressed as algebraic equations involving the existing and/or newly created data.
5.35.1.1	Introduction
 
CAMCON_LIB 2.16 was used in the CCA [1].  CAMCON_LIB 2.16 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of seven Test Cases (1 through 7) met the acceptance criteria defined in the RD/VVP for CAMCON_LIB 2.16 (document Version 1.00) [3].  As a consequence of the upgrade to OpenVMS 7.3-1, CAMCON_LIB was re-compiled on the ES40 to create Version 2.20 [2].  No changes were made to the CAMCON_LIB source code.  The Implementation Document for CAMCON_LIB 2.20 documents the build of CAMCON_LIB 2.20 [6].
In January 1999 source code changes were made to CAMCON_LIB and the code was revised to Version 2.18.  CAMCON_LIB 2.18 was validated on a DEC Alpha 2100 with OpenVMS 7.1
[4, 5].  Test Cases 1 - 7 for the validation of CAMCON_LIB 2.18 were identical to test cases for the validation of CAMCON_LIB 2.16.  The acceptance criteria for these test cases were satisfied by showing that the output from CAMCON_LIB 2.18 was identical to the output of the CAMCON_LIB 2.16 validation tests.

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from CAMCON_LIB 2.20 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of CAMCON_LIB 2.20 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the EPA completed a report documenting the Agency's conclusions with respect to the migration and verification of CAMCON_LIB 2.20 on those operating systems [7].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [8, 9].  In March 2004, the Agency completed a report documenting the Agency's approval of CAMCON_LIB 2.20 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [10].  CAMCON_LIB 2.20 was used to support the 2004 CRA.

In 2006, an error was found in CAMCON_LIB 2.20 and is described in the Software Problem Report as, "If the routine STRCMPRS is called with a string that contains an internal blank, but does not have a blank at the end of the string, the routine will go into an endless loop.  For example, CALL STRCMPRS (STR,L), where STR is dimensioned CHARACTER*14 STR and set to STR= Initial Cavity" [11].  The SPR also notes that, "Since this problem causes the calling code to go into an endless loop, it could not have occurred in any analyses so far, and has thus had no impact."
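
The intended behavior of a routine of this kind can be illustrated with a short sketch.  The Python fragment below is a hypothetical illustration only; it assumes that STRCMPRS is meant to remove embedded blanks from a fixed-length string and return the compressed length, and it is not SNL's Fortran implementation (which is the code affected by the reported endless loop).

    def strcmprs(text: str, length: int) -> tuple[str, int]:
        """Hypothetical blank-compression routine (illustration only).

        Removes embedded blanks from the first `length` characters of a
        fixed-length string and returns the re-padded string and the
        compressed length.
        """
        compressed = text[:length].replace(" ", "")
        return compressed.ljust(length), len(compressed)

    # Example input from the SPR: a CHARACTER*14 string with an internal
    # blank and no trailing blank ("Initial Cavity" fills all 14 characters).
    padded, n = strcmprs("Initial Cavity", 14)
    print(repr(padded), n)   # 'InitialCavity ' 13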

A Change Control Form was also completed in 2006 that indicates the error in the STRCMPRS routine (string compression) will be corrected, and the test driver will be changed to allow more flexibility in naming the test output files [12].  Calls will be added to the test driver for this code to test the correction to the STRCMPRS routine and to test the QABANNER capability to display a short version of the QA banner.  These changes to CAMCON_LIB 2.20 were completed and the version was changed to CAMCON_LIB 2.21.   

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [13].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2 [13].  Because of these changes in the operating system and the addition of a new computing platform, regression testing has been conducted for CAMCON_LIB 2.21 to ensure that it continues to function correctly.

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CAMCON_LIB 2.21 running on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2.

5.35.1.2	Test Methodology

The tests for this library comprised the seven test cases described in the Requirements Document & Verification and Validation Plan for CAMCON_LIB Version 2.16 (RD/VVP) [3].  Regression testing results from CAMCON_LIB 2.21 run on the ES47 with OpenVMS 8.2 were compared to results from the validation tests of CAMCON_LIB 2.21 run on the ES40 with OpenVMS 7.3-1.  Regression testing results from CAMCON_LIB 2.21 run on the ES40 and ES45 with OpenVMS 8.2 were compared to the regression testing results from CAMCON_LIB 2.21 run on the ES47 with OpenVMS 8.2 [14]. 

The regression test methodology uses the VMS DIFFERENCE command to compare output from CAMCON_LIB 2.21 on the Compaq ES47 with OpenVMS 8.2 to the output from validation tests of CAMCON_LIB 2.21 on the Compaq ES40 with OpenVMS 7.3-1, and output from CAMCON_LIB 2.21 on the Compaq ES40 and ES45 with OpenVMS 8.2 to output from CAMCON_LIB 2.21 on the Compaq ES47 with OpenVMS 8.2.  The VMS DIFFERENCE command compares two files and identifies records that are different in the two files.
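
The comparison described above can be approximated outside of VMS with a simple file-difference script.  The Python sketch below is illustrative only and is not part of SNL's test procedure; the file names and the patterns treated as acceptable differences (run date and time, platform names, system version numbers, directories, and file names) are assumptions based on the discussion above.

    import difflib
    import re

    # Patterns for output lines that are expected to differ between platforms
    # and operating systems; these are illustrative, not SNL's actual filter.
    ACCEPTABLE = re.compile(
        r"(run date|run time|platform|openvms|directory|file name)", re.IGNORECASE)

    def significant_differences(baseline_path: str, candidate_path: str) -> list:
        """Return differing lines that are not covered by the acceptable patterns."""
        with open(baseline_path) as f:
            baseline = f.readlines()
        with open(candidate_path) as f:
            candidate = f.readlines()
        diffs = []
        for line in difflib.unified_diff(baseline, candidate, lineterm=""):
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
                if not ACCEPTABLE.search(line):
                    diffs.append(line)
        return diffs

    # An empty list plays the role of the DIFFERENCE check passing once the
    # acceptable differences are set aside (the file names are hypothetical).
    # print(significant_differences("test1_es40.out", "test1_es47.out"))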

5.35.1.3	Test Results

The results of the tests referenced above are that only very minor differences (e.g., spacing, version number) were found for the seven test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.
5.35.1.4	The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of CAMCON_LIB 2.21.  The Agency concludes that CAMCON_LIB 2.21 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.

5.35.1.5	References
 
 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for CAMCON_LIB Version 2.16" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #27736.
 WIPP PA (Performance Assessment) 1999.  "Change Control Form for CAMCON_LIB, Version 2.18."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #51637.
 WIPP PA (Performance Assessment) 1999.  "Regression testing of CAMCON_LIB Version 2.18."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #51629.
 WIPP PA (Performance Assessment) 1995.  "Implementation Document for CAMCON_LIB Version 2.20" (document Version 1.04).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #525736.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA 2006.  "Software Problem Report 06-001 for CAMCON_LIB 2.20."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #542992. 
 WIPP PA 2006.  "Change Control Form for CAMCON_LIB 2.20 [proposed 2.21]."  Sandia National Laboratories.  ERMS #542994.
 WIPP PA  -  2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of CAMCON_LIB 2.21 on the Compaq ES40, ES45, and ES47 Platforms dated May 15, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543447.
 WIPP PA - "AP-162 Analysis Plan for Migration of the Performance Assessment Codes to the Sun Solaris Blade Server Running with Intel Processors, Revision 0 dated June 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files ERMS #557765.
      

CAMDAT_LIB 

This section presents the regression test results for the CAMDAT_LIB software library.  CAMDAT_LIB is a collection of routines that read from and write to a computational database (CAMDAT) file for use by WIPP PA computer codes.  The data manipulations to be performed are expressed as algebraic equations involving the existing and/or newly created data. 

5.35.2.1	Introduction

CAMDAT_LIB 1.22 was used in the CCA.  CAMDAT_LIB 1.22 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of seven Test Cases (1 through 7) met the acceptance criteria defined in the RD/VVP for CAMDAT_LIB 1.22 [1, 2, 4].  As a consequence of the upgrade to OpenVMS 7.3-1, CAMDAT_LIB was re-compiled on the ES40 to create Version 1.25.  No changes were made to the CAMDAT_LIB source code.  The Implementation Document for CAMDAT_LIB 1.25 documents the build of CAMDAT_LIB 1.25 [3].

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from CAMDAT_LIB 1.25 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of CAMDAT_LIB 1.25 run on a DEC Alpha 2100 with OpenVMS 6.1 [5].  In June 2003, the EPA completed a report documenting the Agency's conclusions with respect to the migration and verification of CAMDAT_LIB 1.25 on those operating systems [6].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [7, 8].  In March 2004, the Agency completed a report documenting the Agency's approval of CAMDAT_LIB 1.25 on the Compaq Alpha ES45 and 8400 with OpenVMS 7.3-1 [9].  CAMDAT_LIB 1.25 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [10].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2 [10].  Because of these changes in the operating system and the addition of a new computing platform, regression testing has been conducted for CAMDAT_LIB 1.25 to ensure that it continues to function correctly.

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CAMDAT_LIB 1.25 running on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2.

5.35.2.2	Test Methodology

The tests for this library comprised the seven test cases described in the Requirements Document & Verification and Validation Plan for CAMDAT_LIB Version 1.22 (RD/VVP) [1].  The VMS DIFFERENCE command was used to compare regression test results from CAMDAT_LIB 1.25 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 to results from the validation tests of CAMDAT_LIB 1.25 run on the Compaq ES40 with OpenVMS 7.3-1 [11].

5.35.2.3	Test Results

The results of the tests referenced above are that only very minor differences (e.g., spacing, version number) were found for the seven test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.

5.35.2.4	The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of CAMDAT_LIB 1.25.  The Agency concludes that CAMDAT_LIB 1.25 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.

5.35.2.5	References 

 WIPP PA (Performance Assessment) 1995.  "Requirements Document & Verification and Validation Plan for CAMDAT_LIB Version 1.22" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28109.
 WIPP PA (Performance Assessment) 1995.  "Validation Document for CAMDAT_LIB Version 1.22" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #28112.
 WIPP PA (Performance Assessment) 1995.  "Implementation Document for CAMDAT_LIB Version 1.25" (document Version 1.25).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #525734.
 WIPP PA (Performance Assessment) 1996.  "Validation Document for CAMDAT_LIB Version 1.22" (document Version 1.10).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #37497.
 WIPP PA (Performance Assessment) 2003.  "Analysis Report for the OpenVMS 7.3-1 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #525277.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of CAMDAT_LIB 1.25 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 31, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543448.

CAMSUPES_LIB

This section presents the regression test results for the CAMSUPES_LIB software library.  The CAMSUPES_LIB library is a collection of routines that perform system-dependent functions and allocate memory for arrays at run time for FORTRAN-77 programs.  The system dependent functions provide a uniform interface to necessary operating system functions that are not included in the ANSI FORTRAN-77 standard.  The purpose of the memory management routines is to allow an applications programmer to write standard, readable FORTRAN-77 code making efficient use of memory resources. 

5.35.3.1	Introduction

CAMSUPES_LIB 2.18 was used to support the CCA [1, 4].  CAMSUPES_LIB 2.18 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of two Test Cases (1 and 2) met the acceptance criteria defined in the RD/VVP for CAMSUPES_LIB 2.18 [6].  As a consequence of the upgrade to OpenVMS 7.3-1, CAMSUPES_LIB was re-compiled on the ES40 to create Version 2.22 [2].  No changes were made to the CAMSUPES_LIB source code.  The Implementation Document (ID) for CAMSUPES_LIB 2.22 documents the build of CAMSUPES_LIB 2.22 [5].

In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from CAMSUPES_LIB 2.22 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of CAMSUPES_LIB 2.22 run on a DEC Alpha 2100 with OpenVMS 6.1.  In June 2003, the EPA completed a report documenting the Agency's conclusions with respect to the migration and verification of CAMSUPES_LIB 2.22 on those operating systems [7].  In January 2003, two new hardware systems were added to conduct PAs for the WIPP; a Compaq ES45 and a Compaq Alpha 8400, which are both running OpenVMS 7.3-1 [8, 9].  In March 2004, the Agency completed a report documenting the Agency's approval of CAMSUPES_LIB 2.22 on the Compaq Alpha ES45 and 8400, which were both running OpenVMS 7.3-1 [10].  CAMSUPES_LIB 2.22 was used to support the 2004 CRA.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [11].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2 [11].  Because of these changes in the operating system and the addition of a new computing platform, regression testing has been conducted for CAMSUPES_LIB 2.22 to ensure that it continues to function correctly.
The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to CAMSUPES_LIB 2.22 running on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2.

5.35.3.2	Test Methodology

The tests for this software library comprised the two test cases described in the Verification and Validation Plan for CAMSUPES_LIB Version 2.20 (document Version 1.01) (VVP) [3].  The VMS DIFFERENCE command was used to compare regression test results from CAMSUPES_LIB 2.22 run on the Compaq ES40, ES45, and ES47 with OpenVMS 8.2 to results from the validation tests of CAMSUPES_LIB 2.22 run on the Compaq ES40 with OpenVMS 7.3-1 [12].

5.35.3.3	Test Results

The results of the tests referenced above are that only very minor differences (e.g., spacing, version number) were found for the two test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.

5.35.3.4	The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of CAMSUPES_LIB 2.22.  The Agency concludes that CAMSUPES_LIB 2.22 meets the acceptance criteria in the RD/VVP and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.

5.35.3.5	References 

 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 Analysis Plan (AP-065) 2000.  "Regression for the Upgrade to OpenVMS Version 7.2 on the WIPP DEC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA (Performance Assessment) 1995.  "Verification and Validation Plan for CAMSUPES_LIB Version 2.20" (document Version 1.01).  Sandia National Laboratories.   Sandia WIPP Central Files WPO #51633.
 WIPP PA (Performance Assessment) 1995.  "Validation Document for CAMSUPES_LIB Version 2.18" (document Version 1.02).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #51634.
 WIPP PA (Performance Assessment) 1995.  "Implementation Document for CAMSUPES_LIB Version 2.22" (document Version 1.04).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #525738.
 WIPP PA (Performance Assessment) 1995.  "Requirements Document and Verification and Validation Plan for CAMSUPES_LIB Version 2.18" (document Version 1.00).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #27744.
 EPA 2003.  "Review of WIPP Performance Assessment Computer Code Migration, June 10, 2003."  Environmental Protection Agency.
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 EPA 2004.  "Review of WIPP Performance Assessment Computer Code Migration, March 31, 2004."  Environmental Protection Agency.
 WIPP PA  -  2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of CAMSUPES_LIB 2.22 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated May 18, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543449.

PLT_LIB

This section presents the regression test results for the PLT_LIB software library.  PLT_LIB is a general device-independent plotting package that performs basic graphics operations. 

5.35.4.1	Introduction
 
PLT_LIB 1.02 was used to support the CCA.  The Implementation Document (ID) for PLT_LIB documents the build of PLT_LIB 1.02 [1].  PLT_LIB 1.02 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of three Test Cases met the acceptance criteria defined in the VVP for PLT_LIB 1.02 [2].  As a consequence of the upgrade to OpenVMS 7.3-1, PLT_LIB was re-compiled on the ES40 to create Version 2.04.  No changes were made to the PLT_LIB source code.
 
In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from PLT_LIB 2.04 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of PLT_LIB 1.02 run on a DEC Alpha 2100 with OpenVMS 6.1 [3].  In January 2003, regression testing of PLT_LIB 2.04 was completed on the Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [4, 5].  PLT_LIB 2.04 was used to support the 2004 CRA.

In 2006, an error was found in PLT_LIB 2.04 and is described in the Software Problem Report as "The plot file name appears on the Title line near the top of the plot file.  When the Title line is longer than 80 characters (i.e., when the file name with directory is long), two blank lines are written before the Title lines.  The blank lines do not cause a problem when plotting the Postscript or Adobe plot files.  This problem affects all plot codes that link to PLT_LIB" [6].  The SPR also notes that, "It has no effect on the analysis."

A Change Control Form was also completed in 2006 indicating that, "this library will be recompiled on OpenVMS 8.2, so that it will be compatible with codes that are compiled on the new system and link in the library.  The library will be tested on all machines in the Alpha cluster, including the new ES47.  A correction will be made to PLT_LIB to fix a bug that causes two blank lines to be output to the plot file if the plot name is too long.  Note that PLT_LIB Version 2.05 was built, but will not be qualified" [7].  The revised version of PLT_LIB is 2.06.

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [8].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 has been upgraded to OpenVMS 8.2 [8].  Because of these changes in the operating system and the addition of a new computing platform, regression testing has been conducted for PLT_LIB 2.06 to ensure that it continues to function correctly.

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to PLT_LIB 2.06 running on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2.

5.35.4.2	Test Methodology

The tests for this software library comprised the three test cases described in the Validation Document for PLT_LIB 2.04 (VD) [2].  Because PLT_LIB is a library, not a code, it is the object library (.OLB), rather than an executable (.EXE), that is of interest.  However, an executable is generated for the driver program that tests the library functionality.  Regression testing results from PLT_LIB 2.06 run on the ES47 with OpenVMS 8.2 were compared to results from the validation tests of PLT_LIB 2.04 run on the ES40 with OpenVMS 7.3-1; and regression testing results from PLT_LIB 2.06 run on the ES40 and ES45 with OpenVMS 8.2 were compared to the regression testing results from PLT_LIB 2.06 run on the ES47 with OpenVMS 8.2 [9].

5.35.4.3	Test Results

The results of the tests referenced above are that only very minor differences (e.g., spacing, version number) were found for the three test cases.  The comparison found that all differences in the output are limited to code run date and time, platform names, system version numbers, the directory, and file names.

5.35.4.4	The Agency's Conclusions

The Agency found that all differences in output are acceptable; namely, that the differences are limited to code run date and time, platform names, system version numbers, the directory, and file names.  The comparison found no differences in the numerical output of PLT_LIB 2.06.  The Agency concludes that PLT_LIB 2.06 meets the acceptance criteria in the VD and is validated for WIPP PA use on the ES40, ES45, and ES47 with OpenVMS 8.2.

5.35.4.5	References 

 WIPP PA (Performance Assessment) 1995.  "Implementation Document for PLT_LIB 2.06 Version 1.06" (document Version 1.04).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #543405. 
 WIPP PA (Performance Assessment) 1995.  "Validation Document for PLT_LIB Version 1.02" (document Version 1.02).  Sandia National Laboratories.  Sandia WIPP Central Files WPO #45406.
 Analysis Plan (AP-042) 1998.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA 2006.  "Software Problem Report 06-001 for PLT_LIB 2.04."  Sandia National Laboratories.  Sandia WIPP Central Files ERMS #542992. 
 WIPP PA 2006.  "Change Control Form for PLT_LIB 2.04 [proposed 2.06]."  Sandia National Laboratories.  ERMS #543391.
 WIPP PA 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of PLT_LIB 2.06 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543599.

SDBREAD_LIB 

This section presents the test results for the SDBREAD_LIB software library.  The SDBREAD_LIB library is a collection of routines that allow the calling program to retrieve parameter information from the parameter database (PAPDB).

5.35.5.1	Introduction
 
SDBREAD_LIB 3.10 was used to support the CCA.  SDBREAD_LIB 3.10 was validated in January 1996 on a DEC Alpha 2100 with OpenVMS 6.1 by demonstrating that the results of five Test Cases met the acceptance criteria defined in the VVP for SDBREAD_LIB 3.10 [1]. 

As a consequence of the upgrade to OpenVMS 7.3-1, SDBREAD_LIB 3.10 was re-compiled on the ES40 to create Version 3.11 [2].  No changes were made to the SDBREAD_LIB 3.10 source code.  In order to test new operating systems that were added in 2002 - 2003 (Section 1), regression test results from SDBREAD_LIB 3.11 run on the ES40 with OpenVMS 7.3-1 were compared to results from the validation tests of SDBREAD_LIB 3.10 run on a DEC Alpha 2100 with OpenVMS 6.1 [3].  In January 2003, regression testing of SDBREAD_LIB 3.11 was completed on the Compaq ES45 and a Compaq Alpha 8400, which were both running OpenVMS 7.3-1 [4, 5].  SDBREAD_LIB 3.11 was used to support the 2004 CRA.

A Change Control Form was also completed in 2006 that details the revision from Version 3.11 to 3.12 and indicates that "this library will be recompiled on OpenVMS 8.2, so that it will be compatible with codes that are compiled on the new system and link in the library.  The library will be tested on all machines in the Alpha cluster, including the new ES47" [6].  The Implementation Document (ID) for SDBREAD_LIB documents the build of SDBREAD_LIB 3.12 [7].

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines [8].  In addition to the hardware upgrades, the operating system OpenVMS 7.3-1 was upgraded to OpenVMS 8.2 [8].  Because of these changes in the operating system and the addition of a new computing platform, regression testing was conducted for SDBREAD_LIB 3.12 to ensure that it continues to function correctly.

In 2011 and 2012, SNL moved the WIPP PA parameter database from SQL Server to MySQL; therefore, SDBREAD_LIB 3.12 was updated to SDBREAD_LIB 4.00 to allow the PA codes to access the new database functions [10].  SDBREAD_LIB 4.00 was used during the 2014 CRA calculations.

The discussion below documents the test methodology, regression test results, and the Agency's conclusions with respect to SDBREAD_LIB 4.00 running on the Compaq ES45 and ES47 machines with OpenVMS 8.2.

5.35.5.2	Test Methodology

The tests for this software library comprised the five test cases described in the Verification and Validation Plan & Validation Document (VVP/VD) for SDBREAD_LIB Version 3.10 [1], which is still in effect for Version 3.12.  Because SDBREAD_LIB is a library, not a code, it is the object library (.OLB), rather than an executable (.EXE), that is of interest.  However, an executable is generated for the driver program that tests the library functionality.

Regression testing results from SDBREAD_LIB 3.12 run on the ES47 with OpenVMS 8.2 were compared to results from the validation tests of SDBREAD_LIB 3.11 run on the ES40 with OpenVMS 7.3-1; regression testing results from SDBREAD_LIB 3.12 run on the ES40 and ES45 with OpenVMS 8.2 were compared to the regression testing results from SDBREAD_LIB 3.12 run on the ES47 with OpenVMS 8.2 [9].

To qualify SDBREAD_LIB 4.00, SNL developed two new test cases, Test Cases 11 and 12, to qualify access to the new MySQL parameter database, PAPDB Version 2.00 [10].  These test cases evaluate opening the new database, accessing parameter values, tracking access activities, and closing the database.  Database access results are recorded in output files and then manually compared to the MySQL database values to confirm proper performance of the SDBREAD_LIB 4.00 routines [10].  
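
The kind of check performed by these test cases can be sketched as follows.  This Python fragment is illustrative only; the connection settings, table name, column names, and parameter names are assumptions, and the fragment is not the SDBREAD_LIB Fortran interface or SNL's test driver.

    import mysql.connector  # MySQL Connector/Python

    # Hypothetical connection settings and schema; the real PAPDB 2.00 layout
    # is defined in the SNL documentation and is not reproduced here.
    conn = mysql.connector.connect(host="localhost", user="pa_user",
                                   password="********", database="papdb")
    cur = conn.cursor()

    def fetch_parameter(material: str, parameter: str):
        """Open/query step analogous to the database access exercised by the tests."""
        cur.execute(
            "SELECT value FROM parameters WHERE material = %s AND parameter = %s",
            (material, parameter))
        row = cur.fetchone()
        return None if row is None else row[0]

    # Manual-comparison step: the value written to the test output file is
    # checked against the value stored in the database (names are hypothetical).
    expected = 1.0e-19
    actual = fetch_parameter("S_HALITE", "PRMX_LOG")
    print("match" if actual == expected else "mismatch")

    cur.close()
    conn.close()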

5.35.5.3	Test Results

Evaluation of Test Cases 11 and 12 verifies that the SDBREAD_LIB 4.00 routines successfully satisfy the acceptance criteria and properly access the PAPDB 2.00 parameter database ([10] Section 7.0).

5.35.5.4	The Agency's Conclusions

The Agency's review found that SDBREAD_LIB 4.00 satisfies the acceptance criteria and performs the functional requirements [10].  The Agency concludes that SDBREAD_LIB 4.00 meets the acceptance criteria in the VD and is validated for WIPP PA use on the ES45 and ES47 with OpenVMS 8.2.

5.35.5.5	References 

 WIPP PA (Performance Assessment) 1995.  "Verification and Validation Plan for SDBREAD_LIB 3.10."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #519728. 
 WIPP PA 2002.  "Change Control Form for SDBREAD_LIB 3.10 [proposed 3.11]."  Sandia National Laboratories.  ERMS #524653.
 Analysis Plan (AP-042) 2002.  "Regression for the Upgrade to OpenVMS Version 7.1 on the WIPP COMPAC Alpha Cluster."  Sandia National Laboratories. 
 WIPP PA  -  "Analysis Report for the ES45 Regression Test, March 6, 2003."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #530290.
 WIPP PA  -  "Analysis Report for the 8400 Regression Test," Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #527280.
 WIPP PA 2006.  "Change Control Form for SDBREAD_LIB 3.11 [proposed 3.12]."  Sandia National Laboratories.  ERMS #542998.
 WIPP PA (Performance Assessment) 2006.  "Implementation Document for SDBREAD_LIB 3.12."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #543027. 
 WIPP PA 2006.  "Installation of OpenVMS Version 8.2-1 on the WIPP Alpha Cluster and Regression Testing, dated March 16, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files WPO #542680.
 WIPP PA  -  "Regression Testing Report of SDBREAD_LIB 3.12 on the Compaq ES40, ES45, and ES47 Platforms Using OpenVMS 8.2 dated June 6, 2006."  Sandia National Laboratories.  Sandia WIPP Central Files.  ERMS #543599.
 WIPP PA  -  "Verification and Validation Plan/Validation Document for SDBREAD_LIB 4.00, dated March 12, 2012." Sandia National Laboratories. Sandia WIPP Central Files. ERMS #556461.

Codes Used to Support the Inventory Report

ORIGEN2

5.36.1.1	Introduction
	
The ORIGEN2, Version 2.2 software is an isotope generation and depletion code that uses the matrix exponential solution method.  The software was developed and distributed by the Radiation Safety Information Computational Center (RSICC) at the Oak Ridge National Laboratory [1].  The software was adopted into the Los Alamos National Laboratory  -  Carlsbad Operation (LANL-CO) Software Quality Assurance program to decay and report the inventory of radionuclides on a transuranic (TRU) waste stream basis to a common base year within the Comprehensive Inventory Database (CID).  The CID (Section 5.31.3) stores and controls access to TRU waste inventory data required for the WIPP PA calculations.

ORIGEN2, Version 2.2 is a versatile computer code system for calculating the buildup, decay, and processing of radioactive materials [1].  It is written in the FORTRAN programming language.  The software was originally developed on a large IBM mainframe computer.  However, developments and enhancements have subsequently been made on a Pentium PC running Microsoft(R) Windows 2000.  The software may be executed on any Intel Pentium or equivalent processor PC machine capable of running Microsoft(R) Windows.

The original ORIGEN computer code was developed in the early 1970s [2].  ORIGEN2, released in 1980, incorporated updates of the reactor models, cross sections, fission product yields, decay data, and decay photon data, as well as the source code.  ORIGEN2, Version 2.1 was released in August 1991, replacing ORIGEN2.  In June 1996, the source code was not modified, but the code was recompiled with the Lahey F77-EM/32 V5.10 compiler to replace the Lahey F77 V4.00 compiled executables, which were distributed with the release of ORIGEN2, Version 2.1.  This was done because the Lahey F77 V4.00 executables were incompatible with Microsoft(R) Windows 95.  The Lahey F77-EM/32 V5.10 executables could be run in a DOS window of Microsoft(R) Windows 95 or greater.  In a May 1998 update, the installation procedure was simplified and files in the sample problems directories were reorganized so that the PC output generated at ORNL could be distributed in a separate subdirectory for each test case.  Due to a calculation discrepancy in the mass of fission products, ORIGEN2, Version 2.1 was replaced by Version 2.2 in June 2002.

The LANL-CO TRU Waste Inventory program qualified the ORIGEN2, Version 2.2 software under the SNL-CPG software quality assurance program for the initial 2004 CRA of the WIPP.  As presented in Section 1.0, SNL-CPG uses NP19-1, Software Requirements, to qualify all of its production software.

The following software qualification documents have been developed by LANL as part of DOE's life-cycle management process for software used to support the PA (see Section 2.1).

Requirements Document for ORIGEN 2, Version 2.2.

Design and Implementation Document for ORIGEN2, Version 2.2.

User's Manual for ORIGEN2, Version 2.2

ORIGEN2, Version 2.2 Verification and Validation Plan and Validation Document

Software Installation and Checkout Forms (various)

ORIGEN2, Version 2.2 is installed on a production machine platform by executing the compressed Microsoft(R) Windows file, C371DOS3.EXE, on the distribution compact disk (CD) creating a subdirectory C:\origen22.  ORIGEN2, Version 2.2 software has not been modified in any fashion for use by the LANL-CO TRU Waste Inventory Program.

The Microsoft(R) Windows executables were created at the RSICC at ORNL on a Pentium IV in a DOS window of Microsoft(R) Windows 2000 with the Lahey/Fujitsu FORTRAN 95 Compiler, Release 5.50d.  According to the User's Manual for ORIGEN2, Version 2.2 [3], the compilation generated warning messages but no fatal errors.  DOE also tested the executables under Microsoft(R) Windows 95.

The official version of ORIGEN2, Version 2.2 source code is listed in Appendix 1 of the Design and Implementation Document (DI) [4].  Appendix 2 in the DI is the ORIGEN2, Version 2.2 variable dimension data file that defines the problem size of the main routine.  Appendix 3 of the DI contains the relevant data library file (DECAY.LIB) supplied with the ORIGEN2, Version 2.2 distribution CD.

The Requirements Document for ORIGEN2, Version 2.2 (RD), identifies the requirements that ORIGEN2, Version 2.2 must satisfy under four categories [2]: (1) functional requirements, (2) external interface requirements, (3) design constraints, and (4) attributes.  Table 5.36-1 presents a summary of the specific requirements and the implementation methods to meet the RD.

Table 5.36-1.  Software Requirements and Implementation Methods

Functional Requirements

F1
  Requirement:  The software output must provide the version of the software being used.
  Design/Implementation Method:  ORIGEN2, Version 2.2 writes the version of the software to the output file(s).

F2
  Requirement:  The software output must provide the date and the time that the software is executed.
  Design/Implementation Method:  ORIGEN2, Version 2.2 writes the date and time the software was executed in the output file(s).

F3
  Requirement:  The software input must allow the user to specify a unique case title that is then printed to the output file.
  Design/Implementation Method:  ORIGEN2, Version 2.2 allows the user to enter a unique case title which is subsequently written to the output file(s).

F4
  Requirement:  The software data libraries must include all the radionuclides required to be tracked for the WIPP PA.  If the data libraries are missing any of the required radionuclides, the software must provide the ability to extend, update, and correct the libraries.
  Design/Implementation Method:  The decay data library listing (Appendix 3) lists all radionuclides that ORIGEN2, Version 2.2 tracks and will be verified against PA data needs.  The library is in an American Standard Code for Information Interchange (ASCII) text file, which allows it to be modified.

F5
  Requirement:  The software must have the ability to indicate that there were errors and the nature of the error in the execution of any case.
  Design/Implementation Method:  ORIGEN2, Version 2.2 has the ability to generate error messages and describes the associated nature of the error upon execution.

F6
  Requirement:  The software must employ a matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients to simulate the decay and buildup of radionuclides.  ORIGEN-S functional requirement modified to state that, "The software must perform accurate radionuclide decay and buildup calculations" [10].
  Design/Implementation Method:  ORIGEN2, Version 2.2 is based on the matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients to simulate accurate decay and buildup of radionuclides.

External Interface Requirements - User Interface

E1 (SI1)
  Requirement:  The software input and output files must be in standard ASCII text file format with data in the format required by and generated by executing ORIGEN2, respectively.  E1 same as SI1 [10].
  Design/Implementation Method:  ORIGEN2, Version 2.2 input and output files are in ASCII text file format.

(SI2)
  Requirement:  The software must provide an application program interface (API) which allows calling applications to execute a radioactive decay scenario.  The results of the execution should be provided in an ASCII text-based file which may be passed by the calling application.  New requirement; replaces the TransOrigen software.
  Design/Implementation Method:  SCALE6/ORIGEN-S contains an API that allows an external program to execute ORIGEN-S and outputs the results to the calling program in ASCII format ([10] Section 3.2).

Design Constraints - Hardware Limitations

D1
  Requirement:  The software must have the ability to be executed on an Intel Pentium or equivalent processor PC machine capable of running Microsoft(R) Windows 95 or greater.
  Design/Implementation Method:  ORIGEN2, Version 2.2 is designed to be operated within a DOS operating environment, which is included in Microsoft(R) Windows 95 or greater.

Attributes - Security

S1
  Requirement:  The software will only be available for execution to ORIGEN2, Version 2.2 licensed users within the TRU Waste Inventory Program and will reside on Intel Pentium processor machines or equivalent running Microsoft(R) Windows 95 or greater.  Physical access to the software will be restricted to the Software Sponsor, the user(s), and the Software Configuration Management Coordinator (SCMC).
  Design/Implementation Method:  The Software Sponsor will designate user(s) of the software.  ORIGEN2, Version 2.2 will be installed on ORIGEN2, Version 2.2 licensed users' Intel Pentium processor machines running Microsoft(R) Windows 95 or greater.  Physical access to the software will be restricted to the Software Sponsor, the user(s), and the SCMC.

Attributes - Maintainability

A1
  Requirement:  The requirements related to maintainability that will be followed are set forth in LCO-QP19-1, Software Quality Assurance, and LCO-QPD-02, LANL-CO Software Quality Assurance Plan.
  Design/Implementation Method:  Maintainability of ORIGEN2, Version 2.2 will be implemented by following the requirements of LCO-QP19-1 and LCO-QPD-02.

Transferability and Conversion (Portability)

P1
  Requirement:  ORIGEN2, Version 2.2 must be installed on approved licensed users' machines where the workstation configuration standard must be Microsoft(R) Windows XP.
  Design/Implementation Method:  ORIGEN2, Version 2.2 will only be installed on approved users' machines where the workstation configuration standard is Microsoft(R) Windows XP or greater.

Note: Requirements in ( ) are additions or modifications related to the ORIGEN-S software requirements [10].

The Verification and Validation Plan and Validation Document (VVP/VD) describes only the decay and buildup features to be provided by the ORIGEN2, Version 2.2 software and does not verify and validate the reactor simulation of the software [5].

While the ORIGEN2, Version 2.2 software has many capabilities, the qualification for use in the WIPP LANL-CO TRU Waste Inventory Program is limited to radionuclide buildup and decay calculations.  DOE's qualification does not encompass any other code functionality or any other use of ORIGEN2, Version 2.2.

The software is only used to calculate the inventory of radionuclides on a waste stream basis decayed to common base years, as defined by SNL [6].  

In 2011, Los Alamos National Laboratory (LANL) qualified the SCALE6/ORIGEN-S code for use in WIPP PA inventory calculations.  EPA reviewed LANL's qualification in 2014 as part of its 2014 CRA review.  LANL migrated to ORIGEN-S because ORIGEN2 was no longer supported by RSICC at ORNL ([8] Section 1).  The matrix exponential expansion model of the ORIGEN code is unaltered in ORIGEN-S [8]; therefore, the program description provided above is still valid.

As noted by LANL, "All features of ORIGEN are retained, expanded, or supplemented within the ORIGEN-S module."  The main difference between ORIGEN-S and ORIGEN2 is that the decay library "... has been updated, contains current half-life data and associated radionuclide branching function" [8].  For some radionuclides and decay time periods, these changes may cause results to exceed the acceptance criteria, which will require additional review and explanation. 

Tables 5.36-1 and 5.36-2 are modified to document changes in software requirements [10] and requirements tested [8].  The test methodology, results, and Agency findings pertaining to the qualification of SCALE6/ORIGEN-S are discussed below.


5.36.1.2	Test Methodology

A number of the requirements that the ORIGEN2, Version 2.2 software must satisfy are qualitative in nature.  The qualitative requirements are validated through manual inspection of the test case output files in accordance with the acceptance criteria established for each test case in Section 4.0 of the VVP/VD [5].  The principal type of calculation performed by the ORIGEN2, Version 2.2 software relevant to the CID involves decaying radionuclides of TRU waste streams based on a reference assay year to common base years.  These quantitative calculations are accomplished within the ORIGEN2, Version 2.2 software through various numerical methods, the key being the matrix exponential solution method.  The quantitative requirement for the ORIGEN2, Version 2.2 software to perform decay and buildup calculations was validated by comparison of the ORIGEN2, Version 2.2 results with those calculated by the MicroShield(R) Version 6.02 software.  Quantitative comparisons of the results calculated by the two software packages were accomplished by using Microsoft Excel spreadsheets to check whether the acceptance criteria have been satisfied.
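
The matrix exponential solution that underlies these decay and buildup calculations can be illustrated with a small example.  The Python sketch below solves N(t) = exp(At)N(0) for a short, assumed three-member decay chain with approximate half-lives; it is not the ORIGEN2 implementation and does not use its data libraries.

    import numpy as np
    from scipy.linalg import expm

    # Assumed chain: Pu-241 -> Am-241 -> Np-237, with approximate half-lives in years.
    half_lives = np.array([14.3, 432.6, 2.144e6])
    lam = np.log(2.0) / half_lives          # decay constants (1/yr)

    # Decay/buildup matrix A for dN/dt = A N: losses on the diagonal,
    # ingrowth from the parent on the sub-diagonal.
    A = np.diag(-lam)
    A[1, 0] = lam[0]
    A[2, 1] = lam[1]

    N0 = np.array([1.0, 0.0, 0.0])          # initial inventory (arbitrary units)

    for t in (37.0, 50.0, 100.0):           # decay times used in Test Cases 1 through 3
        Nt = expm(A * t) @ N0               # matrix exponential solution N(t) = exp(At) N(0)
        print(t, Nt)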

The primary methods of software test result validation utilized by LANL were a combination of manual inspection of the output files and comparison of the test case results to those calculated by independently developed and validated decay and buildup software.

MicroShield Version 6.02

MicroShield Version 6.02 is Windows-based software developed and maintained by Grove Engineering (Framatome ANP, Inc., d.b.a. Grove Engineering).  MicroShield Version 6.02 is used to analyze shielding and estimate exposure from gamma radiation.  Several of the specific uses of this type of analysis include designing shields and containers, assessing radiation exposure to people and equipment, selecting temporary shielding for maintenance tasks, inferring source strength for waste characterization and disposal from external gamma radiation measurements, minimizing exposure to people, and teaching principles of radiation and shielding.

MicroShield Version 6.02 verification and validation was conducted in conformance with Grove Engineering's Quality Assurance Program implementing the requirements of 10 CFR 50 Appendix B, as delineated in the MicroShield Quality Assurance Plan.  The Plan also implements ASME NQA 2a-1990, Part 2.7, Quality Assurance Requirements of Computer Software for Nuclear Facility Applications.

Microsoft Excel and Access Applications

Two Microsoft Excel applications were used for the ORIGEN verification and validation activities.  The first application, TransOrigen.xls, is a pre- and post-processor Excel workbook application for the ORIGEN2, Version 2.2 software.  The application provides an interface to process TRU waste stream data using the ORIGEN2, Version 2.2 software by facilitating the creation of input files, running the software, and post-processing the output files.  Documentation of the structure, functionality, and operations of the TransOrigen workbook application is provided in Section 3.0 of the VVP/VD [5].  The TransOrigen application also utilizes a Microsoft Access database file, TransOrigen.mdb, in its data transfer between the applications.  The second Excel application, GN_MS_Difference.xls, involves an Excel workbook developed to facilitate quantitative comparison of the results calculated by the ORIGEN2, Version 2.2 and the MicroShield Version 6.02 software for Test Cases 1 through 3 of the VVP/VD [5].  It consists of:

 The ORIGEN2, Version 2.2 decayed activity concentrations in units of Ci/m3
 The MicroShield Version 6.02 decayed activity concentrations in units of Ci/m3
 The absolute difference between the ORIGEN2, Version 2.2 and MicroShield Version 6.02 calculated activity concentrations in units of Ci/m3
 The relative percent difference (RPD) between the ORIGEN2, Version 2.2 and MicroShield Version 6.02 calculated activity concentrations
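
A minimal sketch of the comparison performed by the GN_MS_Difference.xls workbook is shown below.  The Python fragment assumes two short lists of decayed activity concentrations and an illustrative acceptance criterion; the actual criterion, radionuclides, and workbook layout are defined in the VVP/VD.

    # Decayed activity concentrations in Ci/m3 for the same radionuclides and
    # decay time, one value per radionuclide (illustrative numbers only).
    origen2 = [4.32e-1, 9.87e-3, 1.25e-5]
    microshield = [4.31e-1, 9.90e-3, 1.26e-5]

    TOLERANCE_PERCENT = 1.0   # illustrative acceptance criterion, not the VVP/VD value

    for o, m in zip(origen2, microshield):
        abs_diff = abs(o - m)                        # absolute difference, Ci/m3
        rpd = 100.0 * abs_diff / ((o + m) / 2.0)     # relative percent difference
        status = "pass" if rpd <= TOLERANCE_PERCENT else "review"
        print(f"{o:.3e}  {m:.3e}  {abs_diff:.3e}  {rpd:.3f}%  {status}")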

The inventory of radionuclides must be tracked for WIPP PA calculations and for determining the waste unit factor on a waste stream basis and/or on a WIPP-scale basis [6].  The key PA radionuclides are listed in Table 2-1 of the VVP/VD [5] with the corresponding six-digit numerical representation used by the ORIGEN2, Version 2.2 software.

Test Cases  

Testing of ORIGEN2, Version 2.2 software was performed on a PC platform utilizing an Intel(R) Pentium(R) M Processor 2.26 GHz/2.21 GHz with Microsoft Windows XP Professional, Version 2002 SP2 as the operating system.  Computer programs Microsoft Office Excel 2003 SP2 and Microsoft Office Access 2003 SP2 were also utilized.

The VVP/VD documents eight test cases. Test Cases 1 through 3 test the ability of the software to perform decay and buildup calculations for 12 radionuclides in the TRU waste over the range of decay times required by the TRU Waste Inventory Program.  Test Cases 4 through 6 test the ability of the ORIGEN2, Version 2.2 software to produce identical results across various PC platforms (Intel Pentium processor machines or equivalent running Windows 95 or greater).  These test cases require the thermal reactor simulation of the ORIGEN2, Version 2.2 software (i.e., O2_THERM.EXE), which was used for Test Cases 1 through 3. Test Case 7 tests the ability of the ORIGEN2, Version 2.2 software to indicate errors. Test Case 8 tests the ability of ORIGEN2, Version 2.2 software to include all the radionuclides required to be tracked for WIPP. Descriptions of the test cases are provided in Sections 4.1 through 4.8 of the VVP/VD.  Table 5.36-2 provides a summary of the requirements coverage by test case.

In 2011, LANL qualified ORIGEN-S Version 2.2 using the techniques described above, which include manual inspection of the output files and comparison of test results to those calculated independently (regression testing).  In this case, ORIGEN-S results are compared to ORIGEN2 results calculated on the same computer platform (Intel(R) Pentium(R) III Xeon Processor @ 2.60 GHz) [8].  Modifications to the software requirements and test case coverage are listed in Table 5.36-2.  LANL used Microsoft Excel spreadsheets to compare ORIGEN-S and ORIGEN2 Version 2.2 results.

Qualification of ORIGEN-S Version 2.2 consists of seven test cases (Test Case 8 was removed because Test Cases 1 through 4 test the F4 requirement in Table 5.36-2):

Test Cases	Scope (as described in the VVP/VD) [8]

1	Ensure the ORIGEN-S module contains all of the WIPP PA radionuclides.
2 and 3	Verify that ORIGEN-S correctly decays PA radionuclide CH and RH activities for 100, 1,000, and 10,000 years.
4	Verify that ORIGEN-S correctly decays the sum of CH and RH (total) activities for 169 radionuclides for 100, 1,000, and 10,000 years.
5 and 6	Verify proper operation of ORIGEN-ARP and PlotOPUS.  LANL results are compared to RSICC results.
7	Verify that ORIGEN-S reports errors properly.

LANL also notes, "In cases where the acceptance criteria are exceeded, an explanation and justification will be provided." 
  
Table 5.36-2.  ORIGEN2 Version 2.2 Requirements Coverage by Test Case

Requirement No. / Description, with Test Case coverage

F1	The software output must provide the version of the software being used.
	Coverage:  Test Cases 1 - 6 (X); Test Case 7 (*).

F2	The software output must provide the date and the time that the software is executed.
	Coverage:  Test Cases 1 - 6 (X); Test Case 7 (*).

F3	The software input must allow the user to specify a unique case title that is then printed to the output file.
	Coverage:  Test Cases 1 - 3 (X); Test Cases 4 - 7 (*).

F4	The software data libraries must include all the radionuclides required to be tracked for the WIPP PA.  If the data libraries are missing any of the required radionuclides, the software must provide the ability to extend, update, and correct the libraries.
	Coverage:  Test Cases 1 - 4 (*); Test Case 8 (X).

F5	The software must have the ability to indicate that there were errors and the nature of the error in the execution of any case.
	Coverage:  Test Case 7 (X).

F6	The software must employ a matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients to simulate the decay and buildup of radionuclides.  ORIGEN-S functional requirement modified to state that, "The software must perform accurate radionuclide decay and buildup calculations" [10].
	Coverage:  Test Cases 1 - 3 (X); Test Case 4 (*).

E1 (SI1)	The software input and output files must be in standard ASCII text file format with data in the format required by and generated by executing ORIGEN2, Version 2.2, respectively.
	Coverage:  Test Cases 1 - 6 (X); Test Case 7 (*).

(SI2)	The software must provide an application program interface (API) which allows calling applications to execute a radioactive decay scenario.  The results of the execution should be provided in an ASCII text-based output file which may be passed by the calling application.
	Coverage:  Test Cases 1 - 4 (*); Test Case 7 (*).

(D1)	The software must have the ability to be executed on an Intel Pentium or equivalent processor PC machine capable of running Microsoft(R) Windows XP.
	Coverage:  Test Cases 1 - 7 (*).

Note: Requirements in ( ) or coverage marked with * are additions or modifications related to the ORIGEN-S software requirements [8].  Test Case 8 was removed because Test Cases 1 through 4 test the F4 requirement.

5.36.1.3	Test Results

The purpose of Test Cases 1 through 3 is to decay 1 Ci/m3 of each of the 12 radionuclides with the highest concentrations in TRU waste over three time periods (i.e., 37, 50, and 100 years).  The same initial concentration was specified as input to the MicroShield Version 6.02 code, and the results obtained by running ORIGEN2 Version 2.2 and MicroShield Version 6.02 were subsequently compared.

The stated intent of Tests 1 through 3 is to test the ability of the software to perform decay and buildup calculations for 12 radionuclides of TRU waste over the range of decay times required by the TRU Waste Inventory Program.  It is unclear why the comparison was not performed over a longer simulation time; for instance, the verification of TransOrigen discussed in Section 5.36.2 included times of 100, 350, 1,000, 5,000, and 10,000 years.

The predicted discrepancy between ORIGEN2 and MicroShield for Sr-90 was 1.6% at 37 years; at 50 years, the error increased to 4.26%, and at 100 years, it decreased to 2.15%.  Although the differences were within the acceptance criterion of 5%, the relatively short timeframes do not allow an evaluation of how the errors may propagate over the time period of interest.  The Agency raised this concern with DOE on June 10, 2010, and DOE provided the following response:

      In light of the recent technical comment from the EPA regarding ORIGEN2, version 2.2 qualification under LANL-CO Software QA program, and the lack of a 10,000 year decay test, we investigated the LANL-CO ORIGEN2 Verification & Validation Plan and Validation Document (VVP/VD) tests to understand the problem better.
      
      First, we know that EPA recertified the WIPP in 2006.  Under the CRA (1), ORIGEN2, version 2.2 was used to decay the radionuclide inventory.  At that time, it was tracked under the software QA program at SNL-CPG.  SNL-CPG had qualified it after publishing a VVP/VD document of their own (ERMS #535718).  So, we thought it worth comparing the SNL-CPG VVP/VD test cases with those listed in LANL-CO's VVP/VD.  We know that we generally modeled our own VVP/VD from SNL's when we published ours in 2006. 
      
      We found that five test cases are exactly as prescribed by SNL-CPG with no differences in output other than date/time stamping.  In addition to these, three test cases are very similar in their methodology, but use different input parameters and thus have different output.  One additional test case, though, was not performed by LANL-CO (this happens to be the one lacking test that EPA is calling into question).  LANL-CO never ran a specified test which decayed a radionuclide for 10,000 years and compared its results to those from another software package.  SNL-CPG's test case #7 prescribed the decay of 4,076 Ci of Pu-238 for 10,000 years, and compared the results from ORIGEN2 to those of their PANEL code.  Only long-lived (half-lives > 22 years) daughters were compared in the results. 
      
      Upon further investigation, however, we found that two of our test cases (#4 and #6, which involved executing sample problems that came packaged with the ORIGEN2, version 2.2 code) did in fact decay radionuclide data for a range of terms from 0.1 years up to 1,000,000 years, including a 10,000 year term.  When we ran these tests, our results matched exactly those documented with the published sample problems packaged with the ORIGEN2, version 2.2 code.
      
      But in order to address EPA's concern, we decided to execute SNL-CPG's test case #7 on our ORIGEN2, version 2.2 production platform here at LANL-CO.  The results we obtained are exactly equal to those documented in SNL-CPG's VVP/VD.  Attached is the results of the comparison that was performed.

In response to DOE's explanation, the Agency reviewed the code comparison that DOE provided, as well as the relevant sections of SNL's Verification and Validation Plan and Validation Document for ORIGEN2, Version 2.2 [7].  The LANL results for Test Case 7 exactly match those obtained by SNL for the same test problem, and the error falls below the acceptance criterion of 5%.

In the 2011 qualification of ORIGEN-S Version 2.2, Test Cases 1, 2, and 3 showed similar results.  For these test cases, the differences in results were relatively minor and were caused by half-life differences in the new ORIGEN-S program library; EPA agrees with LANL's explanation.  Test Case 4 showed a maximum difference of 0.247% between the ORIGEN-S and ORIGEN2 results, which is reasonable.

Test Case 5 used RSICC Sample Problem 2 to validate that ORIGEN-S properly executes ORIGEN-ARP on LANL computers; the Test Case 5 results were identical to the RSICC results.  Test Case 6 was used to validate the PlotOPUS plotting utilities and produced results that exactly match those generated by RSICC.  Test Case 7 verified the error-handling capability of ORIGEN-S; ORIGEN-S reported errors and identified the nature of the errors properly.

5.36.1.4	The Agency's Conclusions

Based upon a review of the verification activities, the Agency concludes that ORIGEN-S Version 2.2 meets the acceptance criteria and is validated for decay and buildup calculations for the WIPP PA and the annual inventory calculations on LANL computers (Intel(R) Pentium(R) computers with Microsoft Windows Operating System and software).

5.36.1.5	References

 [1]	ORNL 2002.  RSICC Computer Code Collection: ORIGEN2.2, Isotope Generation and Depletion Code Matrix Exponential Method.  CCC-371, Oak Ridge National Laboratory, Oak Ridge, Tennessee.
 [2]	LANL 2007.  Requirements Document for ORIGEN2, Version 2.2.  INV-OGN-01, Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [3]	LANL 2007.  MicroShield 6 User's Manual.  LCO-MIC-03, Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [4]	LANL 2007.  Design and Implementation Document for ORIGEN2, Version 2.2.  INV-OGN-02, Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [5]	LANL 2007.  ORIGEN2, Version 2.2 Verification and Validation Plan and Validation Document - WIPP PA.  INV-OGN-04, Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [6]	Dunagan, S.  2007.  "Sandia's WIPP Inventory Data Needs for Performance Assessment."  Letter to R. Patterson (CBFO), June 18, 2007.  Sandia National Laboratories, Carlsbad, New Mexico.
 [7]	SNL 2004.  ORIGEN2, Version 2.2 Verification and Validation Plan and Validation Document - WIPP PA.  Sandia National Laboratories - Carlsbad Programs Group, Carlsbad, New Mexico.
 [8]	LANL 2011.  ORIGEN-S Verification and Validation Plan and Validation Document, Revision 0.  INV-OGS-03, June 3, 2011.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [9]	LANL 2011.  ORIGEN-S User's Manual, Revision 3.  INV-OGS-02, June 7, 2011.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [10]	LANL 2011.  ORIGEN-S Requirements Document, Revision 1.  May 16, 2011.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.

TransOrigen

NOTE: Functionality incorporated into the CID database software.

  5.36.2.1 Introduction

Please note: In 2014, EPA reviewed LANL's ORIGEN (ORIGEN-S, the newest version) and CID documentation related to the TransOrigen code.  LANL has incorporated all of TransOrigen's functionality into the CID database software; the new CID database software imports input files and exports ORIGEN-S results directly.  The Agency focused its 2014 review on the changes to the CID database and software and has left this section of the report unchanged to preserve historical review information.

TransOrigen is a Microsoft Excel-based automation tool used to efficiently execute the ORIGEN2, Version 2.2 software in a batch sequence (once per waste stream), and compile the output data from all executions into a single spreadsheet.  The input for the execution is exported from the Comprehensive Inventory Database (CID).  TransOrigen writes a series of ORIGEN input files containing the input data.  Following the execution of ORIGEN on each of the files, the output data are transferred into a single "results" sheet within TransOrigen.  The CID subsequently imports these results for reporting purposes.

LANL completed an Analysis Report [1] to document the validity of the unit conversion and data transfer between the CID v.1.00 S.1.00, data version D.7.00 [2], and the ORIGEN2, Version 2.2 software (Section 5.31.1).  The CID was qualified for use as developed software under the LANL-CO Quality Assurance (QA) Program in December 2006, in accordance with the LANL-CO Software Quality Assurance Plan, Revision 1 [3] and Quality Assurance for Developed Software, Revision 0, LCO-QP19-1 [4].  ORIGEN2, Version 2.2 was qualified for use as adopted software under the LANL-CO QA Program in August 2007, in accordance with the LANL-CO Software Quality Assurance Plan, Revision 2 [5] and Software Quality Assurance, Revision 1 [6].

This unit conversion and data transfer activity is accomplished using a utility named "TransOrigen," a Microsoft Excel 2003 workbook file (TransOrigen.xls) designed to accommodate the batch unit conversion and transfer of isotopic data, decay corrected using the ORIGEN2, Version 2.2 software.  TransOrigen employs Excel macros and code modules, written in Visual Basic for Applications (VBA), to perform calculations and transfer data between tabular spreadsheets and ASCII text files, which are the basis for the ORIGEN2, Version 2.2 input and output files.  In addition, TransOrigen makes use of a Microsoft Access 2003 database (TransOrigen.mdb) table to boost processing performance.  Appendix I of the Analysis Report provides a detailed description and application of the use of the workbook [1].

The testing of TransOrigen was performed on a workstation with the following characteristics:

 LANL Property No. PN1146933
 Hardware:  Dell Optiplex GX270
 Intel Pentium(R) 4 CPU - 3.2 GHz
 GB RAM
 212 GB HD
 Operating System: Microsoft(R) Windows XP Professional
 Version 2002 Service Pack 3
 Software: Microsoft Office Excel 2003 SP3
 Microsoft Office Access 2003 SP3
 Comprehensive Inventory Database v.1.00 S.1.00, data version D.7.00 [7]

This activity is performed by the LANL-CO Transuranic (TRU) Waste Inventory Program and provides SNL with decayed waste stream radiological data for the PA calculations.

The Analysis Report [1] was prepared as prescribed by the Analysis Plan for Transuranic Waste Inventory, INV-AP-01, Revision 3 [8], and is written in accordance with Analyses, Revision 3 [9].  The Analysis Report is also prepared in accordance with the current revisions of the LANL-CO Software Quality Assurance Plan, Revision 3 [10] and Software Quality Assurance, Revision 1 [11].  The test methodology, results, and Agency findings pertaining to the qualification of TransOrigen are discussed below.

  5.36.2.2	Test Methodology

The CID maintains radionuclide concentrations (in Ci/m3) on a waste stream basis.  The CID also contains, for each waste stream, the year in which the radionuclide information was analyzed.  SNL requires that the radionuclide activities be decay-corrected to seven different common base years for purposes of running the PA.  To accomplish this, the radionuclide concentrations must be exported from the CID to TransOrigen, decay-corrected by ORIGEN2, Version 2.2 to the specified common base year, and re-imported from TransOrigen for reporting by the CID.  ORIGEN2, Version 2.2 accepts a radionuclide distribution (in grams) for a single waste stream at a time, with a single prescribed decay period (in years), in the form of a *.inp input file.  Therefore, TransOrigen is designed to automate the following:

 Unit conversion from Ci/m3 to g/m3 for each waste stream's radionuclides
 Generating an ORIGEN2, Version 2.2 input file (*.inp) for each waste stream containing the radionuclide parameters
 Processing multiple executions of ORIGEN2, Version 2.2 (one per waste stream) as a batch
 Importing resulting radionuclide data from the ORIGEN2, Version 2.2 output files (*.u11)
 Unit conversion from g/m3 back to Ci/m3 for each waste stream's radionuclides

TransOrigen converts Ci/m3 to g/m3, which are subsequently supplied to ORIGEN2, Version 2.2 as input using the ORIGEN2, Version 2.2 numeric ID.  ORIGEN2, Version 2.2 is used to decay the radionuclides for the prescribed period.  TransOrigen then reads the output file, converts the decayed radionuclides back from g/m3 to Ci/m3, and compiles the results of each execution into a format that can be imported back into the CID.

To validate the unit conversion and transfer of data between the TransOrigen utility and ORIGEN2, Version 2.2, LANL performed a separate validation execution to demonstrate that the pre- and post-processing data transfers function as intended.  The "20081112_Val" folder (included in the electronic media associated with this analysis) contains the files pertaining to this execution.  To demonstrate that the TransOrigen utility functions properly, a set of input data was decayed by ORIGEN2, Version 2.2 for a decay period of 0 (zero) years.  This dataset is essentially the CID data version D.7.00 radionuclide data, with the exception that all generation years were explicitly set to 2007, with decay through the year 2007.  The intent of this execution is to compare the input dataset with the output dataset for equivalency; any discrepancies were to be documented and explained.  The acceptance criterion for validation is that no radionuclide data are lost as a result of the pre- and post-processing transfer.  Exceptions to this criterion are data that are intentionally removed (i.e., an input activity concentration of zero Ci/m3) and data that are lost because of limitations of the TransOrigen utility (i.e., gram values cannot be smaller than 1.00E-24 or larger than 1.00E+38).
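
The unit conversion itself is straightforward decay arithmetic: an activity concentration in Ci/m3 is divided by the nuclide's specific activity (in Ci/g) to obtain g/m3 for the ORIGEN2 input, and the decayed masses are multiplied by the specific activity to return to Ci/m3.  The sketch below illustrates that round trip using illustrative half-life and atomic-mass values; it is not the VBA implementation in TransOrigen.xls.

    # Sketch of the Ci/m3 <-> g/m3 round trip performed by TransOrigen,
    # using illustrative nuclide data.  Not the VBA code in TransOrigen.xls.
    import math

    AVOGADRO = 6.02214e23        # atoms per mole
    CI_TO_BQ = 3.7e10            # disintegrations per second per curie
    SECONDS_PER_YEAR = 3.1557e7

    # Illustrative nuclide data: half-life (years) and atomic mass (g/mol).
    NUCLIDES = {
        "Pu-239": (2.411e4, 239.05),
        "Am-241": (4.326e2, 241.06),
    }

    def specific_activity(nuclide):
        """Specific activity in Ci/g, from the decay constant and atomic mass."""
        half_life_yr, atomic_mass = NUCLIDES[nuclide]
        decay_const = math.log(2.0) / (half_life_yr * SECONDS_PER_YEAR)  # 1/s
        return decay_const * AVOGADRO / (atomic_mass * CI_TO_BQ)

    def ci_per_m3_to_g_per_m3(nuclide, activity_conc):
        return activity_conc / specific_activity(nuclide)

    def g_per_m3_to_ci_per_m3(nuclide, mass_conc):
        return mass_conc * specific_activity(nuclide)

    if __name__ == "__main__":
        grams = ci_per_m3_to_g_per_m3("Pu-239", 1.0)   # 1 Ci/m3 of Pu-239
        back = g_per_m3_to_ci_per_m3("Pu-239", grams)  # round trip
        print(f"1 Ci/m3 of Pu-239 is about {grams:.2f} g/m3; round trip gives {back:.6f} Ci/m3")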

LANL validated the execution of TransOrigen by conducting a technical review of the analysis results.  As part of that analysis, an example waste stream (SR-W027-221H-HET-RH) was selected for detailed inspection for each of the seven decay runs.  The results of this inspection are described in the Analysis Report [1], and the input data for the calculations are given in Table 1 of that report.  Waste stream SR-W027-221H-HET-RH contains 11 radionuclides: Am-241, Np-237, Pu-238, Pu-239, Pu-240, Pu-241, Pu-242, U-234, U-235, U-236, and U-238.  The radionuclide concentrations for the waste stream reside in the CID data version D.7.00 [7].

Seven decay calculations are performed separately and verified to ensure that the radionuclide concentrations are properly converted from TransOrigen to ORIGEN2, Version 2.2.  The decay calculations for the waste stream SR-W027-221H-HET-RH are as follows.

 Decay through year 2007 (baseline inventory date)
 Decay through year 2033 (WIPP closure)
 Decay through year 2133 (100 years from WIPP closure)
 Decay through year 2383 (350 years from WIPP closure)
 Decay through year 3033 (1,000 years from WIPP closure)
 Decay through year 7033 (5,000 years from WIPP closure)
 Decay through year 12033 (10,000 years from WIPP closure)

The first worksheet in the TransOrigen workbook, called "qryOrigenDump_Final," contains the input data.  The computational methodology (implemented in the TransOrigen.xls workbook) involves a 12-step process described in Section 3 of the Analysis Report [1].  Briefly, these steps involve loading the input data into TransOrigen; manipulating the data and extracting the results; removing extraneous numeric formatting; removing duplicate input; and translating the input data into element symbols and the required numeric format.
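
In outline, the automation is a loop over waste streams and base years: compute the decay period from the generation year, write an ORIGEN2 input file for the stream, execute the code, and collect the output for re-import into the CID.  The following control-flow sketch shows only that structure; the input/output helpers are hypothetical placeholders, since the actual *.inp/*.u11 file formats and the ORIGEN2 execution are not reproduced here.

    # Control-flow sketch of the batch sequence TransOrigen automates: one
    # ORIGEN2, Version 2.2 execution per waste stream, decayed to each common
    # base year.  The three helpers below are hypothetical placeholders.
    from pathlib import Path

    BASE_YEARS = [2007, 2033, 2133, 2383, 3033, 7033, 12033]  # common base years
    GENERATION_YEAR = 2007                                    # per the validation run

    def write_origen_input(path: Path, nuclide_grams: dict, decay_years: int) -> None:
        # Placeholder only: the actual input format is defined by ORIGEN2 itself.
        lines = [f"* decay period: {decay_years} years"]
        lines += [f"* {nuclide}: {grams:.6E} g/m3" for nuclide, grams in nuclide_grams.items()]
        path.write_text("\n".join(lines))

    def run_origen2(input_path: Path) -> Path:
        # Placeholder for the ORIGEN2 execution step; returns the expected output name.
        return input_path.with_suffix(".u11")

    def read_origen_output(output_path: Path, undecayed: dict) -> dict:
        # Placeholder: a real parser would read the decayed masses from the *.u11 file.
        return dict(undecayed)

    def run_batch(waste_streams: dict, work_dir: Path) -> dict:
        """waste_streams maps a stream ID to {nuclide: g/m3}; returns decayed results."""
        results = {}
        for stream_id, inventory in waste_streams.items():
            for base_year in BASE_YEARS:
                decay_years = base_year - GENERATION_YEAR
                inp = work_dir / f"{stream_id}_{base_year}.inp"
                write_origen_input(inp, inventory, decay_years)
                out = run_origen2(inp)
                results[(stream_id, base_year)] = read_origen_output(out, inventory)
        return results  # compiled into a single "results" sheet for import by the CID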

  5.36.2.3	Test Results

Overall, the comparison results show no difference between the TransOrigen input and the ORIGEN2, Version 2.2 output, except in the last significant figures, due to rounding and unit conversions between TransOrigen and ORIGEN2, Version 2.2.  Details of the comparison for each of the 11 radionuclides are shown in Figures 1 through 4 in Section 4 of the Analysis Report [1].
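
Conceptually, the equivalency check compares each exported concentration with the corresponding re-imported value, allowing for rounding in the last significant figures and for the documented exceptions (zero activities and gram values outside the range TransOrigen can represent).  A minimal sketch of such a check follows; the relative tolerance and helper names are illustrative assumptions, not taken from the Analysis Report [1].

    # Sketch of the zero-year equivalency check: input and output concentrations
    # should agree except for rounding in the last significant figures,
    # zero-activity entries, and gram values outside 1.00E-24 to 1.00E+38.
    import math

    GRAM_MIN, GRAM_MAX = 1.00e-24, 1.00e38
    REL_TOL = 1.0e-4  # tolerance for last-significant-figure rounding (assumed value)

    def allowed_exception(activity_ci_per_m3, grams_per_m3):
        """Entries that may legitimately be absent from the re-imported data."""
        return activity_ci_per_m3 == 0.0 or not (GRAM_MIN <= grams_per_m3 <= GRAM_MAX)

    def check_equivalency(inputs, outputs, grams):
        """inputs/outputs map (stream, nuclide) -> Ci/m3; grams maps the same keys."""
        discrepancies = []
        for key, before in inputs.items():
            after = outputs.get(key)
            if after is None:
                if not allowed_exception(before, grams.get(key, 0.0)):
                    discrepancies.append((key, before, None))
            elif not math.isclose(before, after, rel_tol=REL_TOL):
                discrepancies.append((key, before, after))
        return discrepancies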

5.36.2.4	The Agency's Conclusions

Based upon a comparison of worksheets and input and output files, the Agency concludes that TransOrigen meets the acceptance criteria and is validated for WIPP PA use.

5.36.2.5	References

 LANL 2008.  TransOrigin Unit Conversion and Data Transfer for the 2007 TRU Waste Inventory.  Analysis Report.  INV-SAR-14.  Los Alamos National Laboratory  -  Carlsbad Operations. 
 LANL 2008.  Software Quality Assurance Plan, Revision 3, LCO-QPD-02, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL  -  CO).
 LANL 2005.  Criteria for the Certification and Recertification of the Waste Isolation Pilot Plant's Compliance With the 40 CFR Part 191 Disposal Regulations, Final Rule, 40 CFR 194, Federal Register, February 9, 1996.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2005.  Software Quality Assurance Plan, Revision 1, LCO-QPD-02, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2007.  Quality Assurance for Developed Software, Revision 0, LCO-QP19-1, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2007.  LANL-CO Software Quality Assurance Plan, Revision 2, LCO-QPD-02, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2008.  Comprehensive Inventory Database, v.1.00, S.1.00, data version D.7.00, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2008.  Analysis Plan for Transuranic Waste Inventory, Revision 3, INV-AP-01, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2008.  Analyses, Revision 3, LCO-QP9-1, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2007.  Software Quality Assurance, Revision 3, LCO-QP19-1, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).
 LANL 2002.  Software Quality Assurance, Revision 1, LCO-QP19-1, Carlsbad, New Mexico.  Los Alamos National Laboratory  -  Carlsbad Operations (LANL - CO).

Comprehensive Inventory Database
 
  5.36.3.1 Introduction

To support the CCA and the 2004 CRA, the DOE compiled an inventory of the TRU waste that was expected to be shipped to and disposed of in the WIPP.  The documents that provide these waste projections are the Transuranic Waste Baseline Inventory Report, Revision 2 [1]; the Transuranic Waste Baseline Inventory Report, Revision 3 [2]; and the Transuranic Waste Baseline Inventory Report - 2004 [3].  The generation of TRU waste inventory reports after the 2004 CRA has been facilitated by the CID.  The CID is a database application that was created using Microsoft(R) Access Data Project(R) (ADP) technology and provides access to a database running on a Microsoft SQL Server(R) 2000 platform.

The CID uses waste stream information provided by the DOE TRU waste generator sites.  This waste stream information includes a waste stream profile that is assigned to a Final Waste Form by the DOE TRU waste sites.  The CID has replaced the Transuranic Waste Baseline Inventory Database (TWBID), Revision 2.1, as the central inventory information repository for tracking all existing and potential TRU waste for use in the CRA-2009 and future CRAs.

The CID is frequently updated and contains all radiological, physical, chemical, and volumetric information for TRU waste generated at DOE complex TRU waste sites that eventually may be shipped to the WIPP.  The information is updated by either electronically importing data or by manual entry.  The waste information is tracked at a waste stream level, rather than on a container-by-container basis.

The following software qualification documents were developed as part of DOE's life-cycle management process for the CID to support their PA (Section 1.0):

 Requirements Document (RD), Requirements Document for the Comprehensive Database [4]
 Verification and Validation Plan (VVP), Verification and Validation Plan for the Comprehensive Inventory Database [5]
 Design & Implementation Document (DID), Design & Implementation Document for the Comprehensive Inventory Database [6]
 User's Manual (UM), User's Manual for the Comprehensive Inventory Database [7]
 Validation Document (VD), Validation Document for the Comprehensive Inventory Database [8]
 Software Installation and Checkout Forms (various)

The CID is updated with data collected from the generator sites and the WIPP on an annual basis.  Inventory data are compiled, summarized, scaled, reported from the CID, and published in the Performance Assessment Inventory Report (PAIR).  The CID was qualified in December 2006 under QA 19-1 procedures [9, 10].

From 2011 through 2013, LANL performed three qualification activities to verify that the CID database and software continued to perform inventory operations correctly [12, 15, 16].  EPA reviewed LANL's qualification of the CID database and software in 2014 as part of its 2014 CRA review.  In 2011, LANL ran 75 test cases to qualify CID Version 2.00 on Windows computer hardware using Microsoft Office Access and Excel 2010 and Microsoft SQL Server 2008 database software.  In 2012, eleven test cases were run on CID Version 2.01 to test changes resulting from database/software problems reported by users and from change control proposals [15].  In 2013, sixteen test cases were run on CID Version 2.02, also to test database/software changes [16].

The test cases exercised the requirements that the CID database and software are expected to satisfy, as documented in the requirements document [11].  These requirements include data entry, data import, manual data lookup, the ORIGEN-S interface, report generation, and other inventory-related activities [11].  The functionality of TransOrigen was moved into the CID software, and these functions were tested by a number of test cases, such as TC-018 and TC-083 [16].

The approach used by LANL is similar to that described elsewhere in this report, as explained in Section 5.3 (page 12) of the VVP/VD [12].  Numerous test cases were developed to test the more than 200 requirements specified for the CID database and software [11].  The test methodology, results, and Agency findings pertaining to the qualification of the CID are discussed below.

  5.36.3.2	Test Methodology

Microsoft(R) Access Data Project(R) (ADP) technology was used to develop the CID.  The selection of this technology was based upon the following considerations:

 Distributed Architecture
 Increased Development Efficiency
 Scalability
 Security
 Performance

The CID server component is a database running on a Microsoft SQL Server 2000 platform.

The VVP [5] identifies the following requirements:

    Twenty-three (23) functional requirements pertaining to the waste streams (e.g., volume shipped, radionuclide composition, complexing agents, oxyanions, etc.)
    Twenty-six (26) reporting requirements (e.g., projected waste generation, waste shipping progress, waste stream volumes by container type)
    Thirty-four (34) interface requirements (e.g., viewing, conflict detection)
    Six (6) nonfunctional requirements (e.g., portability and performance)

The VD [8] presents 34 test cases that are designed to test all of the functional requirements listed above.  Table 5.31-3 presents the workstation configurations used during validation testing.

Table 5.31-3.	Workstation Configurations Used during Validation Testing

Workstation PN1098668 (Dell Latitude laptop, Property No. 1098668)
    CPU Speed: 996 MHz
    RAM: 524 MB
    HD Size: 30 GB
    Operating System: MS Windows 2000 Professional SP4
    System Software: MS Access 2003 SP2; MS Excel 2003 SP2; MDAC 2.8 SP1
    Screen Resolution Used: 1024x768 pixels

Workstation PN1163363 (Dell Optiplex GX280, Property No. 1163363)
    CPU Speed: 3.8 GHz
    RAM: 2.0 GB
    HD Size: 150 GB
    Operating System: MS Windows XP Professional ver. 2002 SP2
    System Software: MS Access 2003 SP2; MS Excel 2003 SP2; MDAC 2.8 SP1
    Screen Resolution Used: 1024x768 pixels

Server E-CO2 (Dell PowerEdge 2600, Property No. 1145676)
    CPU Speed: 3.06 GHz
    RAM: 2.0 GB
    HD Size: 272 GB
    Operating System: MS Windows Server 2003 Standard SP1
    System Software: MS SQL Server 2000 Standard SP3



To perform the testing, a synthetic dataset was developed to provide a sample set of parameters to test the report generation function.  This synthetic dataset was installed as part of the first test case execution (i.e., Test Case TC-001) and designated the "CIDFull" database, which included the parameters necessary to test the reporting requirements.  Although this dataset was designed to include enough variations in parameters to adequately test the reports, it is not inclusive of every possible combination of parameters.  One simplification in the dataset is that the radionuclide concentrations are not decayed through time.  Although this assumption does not simulate actual conditions, the DOE believes (and the Agency agrees) that for the purposes of testing, it is acceptable.  The parameters that comprise the synthetic dataset are provided in Appendix B of the VD [8].  The data entry requirements that were not tested on the "CIDFull" dataset were tested in other individual test cases.

In 2006, all of the test cases were executed by first running the first two test cases (i.e., TC-001 and TC-002).  These test cases include information in the form of screen shots, spreadsheets, and reports that illustrate the execution of the different steps that are required for all of the test cases [8].  DOE notes that the results of these test cases provide the evidence that the database has met the requirements of the RD [4].  The results from all of the testing are presented in Appendix C of the VD [8].

In 2011, 2012, and 2013, LANL verified that the CID Version 2.00, 2.01, and 2.02 inventory database and software continued to perform inventory operations correctly [12, 15, 16].  LANL ran numerous test cases to qualify CID Versions 2.00, 2.01, and 2.02 on Windows computer hardware using Microsoft Office Access and Excel 2010 and Microsoft SQL Server 2008 database software.  The approach used by LANL is similar to that described above, as explained in Section 5.3 of the VVP/VD [12].  Numerous test cases were developed to test the full range of requirements specified for the WIPP PA (such as decay times up to 10,000 years) for the CID database and software [11].

  5.36.3.4	Test Results

During the test execution, a number of issues were identified.  DOE determined some to be flaws in the test case steps, while others were deemed to be software-related issues requiring correction. These issues, along with an explanation or their respective dispositions, are listed in the table presented in Section 6 of the VD [8]. 

The Agency agrees with the DOE determination that only the issues related to the Hazardous Waste Numbers entry (TC-008) should be corrected in the CID software.  Corrections were therefore made to the CID, and a new version was released (i.e., S.1.00 RC 2).  Because the software was corrected, Test Cases TC-001 and TC-002 (installation of the software) were retested.  Test Case TC-008 was also retested using v.1.00 RC 2 to validate the corrected functionality, and no issues were identified.

EPA closely examined the results of LANL's CID qualification activities and found that the work thoroughly tested the many requirements for the CID database and database software.  As in previous qualifications, some test cases were rerun to verify that problems found during the first execution of those tests had been properly corrected (for example, Test Cases TC-076, TC-080, TC-081, and TC-084) [16].

  5.36.3.5	The Agency's Conclusions

EPA found the testing to be adequate based upon the complete range of requirements tested by LANL for the inventory database and software of the Comprehensive Inventory Database (CID Version 2.02).  Therefore, the Agency concludes that the CID database and software meet the acceptance criteria and are validated for the WIPP PA and the annual inventory reports on LANL computers (Intel(R) Pentium(R) computers with the Microsoft Windows operating system and software).

  5.36.3.6	References
 
 [1]	DOE 1995.  Waste Isolation Pilot Plant Transuranic Waste Baseline Inventory Report, Revision 1.  CAO-94-1005, February 1995.  U.S. Department of Energy.
 [2]	DOE 1996.  Transuranic Waste Baseline Inventory Report, Revision 3.  DOE/CAO-95-1121, June 1996.  U.S. Department of Energy, Carlsbad, New Mexico.
 [3]	DOE 2006.  Transuranic Waste Baseline Inventory Report - 2004, Revision 0.  DOE/TRU-2006-3344.  U.S. Department of Energy - Carlsbad Field Office, Carlsbad, New Mexico.
 [4]	LANL 2006.  Requirements Document (RD) for the Comprehensive Database, INV-CID-01.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [5]	LANL 2006.  Verification and Validation Plan (VVP) for the Comprehensive Inventory Database, INV-CID-02.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [6]	LANL 2006.  Design & Implementation Document (DID) for the Comprehensive Inventory Database, INV-CID-03.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [7]	LANL 2006.  User's Manual (UM) for the Comprehensive Inventory Database, INV-CID-04.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [8]	LANL 2006.  Validation Document (VD) for the Comprehensive Inventory Database, INV-CID-05.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [9]	LANL 2007.  Software Quality Assurance, Revision 3, LCO-QP19-1.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [10]	LANL 2002.  Software Quality Assurance, Revision 1, LCO-QP19-1.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [11]	LANL 2011.  Requirements Document for the Comprehensive Inventory Database, Revision 5, INV-CID-01, July 18, 2011.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [12]	LANL 2011.  Verification & Validation Plan and Validation Document for the Comprehensive Inventory Database, Revision 2, INV-CID-02, July 28, 2011.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [13]	LANL 2014.  Design & Implementation Document for the Comprehensive Inventory Database, Revision 3, INV-CID-03, January 10, 2014.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [14]	LANL 2013.  User's Manual for the Comprehensive Inventory Database, Revision 2, INV-CID-04, December 19, 2013.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [15]	LANL 2012.  Validation Document for the Comprehensive Inventory Database version v.2.01 S.2.01, Revision 0, INV-CID-06, March 10, 2012.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
 [16]	LANL 2013.  Validation Documentation for the Comprehensive Inventory Database version v.2.02 S.2.02, Revision 0, INV-CID-07, December 19, 2013.  Los Alamos National Laboratory - Carlsbad Operations, Carlsbad, New Mexico.
  
  
Summary and Conclusions

In 2006, SNL procured four Compaq ES47 machines to add to the computing resources of two Compaq ES40 and two Compaq ES45 machines.  In addition to the hardware upgrades, the operating system was upgraded from OpenVMS 7.3-1 to OpenVMS 8.2.  The 31 computer codes, 5 libraries, and 3 databases that were used to support the 2009 CRA and 2009 PABC calculations were run on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2 (Table 3.1-1).  Upon completing its review, the Agency concludes that the versions of the computer codes indicated in Table 3.1-1 are approved for use in compliance calculations for the WIPP PA on the Compaq ES40, ES45, and ES47 machines with OpenVMS 8.2.

In 2013, SNL qualified six of the WIPP PA codes for use in the 2014 recertification application (2014 CRA) on the Compaq ES45 and ES47 computers using OpenVMS 8.2.  SNL also migrated the entire WIPP PA to the Solaris Blade platform with SunOS 5.11.  SNL/DOE established a new PA baseline (PABC09) entirely on the Solaris Blade platform using SunOS 5.11 and performed integration tests to verify that the Solaris PA system generates adequate PA results.  EPA concludes from its review that the WIPP PA codes are approved for use in the 2014 CRA on the Compaq cluster with OpenVMS 8.2.  The Agency also approves the migration of the WIPP PA codes, except for EQ3/6, to the Solaris Blade platform with SunOS 5.11 for future compliance calculations, as well as the PABC09 baseline established entirely on the Solaris platform.



