Wednesday, March 14, 2012

Recurring lessons in weapon T&E programs

Introduction

Test and evaluation (T&E) in acquisition has long been a subject of research interest among students and faculty at the Naval Postgraduate School (NPS), Monterey, CA. Over the years, NPS researchers have studied T&E in a number of major system acquisition programs, including the Abrams Main Battle Tank (M1A2), the Javelin Anti-Armor Weapon System, the Enhanced Position Location Reporting System, the Avenger Air Defense System, the Kiowa Warrior armed scout helicopter, the Maneuver Control System, the Family of Medium Tactical Vehicles, the Air Defense/Anti-Tank System, and the Apache (AH-64) Attack helicopter. These programs represent several different types of systems ranging from electronic and data communications and software to major weapon systems. They also represent different types of developments, including system upgrades, nondevelopmental items (NDIs), and full-scale developments.

These research projects indicate that several developmental and operational T&E issues recurred in system acquisition programs during the past decade. This article provides a summary and brief overview of significant issues identified by the author and several NPS students. More detailed analyses and findings may be found in two NPS Master of Science theses: A Comparative Analysis of Developmental Test and Evaluation in the United States Army by Arthur J. Aragon Jr., USA; and An Analysis of Weapon System Readiness for Operational Testing by James B. Mills, USA.

Sources for these research efforts included program management office (PMO) personnel, program testers, analysis personnel, user representatives, and contractors participating in T&E of the major programs selected. Research included reviews of after action reports from Service T&E agencies, lessons learned reports of major systems, General Accounting Office (GAO) reports, Congressional subcommittee reports, developmental test and evaluation (DT&E) and operational test and evaluation (OT&E) reports, and technical and professional journals.

Recurring T&E Issues

Our research indicates that five significant issues and problem areas exist in conducting T&E programs. In order of significance, these T&E problem areas are:

T&E schedules,

The acquisition process,

Test culture,

Resource management, and

Changing requirements.

Additionally, the type of development strategy, such as the use of NDI or system upgrades, may influence which of these problems is most prevalent.

Scheduling Issues

Scheduling difficulties continue to be the most common and significant problems in conducting T&E. These problems are caused by an acquisition process that emphasizes completing the test on schedule rather than conducting the test according to plan. This leads to overoptimism and, thus, unrealistic schedule estimates. The process may cause program managers (PMs) and their staffs to develop unrealistic T&E schedule estimates without considering historical tests or the experience of the tester and analyst. The PM's overoptimism in planning and scheduling also forces other agencies to set unrealistic plans or schedules that are based on meeting aggressive schedules. Furthermore, because of excessive schedule "push," usually by the PMO, many systems are not fully configured or ready for testing.

Acquisition Process Issues

Nearly every agency named the acquisition process itself as a significant stumbling block to conducting robust T&E programs, as well as a cause of other related problems. The reasons for these problems included the funding process and PM overoptimism. The funding process rewards PMs for being on schedule, being under budget, and meeting the criteria of the next milestone. It does not reward PMs for being critical and objective about their systems, nor does it reward them for taking a user perspective. The acquisition process drives PMOs to focus on cost and schedule and to sometimes regard T&E as an opportunity to recoup time and money. Because many systems are competing for limited Defense dollars, PMs must understate the actual technological risk in programs to stay competitive; therefore, they cannot include all the needed testing in their acquisition or test plans. This may be a result of decisionmakers placing programs "at risk" of cancellation if they perceive too much development risk in a program.

Test Culture

Research has found that a negative test "culture" exists in many PMOs, and this culture may have been the basis of testing problems. Several PMOs, and sometimes contractors, have displayed a negative attitude toward testing, testers, and analysts. The representative causes noted for this problem included the acquisition process itself, lack of PMO understanding of test and analysis capabilities and constraints, and the assumption that testers and analysts always require more or excessive testing. However, it was also found that some testers and analysts have earned poor reputations among program offices by conducting tests that appeared to add no value to the process or testing for weapon capabilities that were beyond the design requirements.

Resource Management

For many Army, Navy, and Air Force systems, a majority of the problems that occurred during OT&E were directly related to test resource issues. GAO cited 27 cases where important test resources were limited or not available for testing. In spite of this apparent history of problems, resources still do not appear to receive the attention they deserve, and testing usually remains underfunded.

Resource management of critical test assets was also identified as a major problem in DT&E. The primary causes of this problem included short-term funding and limited resources (hardware and software). A system entering DT&E without adequate test funding may not receive the resources in the lead time needed for proper test conduct. Lack of funding could delay test setup, delay instrumentation and equipment checks, and reduce needed test support personnel.

Short-term funding also caused PMs to desire and plan for near-perfect success in their test programs. Part V of the Test and Evaluation Master Plan (TEMP) details the resources required; however, it appears that resource requirements do not get the attention they warrant. Systems under development are often constrained by limited prototypes, test models, and versions of software, and they may be spread across the country. The lack of resources can severely limit effective testing, evaluation, and reporting.

Changes In Requirements

Changes in requirements were identified as a major problem for T&E. The causes for this included lack of coordination and/or communication between agencies and the lack of understanding of DT&E and OT&E processes among some combat developers. Lack of communication and coordination resulted in documents such as the Operational Requirements Document, the TEMP, and the contract not matching in terms of requirements. This mismatch has caused difficulties in defining test requirements and made test planning and the conduct of tests more difficult and expensive than originally estimated. Combat developers may not be totally familiar with the test process and may not realize the impact that a less than fully coordinated requirement change has on the T&E process.

Other Findings

NDI And System Upgrade Programs. Another finding from our ongoing research is that a system's development strategy may be related to the type of problems a system encounters. It appears that PMs and decisionmakers usually underestimate the actual T&E required for NDI and system upgrade programs. One of the main causes for this underestimation is that these types of acquisition programs tend to promote very high expectations among PMs, senior decisionmakers, and other agencies. It is usually perceived that NDI and system upgrade programs are less risky and, therefore, any T&E problems of these programs should be minimal (although the contrary is more likely). As a result, if T&E program problems arise or failures occur, senior decisionmakers, other agencies, and even Congress may increase scrutiny, reassess the system, or place the program "at risk" for cancellation. Therefore, some PMOs want to significantly reduce testing to limit their exposure to possible failures and increased scrutiny.

Early Involvement Of Test Participants. Early involvement of the tester, analyst, and combat developer is critical to minimizing and/or preventing T&E problems. Having the PMO bring these agencies in early to help estimate, plan, and coordinate the test effort was the most common recommendation made across systems, agencies, and all categories of programs.

Additionally, there is benefit to having test personnel available during requirement writing to ensure that requirements are "testable" or at least capable of being evaluated.

T&E Is A Risk-Reduction Tool. It has often been repeated among testers that decisionmakers need to fully understand the role of testing in the systems engineering process. As a risk-reduction tool, the test process should identify and eliminate infeasible alternatives during high-technology program development. Therefore, some system failures during development testing should be expected and should not put a program "at risk" for cancellation when they occur. PMs and contractors should also realize the value that DT&E provides to their development effort and should push for more testing rather than less.

Conclusion

Several recurring lessons have surfaced over the years during the evaluation of weapon T&E programs, including the following:

The PM should start test planning earlier with all the cognizant players and agencies represented, including the tester, the analyst, the contractor, and the combat developer.

Historical information and data from previous tests should be used to better estimate future test costs, schedules, and resource requirements.

PMs should plan for contingencies and not assume perfect success in the test process, while testers should demonstrate more flexibility in packaging test programs.

PMs, as well as others, should avoid underestimating DT&E requirements for NDI and system upgrade programs.

Decisionmakers should fully understand the risk reduction role of T&E in the systems engineering process. They should expect some failures to occur in DT&E and not place a program "at risk" for cancellation when a failure occurs.

[Author Affiliation]

THOMAS H. HOIVIK (USN, Ret.) is a Professor of Operations Research at the Naval Postgraduate School, Monterey, CA, specializing in high-technology program management, systems engineering, and test and evaluation. A former Experimental Test Pilot, he has been a Navy aircraft Program Manager; Director, U.S. Naval Test Pilot School; and Commander of a major naval air base and squadron. He was a Federal Executive Fellow at the Center for Strategic and International Studies and has taught at NPS for 16 years.
