
Will Missile Defense Work? Only Realistic Testing Will Tell

Guest Perspective in Inside Missile Defense, Vol. 7, No. 22, October 31, 2001

By George Lewis, Lisbeth Gronlund, and David C. Wright

In his Guest Perspective, "Why Missile Defense Will Work" [Inside Missile Defense, Oct. 3, p. 10], Bill Davis argues that a mid-course hit-to-kill defense against long-range missiles -- of the type developed by the Clinton Administration and now pursued by the Bush Administration -- will be effective. As co-authors of the most comprehensive critical analysis of the effectiveness of such a missile defense system (and the only one cited by Mr. Davis), we would like to respond.

It is possible to analytically assess whether a particular missile defense system could in principle be effective against a particular type of missile threat. This is what we did in our Countermeasures study, using basic physics principles and assuming that the missile defense sensors and interceptors worked perfectly. We concluded that the fully deployed missile defense system planned by the Clinton Pentagon, which was scheduled for deployment sometime after 2010, would be ineffective against an emerging missile state that incorporated one of several countermeasures that would be easier to build than a long-range missile itself. Although Mr. Davis apparently disagrees with our conclusion, he offers no specific criticisms of our calculations or technical assumptions.

Moreover, Mr. Davis argues not only that the missile defense system could work, but that it will work. Yet the only way to assess whether a specific system will work is by testing it thoroughly. Thus, part of our study was devoted to examining the planned test program and suggesting improvements that would allow the US to assess how much confidence it could have in the defense's effectiveness.

It is indeed appropriate for a testing program to "walk before it runs" as Mr. Davis argues, but it is not appropriate for the US to decide to deploy a missile defense system before it knows if the system will be able to "run." We and other critics of the test program are not calling, as Mr. Davis states, for the immediate use of realistic countermeasures but rather for testing the system against such countermeasures before deciding whether or not to deploy the system.

We now turn to some of Mr. Davis' specific points.

Mr. Davis argues that the small size of a potential missile attack from a developing country would allow very high levels of effectiveness to be achieved. However, while a small missile threat opens up the possibility that an effective defense could be built, it does not mean that any given defense will be effective. Mr. Davis states that against an attack by 10 warheads, a defense with a single-shot kill probability of 90 percent that fires four interceptors at each warhead will have a probability of zero leakage of 99.9 percent. Quite aside from the fact that the proposed US NMD system is also required to counter an accidental Russian launch, which need not be small, there are two fatal flaws in this argument.
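For reference, the arithmetic behind Mr. Davis's 99.9 percent figure can be reproduced in a few lines, assuming (as he does) a 0.9 single-shot kill probability and fully independent intercept attempts; both assumptions are questioned below. A minimal sketch in Python:

    # Mr. Davis's salvo arithmetic, under his assumptions of a 0.9
    # single-shot kill probability and fully independent shots.
    p_ssk = 0.9        # assumed single-shot kill probability
    shots = 4          # interceptors fired at each warhead
    warheads = 10      # size of the postulated attack

    p_warhead_leaks = (1 - p_ssk) ** shots              # 0.0001 per warhead
    p_zero_leakage = (1 - p_warhead_leaks) ** warheads
    print(round(p_zero_leakage, 4))                      # 0.999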

First, the figure of a 90 percent single-shot kill probability is (in Mr. Davis's own word) an assumption, one that has no basis in either real-world or test-range experience. While a figure of 90 percent might be achievable on the test range against cooperative targets, there is no basis for assuming this will be the case in the real world, where an attacker could attempt to exploit weaknesses in the system. There has never been a midcourse hit-to-kill test in which the defense did not have complete a priori information about the nature of the attack (including the characteristics of the mock warhead and decoys) and in which the attacker actually attempted to defeat the defense. Until a defense is tested against missiles with countermeasures, in tests in which the defense is not provided all the relevant details in advance, there is no basis for assuming such an optimistic single-shot kill probability.

Second, Mr. Davis's argument hinges on the assumption that the successive intercept attempts would be independent events -- so that if one interceptor fails, the next one will still have a 0.9 probability of success. This may indeed be true on the test range, where essentially all misses are due to mechanical failures or other quality control problems, but is unlikely to be so in a real-world countermeasures environment. If a particular countermeasure causes the first interceptor to miss, then it is also quite likely to cause subsequent interceptors to miss as well. Rather than being independent events, successive intercepts are in fact correlated, and firing multiple interceptors may not significantly increase the overall defense effectiveness.
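The effect of such correlation can be illustrated with a deliberately simple model, which is our own illustrative assumption rather than a description of any actual system: with some probability a countermeasure defeats the entire salvo, and only otherwise do the four shots behave independently with a 0.9 kill probability. A sketch in Python:

    # Hypothetical two-mode model (an illustrative assumption, not data):
    # with probability p_cm a countermeasure defeats the whole salvo;
    # otherwise each of the 4 shots kills independently with probability 0.9.
    p_ssk, shots = 0.9, 4
    p_cm = 0.3     # assumed chance the countermeasure defeats the defense

    p_kill_independent = 1 - (1 - p_ssk) ** shots         # 0.9999
    p_kill_correlated = (1 - p_cm) * p_kill_independent   # about 0.70
    print(p_kill_independent, round(p_kill_correlated, 3))

In this model no number of additional interceptors can raise the per-warhead kill probability above 1 - p_cm: salvo size buys reliability against random failures, not against a countermeasure that defeats every interceptor in the same way.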

Mr. Davis also argues that critics underestimate the resilience of the defense because they do not have access to classified information about its ability to defeat countermeasures. However, access to classified information would not have altered the analysis in Countermeasures, which was physics-based, not engineering-based. We assumed that the defense technology was limited only by physics; classified engineering details would only have resulted in a less optimistic picture of the defense. Our study showed that for certain countermeasures there was no observable signature that the defense could exploit to distinguish the warhead from the decoys, regardless of what kind of radar, infrared, or visible-light sensor the defense was using. No one, including Mr. Davis, has pointed out any errors in these calculations.

Mr. Davis further argues that critics incorrectly endow emerging missile states with the ability to build "perfect" countermeasures, and notes that even the United States was unable to build perfect countermeasures. He then asserts that emerging missile states would not invest in countermeasures in which they cannot have full confidence. Yet the very limited flight test programs of emerging missile states mean that they are already investing in missiles in which they cannot be confident.

However, countermeasures do not need to be perfect to be effective. The United States has developed missile defense countermeasures that, while not perfect, were assessed to be effective. Indeed, perfect countermeasures are only needed if the defense has essentially perfect advance knowledge of the nature of the attacking threat, and while this may be the case on the test range, it will not be the case in an actual attack. The use of anti-simulation, in which the warhead characteristics are either disguised or altered from those expected, further reduces the need for "perfection."

Moreover, the US had a much harder countermeasures task than would states seeking to counter the US mid-course hit-to-kill defense. The countermeasures developed by the US had to be effective against Soviet interceptors armed with large nuclear warheads. Many of the countermeasures discussed in Countermeasures would either be ineffective or much more difficult to implement against such a nuclear-armed defense. Using hit-to-kill for midcourse defense makes the US system vulnerable to defeat by far-less-than-perfect countermeasures.

Mr. Davis also argues that the critics of the missile defense system fail to take into account the resilience of the baseline system to countermeasures, and ignore the fact that "there is a plan for block upgrades to the baseline system that will allow it to grow to handle increasingly complex and sophisticated countermeasures." In fact, the Countermeasures study did take into account several modifications to the defense system that did not require major hardware changes but might serve as counter-countermeasures. However, the fundamental purpose of the report was to assess the capabilities of the full planned system, which the Clinton Administration had already stated was able to defeat even "complex" threats. Our Countermeasures study showed that this was not the case.

More fundamentally, there is no technical specificity to the plan for block upgrades, only a desire to improve the system. Such a desire does not mean that effective upgrades will be possible. Mr. Davis asserts that to be confident in the block upgrades, the US will need to "monitor the threat continuously and make adjustments as new events are observed." This is a fundamentally flawed assumption. As the Rumsfeld Commission on the Ballistic Missile Threat to the United States noted in its 1998 report, emerging missile states will conduct very few flight tests of their missiles (unlike the US and Russia). In addition, these countries are likely to place a high premium on keeping their countermeasure developments secret. Indeed, there are unlikely to be any "new events" from which the US could observe and learn anything about the countermeasure programs of emerging missile states. But the absence of evidence is not evidence of absence.

Mr. Davis next argues that while hit-to-kill intercept tests have scored 15 successes in 33 tests, it is more relevant that of the 18 tests in which the endgame was reached, 15 were successful. He states that this shows "that when quality control type problems did not preclude testing of the critical 'hit-to-kill' functions, the success rate is over 80 percent."

There are numerous problems with this argument. First, 11 of the 18 intercept tests (and nine of the successes) that Mr. Davis cites as reaching the endgame were tests of the Patriot PAC-3 missile and its predecessors. While PAC-3 uses hit-to-kill to destroy missiles, it is a terminal-phase, low-altitude, radar-homing interceptor that maneuvers using atmospheric forces to intercept short-range missiles. It has essentially nothing to do with the midcourse, exo-atmospheric, infrared-homing, divert-thruster-maneuvering interceptor he is discussing in his article.

Second, Mr. Davis's statistics are based on the claim that of the 16 failed intercept tests (out of a total of 22 tests) of midcourse systems (including THAAD), only one (the second test of the NMD system on January 18, 2000) actually failed in the endgame. This is plainly wrong, as even a cursory inspection of publicly available information on the testing record shows.

For example, consider the second of the two intercept attempts for ERIS (the Exoatmospheric Reentry-vehicle Interceptor Subsystem) on March 13, 1992. The target warhead was accompanied by a single balloon decoy, separated from the warhead by about 20 meters. The kill vehicle was initially programmed to fly to the midpoint between the warhead and decoy, while collecting data on both, and then to home on the warhead. However, because the kill vehicle unexpectedly detected the warhead slightly later than the decoy, it initially began to home on the decoy. Although it subsequently detected and homed on the warhead, it did not have enough time to recover, and reportedly missed its target by several meters. This was clearly a failure in the endgame, but Mr. Davis's statistics do not count this intercept attempt as reaching the endgame. Similarly, he does not count the second and third Homing Overlay Experiment intercept attempts as reaching the endgame, even though both were reported to have successfully demonstrated homing, with software and electronics errors causing them to miss. In fact, when other uncounted endgame failures are taken into account, the success rate in the endgame is below 50 percent.
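To make this recounting concrete, here is a sketch using only the figures cited above, treating the ERIS miss and the two Homing Overlay Experiment misses as endgame attempts; the further uncounted endgame failures referred to above are not enumerated here:

    # Partial recount of the endgame record, using only figures cited in the text.
    davis_endgame_tests, davis_endgame_hits = 18, 15
    pac3_tests, pac3_hits = 11, 9           # PAC-3 and its predecessors

    midcourse_tests = davis_endgame_tests - pac3_tests   # 7
    midcourse_hits = davis_endgame_hits - pac3_hits      # 6

    # Count the ERIS miss and the two HOE misses as failed endgame attempts.
    midcourse_tests += 3
    print(midcourse_hits / midcourse_tests)              # 0.6

Even this partial recount lowers the midcourse endgame success rate to 60 percent, and the additional failures not counted here push it below 50 percent.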

Most important, however, is the simple fact already stressed above: success on the test range against cooperative targets does not establish that the system will work in the real world, where adversaries would actually attempt to defeat the system. Each of the intercept tests conducted so far has been carefully designed and set up to be successful -- without quality control errors and other malfunctions, the success rate would be 100 percent. This will not be the case in actual use. This point is well illustrated by the only actual use of a ballistic missile defense system -- the Patriot PAC-2 in the Gulf War. Although Patriot reportedly had a perfect record against theater missiles on the test range -- 17 successes in 17 intercept attempts -- it was a complete or near-complete failure in its efforts to intercept the Iraqi missiles. This discrepancy occurred because the Iraqi missiles, which broke apart and maneuvered erratically, were different from the test range missiles, which flew on smooth, predictable trajectories.

Finally, Mr. Davis argues that the technical maturity of the midcourse missile defense system indicates that it has a high probability of success. In particular, he cites Technology Readiness Levels (TRLs), initially developed by NASA and used by the GAO to assess the technological maturity of defense programs. The TRL scale runs from 1 to 9, with 9 being the most mature. Mr. Davis states that the intercept tests now being carried out are in the TRL range of 6 to 7, with level 6 "being in a high fidelity laboratory or simulated operational environment" and level 7 "being an actual system prototype in an operational environment," and with level 6 corresponding to a high probability of success. However, this assessment is correct only if the planned operational environment in which the system will be used is one without credible countermeasures, since that is the environment in which the system is currently being tested. Put another way, this assessment of the NMD system's technical maturity may well indicate that it has a high probability of eventually being successful on the test range, but it does not show that the system will be successful in the real world, since the system has not been tested in that environment.

George Lewis is Associate Director of the Security Studies Program (SSP) at the Massachusetts Institute of Technology (MIT); Lisbeth Gronlund and David Wright are Senior Staff Scientists at the Union of Concerned Scientists in Cambridge, Mass., and Research Fellows at SSP.
