It’s Broken, Fix It!

This article was first published in the Armed Forces Journal in April 1996, submitted under the title "The Emperor's Clothes," which was probably no less provocative than the title the magazine chose for publication. The editor's title changed the meaning of my article, which didn't make me happy. Interestingly, pseudo-analytical decision-making continues in the Department of Defense to this day, rendering the article as germane now as it was in 1996.

Admiral Bill Owens' recent retirement from his position as vice chairman of the Joint Chiefs of Staff (VCJCS) gives us an opportunity to examine one of his more controversial legacies: the process of determining joint warfighting requirements known as the Joint Warfighting Capabilities Assessment (JWCA). It's been almost two years since Admiral Owens, over the objections of some of the service vice chiefs, instituted this process, one that was supposed to revolutionize the way service program plans were put together.

Everyone agreed that, prior to Owens' initiative, service program planning was disjointed and ad hoc. No one was really doing a high-level, first-order scrub of the nation's military requirements. Although the Joint Requirements Oversight Council (JROC) was responsible for high-level military requirements, it really only looked at specific system requirements rather than at overarching capabilities. And the services made decisions affecting their piece of the pie independently, without regard to the larger picture, because by law that was all they were permitted to do. As a result, capability gaps existed in some areas, while duplication existed in others.

As we began the 1990s and entered a new era of austerity, a question lingered: Was this duplication of capability really necessary? Congress, citing cursory reviews, frequently opined that the duplication of capability was redundant, and therefore not necessary. In response, the Department of Defense usually said that the duplication was necessary, that no real redundancy existed. But whether DoD's answers were right or wrong, they were based on equally ad hoc, short-term studies that provided little insight and, as far as some members of Congress were concerned, answered no questions. Congress, therefore, demanded repetitive reviews of the "redundancy" issue, hoping eventually to get the "right answer." This frustrating cycle led planners to search for a process that would both solve the "service integration" problem and address Congress' redundancy concerns once and for all.

There is no doubt that a system that integrates joint capabilities to eliminate redundancies and identify shortfalls, without regard to any real or imagined bias, has strong intellectual appeal. But one person's redundancy is another person's competition. And although competition may not be the mother of invention, it's certainly a catalyst. Competition leads to innovation. Further, sometimes you need backup systems to eliminate single-point failures in vital, life-threatening situations. So sometimes redundancy is good. Were we to seriously eliminate all forms of redundancy within DoD in an effort to increase perceived efficiency, actual efficiency would probably decrease, as would effectiveness. And when lives are at risk, effectiveness is usually more important than efficiency. Admiral Owens thought there had to be a better way to sort these issues out.
So, based on a Navy-specific assessment system he implemented when he was the Navy's requirements and resources director (N8), he invoked the JWCA process, intending for it to look at US military capabilities writ large, detect capability holes, eliminate redundancies, and suggest methods to correct these deficiencies. This system was supposed to be devoid of parochial interest, an "honest broker" approach to determining our highest-payoff systems. And if nothing else, Owens is certainly to be commended for instituting a system that tries to get service planners to look "outside the box," to think as generic military planners rather than simply as planners for their specific services. But does the process work, or is it just a fancy way to get senior officers to give lip service to jointness? Since the JWCA (pronounced "juh-wicka") process has been in place for at least one complete program review cycle, it's time that we honestly evaluate its effectiveness.

The JWCA Process

In theory, the JWCA process is straightforward, even elegant. It can be described as follows. The general concept of warfighting is first divided into its component parts: land and littoral warfare; strike; sea, air, and space superiority; and so on. The Joint Staff then assembles teams of action officers from the services and defense agencies, one team for each warfighting area. The JROC, composed of the four service vice chiefs of staff and led by the VCJCS, provides oversight to each team. Actual warfighting requirements are analyzed for each area. These requirements are then compared to a list of service and agency programs, both fielded and planned, that are intended to address each requirement. Redundant capabilities, requirement excesses, and capability shortfalls are identified as issues. The team puts together a list of recommended fixes. The list is modified and approved by the JROC, and then is briefed to the warfighting commanders-in-chief (CINCs). The final list, as amended by the recommendations of the CINCs, is forwarded to the secretary of defense in the form of a list of program recommendations or assessments of the chairman of the Joint Chiefs of Staff. The secretary of defense then makes the final decision as to how specific service programs should be modified.

If in practice the system worked as in theory, it would probably go a long way toward fixing the integration problems the services have experienced in the past. But as usually happens, practice and theory have diverged in some areas, so the process does not always work as planned. In fact, by creating the impression that the result is a product of analytical rigor, this process may lull decision-makers into a false sense of security, and in so doing may actually do harm. These criticisms have been voiced at the working level of the process for quite some time, but have had mixed success in reaching decision-makers. Owens' departure provides an opportunity to finally air all the criticisms, with the objective of constructing a better, more efficient, and more defensible process of planning the nation's military force.

Problem Areas

The JWCA process is supposed to help make financially constrained choices. Unfortunately, it is sometimes structurally biased against the most financially sound options, for the following reasons.

It sometimes misuses so-called "operations research" methods. This weakness has been recognized by the Joint Staff, and the staff has taken action to minimize it.
But according to some JWCA team leaders and players, the weakness still exists to some extent. It can be summarized as follows. In order to generate a "best fit" solution, the JWCA process must compare the performance of several very different systems under similar conditions. But deciding whether "bomb A" is better than "missile B" for attacking a particular target is not trivial. Usually there are countless variables involved in deciding which system should get top billing. For example, early in a conflict, before enemy air defenses are suppressed, the missile may be the favored alternative despite the fact that it costs significantly more than the bomb, because using missiles is cheaper than getting your airplanes shot down. But later, once we are able to fly manned aircraft with virtual impunity, the cheaper system will probably come out on top in a cost-benefit analysis.

Those complexities are difficult to model. Operations research methods are used to try to model them, so OR experts are sometimes brought in. But these experts are usually civilian contractors, called in at significant cost. And although some of them are very good, some don't really understand the military systems being considered. Nevertheless, they develop matrices, generate measures of effectiveness, produce cost ratios, and then sit the military guys down to insert judgment factors that are supposed to compensate for the intangibles. Finally, all the data is inserted into a spreadsheet, and out pops the answer, nice and neat. But unfortunately, this answer may have no correlation with reality, because we have discarded the first rule of computing: garbage in equals garbage out. The OR guys are usually quite honest about the limitations of their tools, but since they may not understand the nuances of the specific systems they are dealing with, they may not be able to detect when their tools are being used improperly. What they may not know is that the input has been affected by our next problem area: outcome-shaping.

Outcome-shaping. Military action officers are generally a bright group of individuals. They understand the limitations of OR methods and that no method of numerical analysis can compensate for the complexity of the battlefield. As long as conflict involves one intellect pitted against another, no computer will ever be able to reliably predict the outcome of either a battle or an engagement. As John Madden has said of football, the best team does not always win. That's why we play the game. This universal truth leads many military staff officers involved in the JWCA process to believe, correctly I think, that military judgment is frequently the best way to determine what is needed to address a specific requirement. And military judgment leads them to intuitively know what the right answer should be for a particular capability. All these officers have to do is make sure the numerical analysis does not impeach their judgment. So, knowing the right answer, they simply make it so by manipulating the OR model's military judgment input variables. There is nothing inherently wrong with this; the variables are in place specifically to provide a way of compensating for the intangibles. Speaking as a former member who was well entrenched in that "shaping" process, I understood what was going on. So, I think, did the decision-makers. And as long as the answer generally agreed with intuition, nobody would raise a stink. But this process is shockingly wasteful.
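To make the mechanics concrete, consider a minimal, hypothetical sketch of the kind of weighted measure-of-effectiveness scoring described above. The systems, criteria, scores, and weights are all invented for illustration; the point is only that whoever sets the "judgment" weights effectively chooses the winner.

```python
# Hypothetical weighted measure-of-effectiveness (MOE) comparison.
# All names and numbers are invented; none come from any real JWCA analysis.

# Notional per-criterion scores (0-10) for two competing systems.
scores = {
    "bomb A":    {"cost": 9, "survivability": 3, "lethality": 7},
    "missile B": {"cost": 4, "survivability": 9, "lethality": 7},
}

def weighted_score(system, weights):
    """Simple weighted-sum MOE: higher is 'better' under the chosen weights."""
    return sum(weights[c] * scores[system][c] for c in weights)

# Two plausible sets of 'military judgment' weights. Nothing in the arithmetic
# says which set is right; picking the weights picks the winner.
weights_early_war = {"cost": 0.2, "survivability": 0.5, "lethality": 0.3}
weights_late_war  = {"cost": 0.5, "survivability": 0.2, "lethality": 0.3}

for label, w in [("early war", weights_early_war), ("late war", weights_late_war)]:
    ranked = sorted(scores, key=lambda s: weighted_score(s, w), reverse=True)
    print(label, "->", ranked[0], "preferred")
```

Run it and the preferred system flips from "missile B" to "bomb A" purely because the subjective weights changed, which is exactly the leverage the outcome-shapers exploit.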
The fact that thousands of man-hours, and potentially millions of dollars, are wasted in getting a product that could have been generated by decree seems to be ignored. Or perhaps it is seen as yet another sacrifice to the god of analytical insight, which of course in this case is a false god. The Joint Staff is now sensitive to this and has taken steps to scale back the improper use of OR methods, but misuse is still taking place in some JWCA groups.

It sometimes fails to require overlapping evaluation of systems. When, as the Navy's requirements and resources director, Admiral Owens developed the Navy equivalent of the JWCAs, called Joint Mission Area (JMA) assessments, he included a brilliant construct: a system could be analyzed by more than one mission area if it contributed to those areas. So, if a ship was useful in a littoral warfare role, it was analyzed by the "littoral warfare" JMA. If it was also key to surveillance, the "surveillance" JMA group had to look at it as well. When all the JMA studies were complete, they were integrated into an investment balance report, where multiple capabilities became apparent. This synergy added to the value of the weapon system, and its contribution could be more accurately represented in the final product.

While overlap was encouraged in Navy JMAs, it is structurally minimized in the Joint Staff's JWCAs. That is, one JWCA team leader is reluctant to add work to an already overloaded team by analyzing a system he knows another team is already looking at. This omission creates significant problems. Consider the following example: missile defense systems are evaluated by the "sea, air, and space superiority" JWCA, while weapons of mass destruction are evaluated by the "counterproliferation" JWCA. But how you build a missile defense system is greatly influenced by whether the missile you are defending against is carrying a conventional, nuclear, chemical, or biological warhead. For example, if you destroy an incoming missile at an altitude of 1,000 feet, that may be fine if the incoming missile is conventional. But if it was armed with a biological weapon, causing it to detonate at 1,000 feet may actually be worse than if you had destroyed it in space or even allowed it to strike the ground, because destroying the warhead at this altitude allows better dispersal of the toxic agent. You therefore must consider counterproliferation issues when dealing with missile defenses. But because overlapping analysis is not required in this case, and missile defenses belong to the "sea, air, and space superiority" and not the "counterproliferation" JWCA, these issues are not being dealt with.

Even more importantly, this failure to require overlapping analysis across mission areas skews the outcome by marginalizing multipurpose platforms and inappropriately devaluing the contribution of multimission-capable systems. An unintentional result of this bias is that the process drives you toward single-purpose systems, which are inherently more expensive in the aggregate than multipurpose systems. Hence, a process that is intended to save money ends up costing more money in the long run. This weakness has also been acknowledged by the Joint Staff, and it has taken action to correct it. But this corrective action does not include mandatory analysis to identify cheaper, multimission systems. It is important that overlapping analysis be declared mandatory, for the reasons cited above.
Get me a product, now! The process runs on a never-ending series of immediate deadlines. A two-hour staffing deadline on a 40-page document that summarizes months' worth of work is not uncommon. The first rule of staff work has been forgotten: if you want an answer bad, you'll get it bad. Service cynics charge that these short deadlines are the Joint Staff's primary tool for minimizing criticism; that is, you can't object to an issue if you don't have the time to find it. Having worked on the Joint Staff, I don't believe this. I can tell you that staff inefficiencies are quite capable of causing delays of this magnitude. But whatever the cause, no useful review of a complex issue can be completed in the short-turnaround deadlines prevalent in the JWCA system.

It focuses on developing consensus. Consensus is sometimes useful, especially when dealing with a variety of players that have a great deal of power in their own right. And one of the strengths of the JWCA process is that it educates senior military officers about emerging issues that may become very important to the future security of the United States. But sometimes it seems as if the answer is less important than the audience. Rather than taking the time necessary to get the correct answer, the concern sometimes seems to be getting the "right" answer, one that, among other things, the CINCs will accept. This focus on consensus at the expense of the product sometimes steers the process toward the expedient, the hot topics of the day. This problem is structural, because by law the CINCs must focus on day-to-day operations within their regions, rather than on a military that must transcend regional biases, one that won't even exist for 10 or 15 years. That is, the Goldwater-Nichols Act intended for the CINCs to be advocates of the "here" and the "now." Issues that far down the pike simply don't show up on their radar screens. If the CINCs forgo their parochial view in favor of the larger picture, then nobody will speak for their theaters of operations; hence, they will not be doing their jobs.

It loses sight of its target. The process is supposed to identify significant excesses, overlaps, or shortfalls, but because of the zeal to generate consensus, it sometimes creates nothing more than a CINC "wish list." Looking at this final product, one can only conclude that this is a very convoluted and expensive way to produce something that, in the end, could have been generated by a couple of action officers given a week's time.

WHAT TO DO

Reduce the bureaucracy. The current system is too bureaucratic; the groups are too large. When you try to satisfy 21 players, each of whom has an equal vote, you will take five times as long as you should and will end up satisfying no one anyway. Cut the teams way back. Each group should have no more than two members from each service. Supporting characters should play only behind the scenes. Service staffs should be restructured to provide full-time members rather than having officers perform these duties in a collateral (and sometimes perfunctory) manner. And only the services should be given membership; supporting defense agencies could provide input, but should not get a vote. The law of the land says that the services are responsible for providing, training, and equipping forces for the unified CINCs. We should let the services do their jobs.

Focus on the "big picture" only. The Goldwater-Nichols Act gave the chairman of the Joint Chiefs of Staff authority to voice his opinion on service budget matters.
But the JWCAs spend a great deal of time in the weeds, way below the threshold where any significant money could be saved or where Joint Staff officers have the program-specific, detailed knowledge to make reasoned and reasonable decisions. Admiral Rickover's famous maxim, "the devil is in the details," may be true, but we're not talking about nuclear safety here. There is a time when the details are important, and a time when details are just details. The Joint Staff has to stop trying to be all things to all people.

Mandate overlapping analysis where de facto mission overlap exists. However the JWCAs are finally named and categorized, systems have to be evaluated, for the reasons cited above, in every JWCA area to which they contribute. We must give priority to multimission systems to save money in the long run.

Remove the quantitatively rigorous facade. Operations research methods have very good uses, but they have serious limitations as well. Let's discontinue the practice of using them to provide a sense of analytical rigor in cases where quantitative methods really just skew whatever insight might otherwise be available. Let's accept the reality that military judgment is sometimes the only way to solve a mushy problem.

Develop a reasonable front end. Define a long-term strategic concept and a set of alternative future scenarios against which the semi-random collection of JWCA recommendations can be rationalized.

Although I do not presume to think these corrections will yield a perfect, or even a good, process, I can promise that the outcome will be improved over what we are doing today.
