Intelligence dissemination management
From Wikipedia, the free encyclopedia
Intelligence dissemination management is a maxim of intelligence arguing that intelligence agencies advise policymakers instead of shaping policy.[1] Due to the necessity of quick decision-making in periods of crisis, intelligence analysts may suggest possible actions, including a prediction of the consequences of each decision. Intelligence consumers and providers still struggle with the balance of what drives information flow. Dissemination is the part of the intelligence cycle that delivers products to consumers, and intelligence dissemination management refers to the process that encompasses organizing the dissemination of the finished intelligence.
Intelligence information ranges from the equivalent of "we interrupt this television program" to book-length studies which may, or may not, be read by policymakers. Large documents sometimes are legitimately for specialists only. Other lengthy studies may be long-term predictions. Recent worldwide events show that high-level policymakers simply do not read large studies, while staff briefers may.
In principle, intelligence is merely informational, and does not recommend policies. The effects of alternatives are taken into account in at least two specialized methods. A net assessment, also known as a correlation of forces analysis or a strategic assessment, compares the capabilities of both parties and examines the potential outcomes of different courses of action. The other method is to combine information on one's own capabilities with the best information on the other side, and run realistic role-playing games or simulations, with people having senior policy experience either acting as the opposition, or possibly executing one's own role in a hypothetical situation.
Models
Dissemination and use decisions need to consider the nature of interaction between provider and consumer, and the special requirements of security.
In logistics,[2] the two basic models are:
- push: the producer initiates the flow and the consumer receives it
- pull: the consumer requests or initiates the flow and the provider generates it.
Stating these as binary models, however, does not reflect the time factor. Drawing from business, another way of thinking about flow is the idea of just-in-time (JIT) inventory management, where a minimum of parts stays in a factory or store, with a closed loop between supplier and manufacturer/seller. Sellers keep the producers informed, in real time, of their inventory and consumption rate. Suppliers adjust the rate of production and the mix of products to keep an efficient logistic pipeline filled but not overflowing.
Push and pull, and JIT, models apply in intelligence as well, but are not always recognized as such. Logistical models, however, are simpler than certain cases of intelligence: where the logistic model sends orders in one direction and receives goods in the other, some, but not all, forms of intelligence are interactive. To bring them together in a consistent model, consider, for a set of intelligence events:
- Intelligence producer activity
- Intelligence consumer activity
- Flow type caused by event
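The push/pull/interactive distinction can be sketched as a small classifier over these three event attributes. This is an illustrative sketch only; the function name and labels are not standard intelligence terminology.

```python
def flow_type(producer_initiated: bool, consumer_initiated: bool) -> str:
    """Classify an intelligence flow event by which side drives it.

    Examples from the text: looking up a basic country reference is
    pull (consumer-driven); issuing a warning is push (producer-driven);
    a warning followed by consumer questions is interactive.
    """
    if producer_initiated and consumer_initiated:
        return "interactive"
    if producer_initiated:
        return "push"
    if consumer_initiated:
        return "pull"
    return "none"
```

JIT, in this framing, is a push flow whose rate is continuously tuned by pull-style feedback from the consumer.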

Intelligence has to be relevant. Assuming that some reference is considered to be useful, such as a basic country reference, users pull information from it when they have a question. There is relatively little interaction involved in producing the reference; analysts add to it when they have new material.
While analysts will do the basic work in developing indications and warning checklists, they should get reality checks from the consumers. Analysts drive the process, however, when they issue warnings.
The issuance of a warning will almost always generate questions. A tactical report, by contrast, may be sufficient in itself, needing no further clarification before the consumer takes immediate action. Situation monitoring involves a steady flow of analyst actions, but there will be fairly frequent refinement requests from the consumers.
While analysts generate estimates, estimates may not be relevant unless consumers have been involved in defining the requirements for estimates.
Irrelevance is a related and arguably bigger problem for analysts than politicization. Intelligence analysis rarely impresses itself upon policymakers, who are inevitably busy and inundated with more demands on their time and attention than they can possibly meet. Intelligence officials must draw attention to their product and market their ideas. This is especially true in the case of any early-warning or intelligence-related development that has potentially significant consequences for important interests. A phone call, a personalized memorandum, a meeting: any and all are required if the situation is sufficiently serious. Involving relevant policymakers and other consumers in the regular personnel evaluations of the analysts who serve them would strengthen the importance of such an effort and provide an incentive to individual analysts.
— CFR[3]
The proper relationship between intelligence gathering and policymaking sharply separates the two functions....Congress, not the administration, asked for the now-infamous October 2002 National Intelligence Estimate (NIE) on Iraq's unconventional weapons programs, although few members of Congress actually read it. (According to several congressional aides responsible for safeguarding the classified material, no more than six senators and only a handful of House members got beyond the five-page executive summary.) As the national intelligence officer for the Middle East, I was in charge of coordinating all of the intelligence community's assessments regarding Iraq; the first request I received from any administration policymaker for any such assessment was not until a year into the war
— P. Pillar[4]
Constraints
Many nations have an extremely restricted daily report that goes to the highest officials (e.g., the President's Daily Brief in the US), a more widely circulated daily that omits only the most sensitive source information, and a weekly summary at a lower classification level.
Beyond the classified equivalent of all-news channels, dissemination more generally is the process of distributing raw or finished intelligence to the consumers whose needs initiated the intelligence requirements.
Some parts of the intelligence community are reluctant to put their product onto even a classified web or wiki, due to concern that they cannot control dissemination once the material is in the published online format.[5]
While modern information storage simplifies the handling and dissemination of exceptionally sensitive material, especially if it never is committed to hard copy, the systematic handling of compartmented information probably is most associated with the Special Liaison Units (SLU) originally used for the distribution of British Ultra COMINT.[6] These units, and the equivalent US Special Security Offices (SSO), usually brought material to indoctrinated recipients, perhaps waited to answer questions, and took back the material. In some large headquarters, there was a special security reading room.
SLUs/SSOs had, from WWII on, dedicated, high-security communications links intended for their use alone. On occasion, senior commanders would send private messages over these channels.
Prior to US adoption of the British system, officer couriers brought COMINT to the White House and State Department, staying with the reader in most cases. For a time, after an intercept was found in the wastepaper basket of FDR's military aide, the Army and Navy cryptanalytic agencies unilaterally cut off White House access.[7]
In current US practice, there may be a special security office with the physical security of a Sensitive Compartmented Information Facility (SCIF) used for Sensitive Compartmented Information (SCI) and Special Access Programs (SAP) information inside an organization. Parts of intelligence agencies, or manufacturing facilities for Special Access Projects, may, in their entirety, be considered secure for such material. Computer workstations with access to special security systems, if the overall building (e.g., CIA or NSA headquarters) is not approved for them, may be kept in the SCIF.
Soviet military intelligence residenturas in embassies had a central record room, from which individual GRU officers would check out locked file boxes, carry them to curtained alcoves, do their work, reseal the box, and check it back in.[8]
Dissemination of intelligence products
Basic intelligence
Consumers often need to look up individual facts. Increasingly, this material is in hyperlinked online documents, such as Intellipedia, which exists at the unclassified but for official use only, SECRET, and TS/SCI levels. Different agencies have different attitudes toward such publication; a resident consultant said that the CIA Directorate of Intelligence is uncomfortable with other than an Originator Controlled (ORCON) model,[5] while NSA seems more willing to let information flow among cleared users.
Warning and current intelligence
At a given level (e.g., alliance, country, multinational coalition, major military command, tactical operations), current intelligence centers continually provide customers, including other intelligence organizations, with the all-source equivalent of daily newspapers and weekly news magazines. Current intelligence often is briefed directly to senior officers.
Current intelligence organizations need to be concerned with both tactical and strategic warning.[9]
The goal of tactical warning is to inform operational commanders of an event requiring immediate action. Warning of attacks is tactical, indicating that an opponent is not only preparing for war, but will attack in the near term.[10]
The goal of strategic warning is to prevent major surprises for policy officials.[9] Surprise manifests itself, in part, as a change in the probability that some conceived-of event will take place, for which contingency plans should be in place to respond to tactical warnings. A warning to national policymakers that a state or alliance intends war, or is on a course that substantially increases the risks of war and is taking steps to prepare for war, may concern:
- Attacks against the analyst's country and its interests, which can come from state or non-state actors, through military, terrorist, economic, information, and other means
- The breakdown of stability in an area or country critical to one's own side
- Key changes in adversary strategy and practice, especially with respect to terrorism or WMD proliferation[10]
In combination, indications and warning (I&W)[10] provide alerts of a potential or highly probable threat on the part of a hostile organization. Indications may suggest preparation, such as an unusual pace of preventive maintenance, or troop or supply transfers to shipping points. Warnings are more immediate, such as the deployment of units, in combat formation, near national borders. Positive indications and warning may call for more specific "news reports" and briefings.
Countries with modern intelligence and operations communications networks can, on detecting indications and warnings, initiate collaboration tools to help analysts share information. Specific actions, ranging from border crossings in large formations, to sorties of nuclear-capable ships and aircraft, to actual explosions, move at high priority on both operational and intelligence networks.
In the current environment of asymmetrical warfare by transnational groups, a given person dropping from sensors, or suddenly appearing outside their usual place, can be an indication of activity.
Tactical intelligence
According to the basic US Army manual on intelligence,[11] "Fundamental to decision-making in regard to any military operation is knowledge of the environment since it enables combatants to optimize the assets they have, to target their effort, to anticipate developments and husband their force." Current doctrine treats intelligence as "one of seven battlefield operating systems: intelligence, maneuver, fire support (FS), air defense, mobility/counter-mobility/survivability, combat service support (CSS), and command and control (C2) that enable commanders to build, employ, direct, and sustain combat power." Exercising the intelligence function produces information in several areas:
- Intelligence Preparation of the Battlefield (IPB). IPB helps the commander understand the current situation and its background.
- Situation development, which presents potential Enemy Courses of Action (ECOA) to the commander.
- Intelligence support to Force Protection (FP)
- Conduct police intelligence operations.
- At appropriate organizational levels, contribute to national intelligence through the Joint Military Intelligence Program (JMIP). Tactical Intelligence and Related Activities (TIARA) is the mirror image of military JMIP, providing direct support to war fighters
- With supporting intelligence units, synchronize intelligence development with surveillance and reconnaissance. The Operations Officer tasks the various units, with the Intelligence Officer defining objectives and analyzing the results.
- Manage distribution of national-level information (i.e., TENCAP, or Tactical Exploitation of National Capabilities), which may be Sensitive Compartmented Information (SCI) or Special Access Programs (SAP) information. SCI and SAP need special access controls.
"All these tasks take place within the dimensions of threat, political, unified action, land combat operations, information, and technology.
Starting with the basics: SALUTE
Standardization works very well for reporting when it is drilled in from the beginning. The US Army has a slogan, reinforced with video games, that "every soldier is a sensor." Standardized reporting formats for the most basic tactical observations work under pressure, such as the one defined by the acronym SALUTE, for a report about spotting the enemy:
- Size: how many men in the unit?
- Activity: what are they doing?
- Location: where are they? Give map coordinates if available, otherwise the best description available.
- Unit: who are they? Uniforms? Descriptions?
- Time: when did you see them?
- Equipment: what weapons do they have? Vehicles? Radios? Anything else distinctive?
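The six SALUTE fields map naturally onto a structured record. The class and rendered message format below are an illustrative sketch, not an official Army message format.

```python
from dataclasses import dataclass

@dataclass
class SaluteReport:
    size: str        # Size: how many personnel in the unit
    activity: str    # Activity: what they are doing
    location: str    # Location: map coordinates or best description
    unit: str        # Unit: identification, uniforms, descriptions
    time: str        # Time: when they were observed
    equipment: str   # Equipment: weapons, vehicles, radios, anything distinctive

    def to_message(self) -> str:
        """Render the report as a single transmittable line."""
        return ("SALUTE // S: {s} // A: {a} // L: {l} // "
                "U: {u} // T: {t} // E: {e}").format(
            s=self.size, a=self.activity, l=self.location,
            u=self.unit, t=self.time, e=self.equipment)
```

Because every field is mandatory, a reporter under pressure is prompted for each item in a fixed order, which is the point of the drilled-in format.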
Friction and standardization
With a trend toward multinational operations, there is even more opportunity for friction when different doctrines (e.g., US intelligence cycle vs. NATO CCIRM) and restricted-to-own-nationals information become involved. US units, through TENCAP, are able to access national-level assets such as IMINT satellites, but rarely can share that material with coalition partners.
For nuclear, biological, and chemical attacks, a CBRN Warning and Reporting System (CBRN WRS) is standardized among NATO countries and Australia. The basic reports are:
- CBRN 1-Initial report, used for passing basic data compiled at unit level.
- CBRN 2-Report used for passing evaluated data.
- CBRN 3-Report used for immediate warning of predicted contamination and hazard areas.
- CBRN 4-Report used for passing monitoring and survey results.
- CBRN 5-Report used for passing information on areas of actual contamination.
- CBRN 6-Report used for passing detailed information on chemical or biological attacks.
These reports, like many such reports, are forms with lines identified by letter. By text or by radio, one reads off letter codes, such as "CBRN 1. B(ravo) (my position). C(harlie) (direction of attack)" and so forth.
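The letter-coded form can be sketched as a small encoder. The phonetic spellings follow the example above; the meaning attached to each letter (Bravo as observer position, Charlie as direction of attack) is taken from that example only, and the full NATO line-item assignments are not reproduced here.

```python
# Phonetic alphabet subset used to read letter codes aloud.
PHONETIC = {"A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta",
            "F": "Foxtrot", "G": "Golf", "H": "Hotel"}

def format_cbrn_report(report_type: int, lines: dict) -> str:
    """Render a CBRN report in the spoken/text form read off by letter.

    `lines` maps a line letter (e.g. "B") to that line's content; the
    caller is responsible for using the correct letters for the report
    type, which this illustrative sketch does not validate.
    """
    parts = [f"CBRN {report_type}"]
    for letter in sorted(lines):
        parts.append(f"{letter} ({PHONETIC.get(letter, letter)}) {lines[letter]}")
    return ". ".join(parts) + "."
```

The fixed letter order makes a partially heard transmission recoverable: a receiver who misses one line can request only that letter.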
Beyond watch centers: Situation and interdisciplinary monitoring
As distinct from information needed for continuing tactical operations, which often will come from organic or attached military intelligence units, national-level crises will need continuing and focused situation monitoring at the national or multinational level. To be able to establish situation-specific task forces or topical centers, there must be adequate basic intelligence, pre-assignments to task forces, and appropriate collaborative tools (e.g., Intellipedia).
Situation intelligence products need to be relevant to the level of the HQ making use of them: strategic (EU HQ, Operation Command HQ), operational (in-theatre force command HQ) or tactical (HQ deploying a force component in a local operation).
In some cases, the existing current intelligence, or indications & warnings, centers of national agencies may be adequate. The next step would be to set up conference calls or other collaborative techniques to link the relevant specialists in operations centers of different agencies, different countries, or perhaps multinational centers. These collaborations will produce periodic situation reports (SITREPs) to be disseminated to appropriate policymakers, along with other daily intelligence updates and products.
Longer-range, more intractable intelligence challenges are addressed by grouping analytic and operational personnel from concerned agencies into close-knit functional units. Examples at both the national and multinational levels include counter-terrorism (e.g., the Singapore counter-terrorism center and the ASEAN counter-terrorism training center), transnational crime and drug enforcement (e.g., Interpol), and verification of compliance with nonproliferation treaties (e.g., the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), to be established).
Estimates
Estimates are coordinated analyses, by the intelligence community at the national level, or by the various intelligence staff sections at a command level, of the various courses of action available to an actor of interest, and the likelihood of each. Estimates primarily consider the unilateral actions of the other side, or its actions in response to well-defined actions of one's own side. Estimates are not strategic assessments, which examine a broader scope of strengths and weaknesses between one's own side and that of another.
Most countries with a well-developed system of estimates have different types, with different time scales, and representing either an intelligence community consensus, or possibly an ideologically oriented group that justifies a premade conclusion.
In the US, National Intelligence Estimates are detailed analyses, typically in the tens to hundreds of pages, produced once a reasonable consensus is reached. These documents may contain dissenting footnotes, also called reclamas, documenting a disagreement by a particular organization or specialists. NIEs are typically associated with standing task groups. Special National Intelligence Estimates (SNIE) are short-term community documents, in response to a specific customer requirement.
There is controversy over the US Office of Special Plans and whether it bypassed the intelligence community cross-checking process. Winston Churchill had his own intelligence advisor, Sir Desmond Morton, who might bypass the WWII estimative process.
Estimative intelligence helps policymakers to think strategically about long-term threats by discussing the implications of a range of possible outcomes and alternative scenarios. Estimates are not strictly operational, but fall into a special analysis and dissemination category because they usually involve multi-agency coordination.
In the topmost article of this series, Sun Tzu was quoted.[12] The Griffiths translation does speak of "estimates", but, according to Pillsbury,[13] a new translation from the Chinese Academy of Military Science argues that Griffiths mistranslated: "strategic assessments" is more accurate than "estimates".[14]
Between intelligence and action
Going beyond the pure intelligence is the assessment of known capabilities of one's own resources versus the best estimate of opponent capabilities.
Clausewitz' book On War[15] asks a simple question: How can the national leadership know how much force will be necessary to bring to bear against a potential enemy? Clausewitz replies, "We must gauge the character of . . . (the enemy) government and people and do the same in regard to our own. Finally, we must evaluate the political sympathies of other states and the effect the war may have on them.
Clausewitz warns that studying enemy weaknesses without considering one's own capacity to take advantage of those weaknesses is a mistake.
Many first heard of the term centers of gravity in the context of Desert Storm or COL John Warden, but Warden's contribution was adapting to air campaigns[16] the Clausewitz idea[15] of a "center of gravity", a feature that, if successfully attacked, can stop the enemy's war effort. Assessment requires considering the potential interaction of the two sides. According to Clausewitz, "One must keep the dominant characteristics of both belligerents in mind."
Net assessment
Net assessment is, first and foremost, the capability to analyse and measure the strategic assets of each warring side (for example, how many tanks or soldiers it has). Net assessment is not only the measurement of material matters, however; it also attempts to estimate moral assets, such as a people's will to fight in a given war, how far a state is capable of waging war, and how willing its generals and soldiers are to fight. In the US, strategic assessment is one step beyond intelligence estimates, although intelligence analysts may well participate in the subsequent process of strategic assessment. The result, called a net assessment in the US and the correlation of forces in the USSR, is not itself a contingency plan, but is critical to the formulation of plans. Strategic assessment, above all else, is an examination of interactions, rather than of the likely unilateral actions of another side or coalition.
Formalizing the role of a command historian was one of the first steps in the evolution of a true general staff,[17] as opposed to the personal entourage of a commander. Applying a proposed assessment methodology to historical data can, cautiously, validate that methodology. Caution is needed because contingencies can make historical behavior obsolete.
In fact, a widely praised explanation for the causes of war is precisely that strategic assessments were in conflict prior to the initiation of combat—one side seldom starts a war knowing in advance it will lose. [18] Thus, we may presume there are almost always miscalculations in strategic assessments of varying types according to the nature of the national leadership that made the assessment.[13]
How not to do net assessment
How have major nations conducted strategic assessments of the security environment? There is no one standard, but there are errors that repeat. The Office of Net Assessment in the US Department of Defense, under Andrew Marshall, commissioned a set of studies[13] comparing the pre-WWII equivalent of net assessment by seven countries from 1938 to 1940. Mistakes in these assessments become "lessons learned" relevant to any effort, such as Pillsbury's, to understand how the Chinese leadership conducts strategic assessment of its future security environment.
In his specification for the study, Marshall listed four motivations for assessment:
- Foreseeing potential conflicts
- Comparing strengths and predicting outcomes in given contingencies
- Monitoring current developments and being alerted to developing problems
- Warning of imminent military danger.
The main problem was how to frame assessments, particularly with regard to political-military factors such as who were the potential threats and potential allies, and what international alignments would be vital to the outcomes of future wars.
Simplistic force ratios and assumptions
An early but obsolete approach to estimation was a very simple quantitative one, using Lanchester's Laws. A simple comparison of forces of roughly equal capability can come down to number of soldiers, ratio of attackers to defenders, defensive quality of the terrain, and other basics. That sort of model breaks down, however, when the forces are dissimilar in quality of leadership, troop morale and initiative, or doctrine and technology. Japan made this mistake as it prepared for WWII, putting garrisons on a large number of islands, and considering its battleships one of its centers of gravity.
Japan's early WWII strategy towards the US, for example, made numerous assumptions intended to lead the US Fleet to sail into the Western Pacific, to fight, on advantageous terms for the Japanese, a "Decisive Battle". Unfortunately for the Japanese, the US did not choose to make battleships its primary arm, fight for every Japanese outpost, use submarines only against warships, or require a return to a fleet anchorage for regrouping. Intelligence was not a preferred assignment in the Japanese military, and their operations planners tended to make optimistic assumptions. An eccentric but incredibly creative US Marine, Earl Ellis, however, did foresee the US strategy for the Pacific in the 1920s, and set a standard for long-term estimates and net assessment.[19]
France, as well as Japan, used overly simplistic assumptions and calculations in assessing the 1939 situation with Germany. Army manpower and equipment were roughly equal, with a slight advantage to France. French tanks, individually, were superior in weapons and protection to their German equivalent. German air power, however, was nearly double that of France. At its highest governmental levels, France did not understand the way in which Germany would combine tanks and aircraft, closely coordinated, and drive quickly into the rear, with German infantry securing the breaches. Ironically, a fairly junior officer, named Charles de Gaulle, had described just such tactics as Heinz Guderian conceived in Blitzkrieg.
In the areas of breakthrough, the Germans achieved at least a 4:1 advantage by not advancing on a broad front. Numeric ratios alone, as in the Lanchester equations, could not deal with concentration of force, or the force multiplier effect of coordinated air and armor. France also failed to consider that Germany might first defeat East European allies such as Poland and Czechoslovakia. The lesson for intelligence here is not to assume a very limited range of allies, or that certain avenues of attack, such as the Germans through the Ardennes or the Low Countries, are impossible. The analyst has the responsibility to let consumers, who may be subject matter experts, know about unlikely scenarios, as well as giving the consumers the detail they want on the likely scenarios.
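The simple quantitative approach above, and the value of concentration that the German breakthroughs exploited, can be sketched with a numerical integration of Lanchester's square law. The unit strengths and effectiveness coefficients below are arbitrary illustrations, not historical data.

```python
def lanchester_square(blue, red, blue_eff=0.01, red_eff=0.01, dt=0.01):
    """Integrate Lanchester's square law, dB/dt = -r*R and dR/dt = -b*B,
    by forward Euler until one side is annihilated.

    Returns (blue_survivors, red_survivors); the continuous model
    predicts the winner's survivors as sqrt(B^2 - R^2) for equal
    effectiveness coefficients.
    """
    while blue > 0 and red > 0:
        blue, red = blue - red_eff * red * dt, red - blue_eff * blue * dt
    return max(blue, 0.0), max(red, 0.0)

# Defeat in detail: 1000 troops lose to 1200 met all at once, but win
# if the 1200 are split into two 600-strong groups engaged one after
# the other (about 800 survivors after the first fight, then about 529).
```

This is exactly the kind of calculation that breaks down once leadership, morale, doctrine, or technology differ between the sides, which is the lesson of the French and Japanese assessments above.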
Psychological and diplomatic assumptions
Another mistake is to assume which nations and groups will see the country doing an assessment as a friend. In planning for WWII, the United States developed the "Rainbow" series of war plans; the central assumption was that Japan would be the only significant enemy. While this scenario was war-gamed again and again, there had been little analysis of a two-front war with the Axis.
For WWII, Britain had assumed France would be an effective ally, rather than being quickly defeated. Britain also did not consider the effect of the Soviet Union as a second front. This was understandable given the initial Molotov–Ribbentrop Pact, but Germany's attack on the USSR forced a reevaluation of the European balance of power.
Not all WWII assumptions were flawed. The USSR assumed Japanese neutrality toward the Soviets, which was, indeed, the case until the USSR declared war at the very end of WWII.
In 1990–1991, while the US had assumed it would be able to base troops in Saudi Arabia to meet a threat to that nation, such as the invasion of Kuwait, that was not a prior commitment by the Saudis. Even when preliminary negotiations were positive, the size of the proposed American force shocked the Saudis. For a time, until the King was convinced, the US assumption was just that.[20]
Iraqis did not greet American troops with flowers in 2003.
Assumptions about opposing decision making
Robert S. McNamara, US Secretary of Defense during most of the Vietnam War, came from a background of quantitative analysis in both conventional warfare and industry, but appeared to assume that the North Vietnamese leadership would use logic similar to his own.[21] Lyndon B. Johnson, however, personalized conflict, seeing Ho Chi Minh as someone to dominate. Both assumptions were severely flawed.[22] Intelligence analysts need to base estimates on what is known about the opposition, not on what one's own leadership would like the opposition to be. Unfortunately, as McMaster points out, Johnson and McNamara tended to ignore intelligence that contradicted their preconceptions.
Assumptions about doctrine and capabilities
It can be dangerous to assume the wartime capabilities of an opponent based on its published doctrines, known training exercises, deployments, and news reporting. There was widespread belief that some of the key US weapons systems, such as the M1A1 Abrams tank, AH-64 Apache helicopter, stealth technology and precision-guided munitions, would not be effective in the deserts of Saudi Arabia, Kuwait and Iraq. While detailed postwar analysis showed that early reports about weapons effectiveness were overstated, precision guidance was a significant force multiplier. An unexpected force multiplier was GPS, which let Coalition soldiers go off-road and move through the desert as ships move through the seas, while Iraqis stayed on roads for ease of navigation.[20]
Performing strategic assessment
What goes into strategic assessment? A RAND Corporation study[23] starts with assessing national power, based on resources, the nation's ability to use those resources, and the capabilities of its standing military together with how far that military could be multiplied by national mobilization.

This study, however, was focused on conventional warfare, and did not consider the much more common military and nonmilitary national options other than war, variously known as nation-building, peace operations,[24][25] or stability operations.[26]
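The RAND framing of national power, resources, the ability to convert them, and standing military strength scaled by mobilization, can be caricatured as a simple composite index. The multiplicative form, scales, and numbers below are purely illustrative assumptions, not the study's actual methodology.

```python
def national_power_index(resources, conversion_capability,
                         military_capability, mobilization_multiplier=1.0):
    """Toy composite index of national power.

    resources, conversion_capability, and military_capability are on an
    arbitrary 0-1 scale; mobilization_multiplier (>= 1) represents how
    far mobilization could multiply the standing military. A zero in
    any factor zeroes the index, reflecting the idea that resources a
    nation cannot convert into usable strength contribute nothing.
    """
    return resources * conversion_capability * (
        military_capability * mobilization_multiplier)
```

A multiplicative rather than additive combination is one deliberate (and debatable) modeling choice: it encodes the claim that the factors are complements, not substitutes.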
While many of his ideas are controversial, Thomas P.M. Barnett created a paradigm that better combines the military and nonmilitary aspects. His fundamental model says "The problem with most discussion of globalization is that too many experts treat it as a binary outcome: Either it is great and sweeping the planet, or it is horrid and failing humanity everywhere. Neither view really works, because globalization as a historical process is simply too big and too complex for such summary judgments. Instead, this new world must be defined by where globalization has truly taken root and where it has not.
"Show me where globalization is thick with network connectivity, financial transactions, liberal media flows, and collective security, and I will show you regions featuring stable governments, rising standards of living, and more deaths by suicide than murder. These parts of the world I call the Functioning Core, or Core. But show me where globalization is thinning or just plain absent, and I will show you regions plagued by politically repressive regimes, widespread poverty and disease, routine mass murder, and—most important—the chronic conflicts that incubate the next generation of global terrorists. These parts of the world I call the Non-Integrating Gap, or Gap".[27] Barnett states the approach as creating two forces, "Leviathan" (a term from Thomas Hobbes) and "System Administrator".[28]
The system administrator force focuses on connecting nations to the "Core". Typically, it would be a multinational organization, not primarily a military force, although containing police and security forces and having regular military force available. "Leviathan" would be a "First World", network-centric combat force that can take down the conventional military of almost any nation. While Barnett's arguments for the 2003 invasion of Iraq are questionable in hindsight, one can also observe that the invasion used Leviathan alone; the outcome might have been different had a System Administrator force, with adequate resources and legitimacy, followed Leviathan.
US contemporary
In the broadest definition, "strategic assessment" implies a forecast of peacetime and wartime competition between two nations or two alliances, including the identification of enemy vulnerabilities and weaknesses in comparison with the strengths and advantages of one's own side. Many lessons have been learned, including perspective on balancing the security of information against its use. In the 1950s, RAND Corporation analysts studying Soviet power for the Defense Department were producing badly skewed results that portrayed the Soviets as more dangerous than they actually were. The analysts were not allowed to know, for security reasons, that Soviet Bison and Bear bombers had critical reliability problems: more bombers might crash in the Arctic than arrive in North America.[29] US strategies, therefore, were more cautious than the real balance warranted. When the senior commanders and the intelligence community eventually discovered the effects of this disconnect, it prompted a reexamination of the tradeoffs between keeping intelligence absolutely secure and producing intelligence that could actually affect policy.[29]
The practice of strategic assessment by the U.S. Department of Defense in the past 25 years has been divided into six categories of studies and analysis:[13]
National/Multinational military balance
"Measure and forecast trends in various military balances, such as the maritime balance, the Northeast Asian balance, the power-projection balance, the strategic nuclear balance, the Sino-Soviet military balance, and the European military balance between NATO and the former Warsaw Pact. Some of these studies look 20 or 30 years into the future to examine trends and discontinuities in technology, economic indicators, and other factors."
Weapons and force comparisons
Weapons and force comparisons attempt to produce judgments about military effectiveness; such efforts sometimes "revealed U.S. and Soviet differences in measuring combat effectiveness and often showed the contrast between what each side considered important in combat."
Validation
Validation examines lessons of the past, using historical evaluations as well as gathering data on the past performance of weapons in the context of specific conflicts.
Red Team
Red Team analyses examine the perceptions of foreign decision makers and even the process by which foreign institutions make strategic assessments. As Andrew Marshall, Director of Net Assessment, wrote in 1982 about assessing the former Soviet Union: "A major component of any assessment of the adequacy of the strategic balance should be our best approximation of a Soviet-style assessment of the strategic balance. But this must not be the standard U.S. calculations done with slightly different assumptions . . . rather it should be, to the extent possible, an assessment structured as the Soviets would structure it, using those scenarios they see as most likely and their criteria and ways of measuring outcomes . . . the Soviet calculations are likely to make different assumptions about scenarios and objectives, focus attention upon different variables, include both long-range and theater forces (conventional as well as nuclear), and may, at the technical assessment level, perform different calculations, use different measures of effectiveness, and perhaps use different assessment processes and methods. The result is that Soviet assessments may substantially differ from American assessments." Studies analyzing perceptions are difficult because the data used often must be inferred from public writings and speeches. Implicit biases rooted in American education and culture must also be avoided.
Tool research
The search for new analytical tools includes, for example, developing "firepower scores" that may be used by the Air Force and Navy as well as by the ground forces, which originated the concept. In the early 1980s, a multiyear effort was funded at the RAND Corporation to develop the RAND Strategy Assessment System (RSAS) as a flexible analytic device for examining combat outcomes of alternative scenarios.
Assessing alternatives
Professional analyses of particular issues of concern to the Secretary of Defense may involve identifying the competitive advantages and distinctive competencies of each side's military force posture; highlighting important trends that may change a long-term balance; identifying future opportunities and risks in the military competition; and appraising the strengths and weaknesses of U.S. forces in light of long-term shifts in the security environment. Past practitioners from the Office of the Secretary of Defense have underscored the need for American strategic assessment to focus on long-term historical patterns.
Russian contemporary
The most relevant comparison for China may be the Soviet Union, but this is also the most secret. As Professor Earl Ziemke put it, after three decades of research on Soviet military affairs, even when he tried to use historical data to look back from 1990 to 1940:
The Soviet net assessment process cannot be directly observed. Like a dark object in outer space, its probable nature can be discerned only from interactions with visible surroundings. Fortunately, its rigidly secret environment has been somewhat subject to countervailing conditions. . . . Tukhachevsky and his associates conducted relatively open discussion in print.[30]
Chinese contemporary
There is intense secrecy about Chinese national security matters, but comparisons with other nations' processes of strategic assessment can increase our understanding of how China may assess its future security environment. By viewing China in comparative perspective, it may be possible to understand better how China deals with its assessment problems.[30]
Comparing the Soviet structure of the 1930s with Chinese materials from the 1990s, a number of similarities, at least in institutional roles and in the vocabulary of Marxism-Leninism, can be seen in contemporary China. The leader of the Communist Party publicly presented a global strategic assessment to periodic Communist Party Congresses. The authors of the military portions of the assessment came from two institutions that were prominent in Moscow in the 1930s and have counterparts in Beijing today: the General Staff Academy and the National War College.
Another similarity was that the Communist Party leader chaired a defense council or main military committee, and in these capacities attended peacetime military exercises and was involved in deciding the details of military strategy, weapons acquisition, and war planning.[30]
In the US, there are independent or ideologically affiliated "think tanks", as well as government contract research organizations, both not-for-profit and for-profit. The primary difference between Chinese institutes and American research institutes is their "ownership": in China, research institutes are "owned" by the major institutional players in the national security decision-making process. Members of these institutes often decline to discuss in any detail the exact nature of their internal reports. They are not puppets, however, and many research institutions are important in their own right for the creative ideas they produce. Their leaders carry great prestige and hold high rank in the Communist Party.[30]