Structure and Effectiveness of Intelligence Organizations

Robert Behrman
Engineering and Public Policy
Carnegie Mellon University
Pittsburgh, PA 15213
412-268-1876
[email protected]


Abstract: This paper lays out an abstract model for analyzing the structure and function of intelligence organizations and the activities of units within them. Metrics of intelligence organization effectiveness, derived from the intelligence and decision-making literatures, are presented, followed by social network and computational methods for analyzing the model in terms of those metrics. Methods of validating the model are discussed. Implications of this model for the analysis of intelligence and intelligence-using organizations are discussed, and areas in need of further research are identified. This study, although preliminary, provides an initial attempt to model and analyze intelligence organizations in terms of their effectiveness.


1. Introduction

Popular concern over well-known intelligence failures, widespread disagreement over whether the current intelligence and law enforcement infrastructure is capable of handling the additional demands of the counter-terrorism mission, and recognition of a lack of interagency cooperation have prompted concern over the structure and function of the United States intelligence community. New missions and a global and political climate different from that of the Cold War have placed additional and different demands on intelligence agencies: they must be able to collect against new targets, many of which require different collection methods; meet new or different international and interagency sharing requirements, often with nations with which the United States does not have a long-standing cooperative intelligence relationship; cooperate with civilian service agencies and law enforcement agencies; and do all of this while continuing to meet military and tactical intelligence requirements, maintaining efficiency, and operating under closer public scrutiny. To meet these demands, many solutions are being discussed: increases in the scope, power, authority, and size of the national intelligence structure; a restructuring and recentralization of the intelligence community, for example through the creation of a new agency to handle domestic intelligence (Berkowitz and Goodman, 2000); establishing intelligence coordinating positions ("intelligence ombudsmen" or liaisons); or major changes in structure, such as a "networked intelligence structure" (Berkowitz and Goodman, 2000; Alberts, Garstka, and Stein, 1999; Comfort, 2002). Nor is this discussion of intelligence confined to strategic and military intelligence: the business sector, notwithstanding its own cloak-and-dagger stories, has long invested in intelligence collection, research, and analysis designed to increase the accuracy of business decisions; in short, in intelligence organizations. All of these solutions involve structural changes to the command and communication networks of intelligence organizations, but there has been little analysis of these networks in either the network analysis or the organization theory literature.

This article will discuss a method for a formal, abstract analysis of the structure and function of intelligence organizations, the activities of the units within them, and the correlation between these and the effectiveness of the organization. The first part of this paper will develop an abstract model of intelligence organizations and define terms used in the analysis. The second part will discuss how to measure the effectiveness of intelligence organizations, and will discuss the application of social network and computational analysis methods to the model in order to generate these measurements. In the third part, methods of applying and validating this model will be discussed, weaknesses in the model and its theoretical backing will be identified, and possible areas for future research and experimentation will be mentioned.

2. Modeling Intelligence Organization Structure

The action of intelligence organizations is typically modeled in terms of the "intelligence cycle": plan, collect, process, produce, disseminate, repeat. Though there is skepticism within the literature about the usefulness of this formal process, the planning, collection, and processing phases all need to be modeled as capabilities of the organizations. During the planning phase, intelligence consumers generate information requirements and send them to intelligence organizations. These information requirements are used to generate tasks for units within the intelligence organization, which are then prioritized and sent to the units that can handle them. During the collection phase of the intelligence cycle, information is gathered by collection assets, and reports are generated and sent to units that use them. For simplicity, the 'process', 'produce', and 'disseminate' phases of the intelligence cycle are modeled as one phase, which this paper refers to as the processing phase. During this phase, reports are 'read' by processors and either sent to intelligence consumers or databases or discarded. This simplification of the process, produce, and disseminate phases is supported by the intelligence literature: Berkowitz and Goodman group the process and production phases together in a phase called analysis, and separate the dissemination phase (Berkowitz and Goodman, 1989); however, it will become clear during the forthcoming discussion of communication ties within the model why this paper chooses to model the dissemination phase within the processing phase. The three phases identified in this discussion of the intelligence cycle correspond to the functions of three different types of units within the model: decision makers, collectors, and processors. By modeling these phases as the actions of specific units, multiple intelligence cycles within specific sectors of the intelligence organization can be identified, and failures or inefficiencies in the operation of the organization can be located.

The model that will be developed is a 'sociogram' of the type discussed in Scott (2000); it will consist of various nodes representing units within the intelligence organization, linked by ties that represent both communication networks and hierarchical position. Additionally, regions of the sociogram will be identified as agencies, the intelligence organization, or the environment. All elements of the graph (ties, nodes, regions) will have 'attributes': parameters governing the handling of phenomena by the element.
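
To make these pieces concrete, a minimal Python sketch (all names illustrative, not taken from the source) might represent the sociogram as dictionaries of nodes, directed ties, and regions, each carrying an attributes map:

```python
# A minimal sketch of the sociogram: nodes, directed ties, and regions,
# each element carrying an 'attributes' dict of parameters.
sociogram = {
    "nodes": {
        "dm_1":   {"role": "decision_maker", "attributes": {"problems": [1]}},
        "coll_1": {"role": "collector", "attributes": {"criterion": 1}},
        "proc_1": {"role": "processor", "attributes": {}},
    },
    "ties": [
        # directed: 'from' can task/report to 'to', not necessarily vice versa
        {"from": "dm_1", "to": "coll_1", "kind": "tasking",
         "attributes": {"time": 1.0, "medium": "email", "security": 2}},
        {"from": "coll_1", "to": "proc_1", "kind": "reporting",
         "attributes": {"time": 2.0, "medium": "radio", "security": 3}},
    ],
    "regions": {"agency_A": {"members": ["dm_1", "coll_1", "proc_1"]}},
}
```

This dictionary form is only one possible encoding; any graph library that supports directed edges with attributes would serve equally well.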

Two types of ties are modeled in an intelligence organization: tasking ties and reporting ties. These are directed ties, in that A being able to task or report to B does not imply that B can task or report to A. These ties do not merely indicate communication; instead they indicate a combination of communication and, along with certain attributes of the phenomena that travel along them and of the units at their ends, hierarchical position and command relationships. The presence of a tasking tie indicates that a unit can issue tasks to another unit, which the receiving unit is compelled to either obey or forward as appropriate. 'Obey' is determined by the function of the unit that receives the task; for example, collectors obey tasks by collecting the information required by the task or by queuing the task for later execution, while processors obey tasks by producing certain reports from received and stored information. 'Forward,' in the case of tasks, means that the unit can send the task to another unit to which it has tasking ties. The presence of a reporting tie indicates that a unit can send an intelligence report to another unit, which the receiving unit can use, forward, or ignore as appropriate. 'Use' is determined by the function of the unit: intelligence consumers receive reports and use them to make decisions, while processors use reports to produce synthesized reports that are then sent to intelligence consumers or stored in databases. Ties in the model have three attributes: time, type, and security. Time is the amount of time it takes a phenomenon to move along the tie. Type is a descriptor of the tie (e.g. "radio," "email," or "shout across the room") that may be useful to certain non-quantitative analyses of the network. Security is a measure indicating how secure the tie is against environmental organizations compromising or overhearing the communication. Tasks with certain criteria may only travel along ties with a certain security, and a tie should be more secure than the sensitivity of the reports traveling along it.
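
A tie with its three attributes, and the security constraint on what it may carry, can be sketched as follows (a sketch under the assumption that "more secure than" means strictly greater; names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Tie:
    """A directed tasking or reporting tie between two units."""
    source: str    # unit the phenomenon travels from
    target: str    # unit the phenomenon travels to
    kind: str      # "tasking" or "reporting"
    time: float    # transit time for a phenomenon on this tie
    medium: str    # descriptive type, e.g. "radio", "email"
    security: int  # how secure the tie is against compromise

def can_carry(tie: Tie, sensitivity: int) -> bool:
    """A tie should be more secure than the sensitivity of reports on it
    (strict inequality is an assumption of this sketch)."""
    return tie.security > sensitivity
```

For example, a report of sensitivity 2 may travel on a tie of security 3, but a report of sensitivity 3 may not.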

Certain phenomena in the intelligence organization model have been mentioned repeatedly but not discussed at length: tasks and reports. Tasks indicate requests for information or action, generated by decision makers and sent to other units within the intelligence organization for execution. Tasks can take the character of formal commands (subordinate units, such as processors and collectors, are enjoined to obey them) or of requests (decision makers who receive tasks can choose whether or not to forward them to units subordinate to them, or can alter their priority). Tasks travel 'down' tasking ties from an originating or forwarding unit to a receiving unit that forwards the task, queues it, or completes it. Tasks have five attributes: criterion, problem, time, deadline, and priority. Criterion indicates the type of unit that must accomplish the task (for example, a collector of type 1 or an actor of type 3). Problem indicates which problem the task is intended to generate reports answering. Time is a parameter affecting the amount of time it takes a unit to finish the task. Deadline indicates when the task must be completed. Priority indicates whether the unit will attempt to finish the task before or after other tasks in its queue. Reports indicate any unit of intelligence information that is to be communicated, from formal reports, to oral conversations, to analytic products such as planning maps. Reports travel 'up' reporting ties from an originating or forwarding unit to a receiving unit, which either uses the report, queues it for reading or forwarding later, forwards it, or discards it. Reports have seven attributes: criterion, problem, accuracy, perishability, sensitivity, length, and report number. Criterion indicates which sort of collection asset originally generated the report. Problem indicates which decision maker needs the information in the report. Accuracy indicates how useful the report is to the decision maker. Perishability indicates how long it takes for the report to become less accurate or worthless. Sensitivity indicates what type of reporting tie is suitable for transmitting the report. Length indicates how long it takes a unit to consider the information in the report. Report number is an arbitrary parameter that differentiates the information in the report; decision makers can only use each report number once for each problem. Note that report numbers are not necessarily unique: a decision maker may receive the same report from two different units, or multiple collectors may notice the exact same information. Note also that phenomena can be copied indiscriminately: tasks can be assigned to more than one unit, and reports can be disseminated to multiple units.
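
The two phenomena and their attributes, as listed above, can be sketched as data structures, together with the rule that each report number is usable only once per problem (field names mirror the text; the deduplication mechanism is an assumption of this sketch):

```python
from dataclasses import dataclass

@dataclass
class Task:
    criterion: int   # type of unit that must accomplish the task
    problem: int     # problem the resulting reports should answer
    time: float      # work time a unit needs to finish the task
    deadline: float  # latest completion time
    priority: int    # queue ordering: higher runs first

@dataclass(frozen=True)
class Report:
    criterion: int        # collection asset type that generated it
    problem: int          # decision maker problem it addresses
    accuracy: float       # usefulness to the decision maker
    perishability: float  # time until accuracy degrades
    sensitivity: int      # constrains which reporting ties may carry it
    length: float         # time a unit needs to read it
    number: int           # not unique; usable once per problem

# A decision maker may use each report number only once per problem,
# even if the same report arrives via two different units:
seen = set()

def usable(r: Report) -> bool:
    key = (r.problem, r.number)
    if key in seen:
        return False
    seen.add(key)
    return True
```

Copying a phenomenon then simply means constructing a new object with the same attribute values.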

Nodes within this model do not correspond to people, per se; instead they correspond to duty positions and functions. A node can indicate a single person (e.g. the president or a CEO, in the case of a decision maker) or a group of people, such as an analysis team in the case of a processor. For this reason, nodes are usually referred to in this paper as units. In certain cases, for example when modeling very small organizations, the same person or group may be represented by more than one node; for instance, a market researcher (processor) who also conducts surveys for data (collector). Nodes have functions, which describe their operations, and attributes, which describe their handling of phenomena.

The most important units in the intelligence organization are decision makers; proper representation of decision makers is critical to the modeling of the intelligence process, since they are its natural end. Although not specifically modeled per se (except in the simulation), decision makers have some method by which they go about making decisions. Decision makers can use intelligence provided by the intelligence organization to inform this process, and metrics of intelligence organization effectiveness (to be discussed later) will almost certainly deal with modeling and evaluating it. In its simplest form, this decision-making process is modeled by the ability to generate tasks; decision makers are the only units in the intelligence organization that can originate tasks on their own. For the purpose of modeling, decision makers do not carry out tasks; instead they forward tasks to the units that carry them out (processors, collectors, actors). Because they are the origin of tasks, decision makers always have tasking ties to at least one other unit (which can be of any type). Decision makers can also receive tasking ties from other units, but since they cannot execute tasks on their own they must forward these tasks to other units within their control. Because they use reports, decision makers also tend to receive reporting ties. It is possible to conceive of a situation in which a decision maker does not receive reporting ties, but modeling such a situation adds nothing to an understanding of the intelligence organization. Decision makers have three attributes: problems, power, and comprehension. Problems indicate issues or topics that the decision maker is responsible for making decisions on. For each problem, a decision maker may need certain amounts of accurate information (that is, a sum of the comprehended accuracy of used reports) from certain criteria of collectors to make a 'good' decision. Not all decision makers have problems; some are included in the organization solely to plan the forwarding of tasks, which is modeled as a separate function. Power is a relative parameter that indicates how the decision maker can handle tasks forwarded from other decision makers: if the receiving decision maker has a greater power value than the sending decision maker, it can handle the task as it desires; if the receiving decision maker has a lower power value, it may be forced to increase the priority of the received tasks or to decrease the priority of other tasks that it will forward. Finally, comprehension is a parameter that affects a random distribution of how much of the accuracy of a received report the decision maker can apply to its problems; a decision maker with low comprehension is more likely to extract less information from a report than a decision maker with high comprehension. Decision makers have two primary functions in this model: they generate tasks and they forward tasks. These two functions correspond to the 'plan' phase of the intelligence cycle: decision makers make requests for information that they then turn into specific tasks for units within the intelligence organization. A decision maker can make a decision (that is, use the decision-making process) to task units to collect information to fulfill the information requirements of its problems. A decision maker who receives a task from another unit can choose to forward it to a unit that it has tasking ties to, to ignore it, or to change certain attributes of it (such as its priority). This allows decision makers to choose the relative importance of subordinate decision makers' requirements, or to choose how cooperative they want to be with other decision makers.
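
The effect of the power attribute on a forwarded task's priority can be sketched as a single rule (the exact forcing rule below, keeping the higher of the two priorities when the sender outranks the receiver, is an assumption; the text only says the receiver "may be forced" to raise the priority):

```python
def received_priority(task_priority: int, sender_power: int,
                      receiver_power: int, desired_priority: int) -> int:
    """Priority a decision maker assigns to a task forwarded by another
    decision maker, per the 'power' attribute described above."""
    if receiver_power > sender_power:
        # More powerful receiver: handles the task as it desires.
        return desired_priority
    # Less (or equally) powerful receiver: may be forced to keep or
    # raise the received task's priority.
    return max(task_priority, desired_priority)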

The next type of unit within the intelligence organization to be considered is the collector. Collectors act as an interface between the intelligence organization and the environment: they notice changes in the environment and respond to specific tasks to gather information on it. Collectors receive tasks from decision makers and send reports to processors or, in certain circumstances, directly to decision makers. Collectors have criteria; the criterion of a specific collector indicates which kinds of tasks it is capable of responding to. Collectors' sole function is to generate reports: they receive tasks from other units, 'work on' them for an amount of time dependent on the time attribute of the task, and then generate a report on the task. Collectors that receive tasks while working on other tasks either queue their current task and work on the new one, or queue the new task, depending on the relative priority of the current and received task(s). Note that it is possible to model collectors that do not receive tasks and that randomly generate reports that they send to processors. This could model 'listening' to the environment or access to open-source information (the decision makers may receive so many reports that they have to task collectors to read the newspaper).
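
The priority-based queueing behavior of a collector can be sketched with a heap (a sketch; tasks are plain dicts here, and preemption on strictly higher priority is an assumption):

```python
import heapq

class Collector:
    """Works on one task at a time; a higher-priority arrival preempts
    the current task, which is queued for later."""
    def __init__(self, criterion: int):
        self.criterion = criterion
        self.queue = []      # min-heap keyed on negated priority
        self.current = None  # task being worked on, or None

    def receive(self, task: dict) -> None:
        if self.current is None:
            self.current = task
        elif task["priority"] > self.current["priority"]:
            # Preempt: queue what it was doing, work on the new task.
            heapq.heappush(self.queue,
                           (-self.current["priority"], id(self.current), self.current))
            self.current = task
        else:
            heapq.heappush(self.queue, (-task["priority"], id(task), task))

    def finish_current(self):
        """On completing a task, resume the highest-priority queued task."""
        if self.queue:
            _, _, nxt = heapq.heappop(self.queue)
            self.current = nxt
        else:
            self.current = None
```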

Processors perform the process, produce, and disseminate elements of the intelligence cycle. That is to say, they receive information from collectors; process it for worth, usefulness, and so on; produce intelligence summaries or reports for intelligence consumers; and send reports to databases for storage and/or send referential information to referential databases. Processors receive tasks from decision makers and automatically forward them to the databases and referential databases that they have access to. Processors receive reports from collectors or databases, and send reports to processors, databases, or decision makers. Processors have two functions: synthesizing received reports into analyzed intelligence products, and storing information in databases and referential databases. Processors can receive tasks from decision makers calling for analyzed intelligence products of a certain type (that is, matching the problem attribute of the task). They then query all available databases for reports on that problem, read these reports (that is, 'work on' the reports for an amount of time equal to the sum of the lengths of the reports), and combine the information from these reports (that is, sum the accuracy of the reports) into one report with a shorter length value, which they send to the decision maker and/or to a database of synthesized reports. Processors can also forward received reports to databases, which they are assumed to do automatically. Processors do not necessarily have any attributes, though they could be coded specifically for problem or criterion. Additionally, a 'skill' attribute might be appropriate for certain processors, representing their ability to combine the most information into synthesized reports.
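
The synthesis function described above can be sketched directly from the text: read time is the sum of the input lengths, accuracy is summed (scaled here by an optional 'skill' factor, which is an assumption), and the product has a shorter length:

```python
def synthesize(reports, problem, skill=1.0):
    """Combine all available reports on a problem into one product.
    Returns the synthesized report and the processor's work time."""
    relevant = [r for r in reports if r["problem"] == problem]
    work_time = sum(r["length"] for r in relevant)  # time to read them all
    combined = {
        "problem": problem,
        "accuracy": skill * sum(r["accuracy"] for r in relevant),
        # "Shorter length value": here, the minimum input length
        # (the exact rule is an assumption of this sketch).
        "length": min((r["length"] for r in relevant), default=0.0),
    }
    return combined, work_time
```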

Databases represent the memory stores of the intelligence organization: they contain intelligence information stored for later retrieval. Each database is contributed to by one or more processors. For example, each processor may have a database that it maintains privately, while there may also be a shared database contributed to by every processor in the organization. Similarly, some or all processors or decision makers may be able to query a database for information. Databases use received reports by storing them and by forwarding a copy of the report (that is, a new report with the same attributes). Databases receive tasking ties from every unit that has access to the information in them, and have a corresponding reporting tie back to each tasking unit. Databases receive reporting ties from the processors responsible for maintaining them. Databases have a criterion attribute and/or a problem attribute, which represents the type of information that can be stored in them; a size attribute, which indicates the number of reports that can be stored in them; and a time attribute, which indicates how long it takes to retrieve and forward the information stored within.
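
The database unit's behavior, storing up to its size limit and answering queries with copies, can be sketched as follows (reports are plain dicts; rejecting stores when full is an assumption, since the text does not say what happens at capacity):

```python
class Database:
    """Memory store: holds at most `size` reports; retrieval takes `time`."""
    def __init__(self, size: int, time: float):
        self.size = size
        self.time = time
        self.reports = []

    def store(self, report: dict) -> bool:
        if len(self.reports) < self.size:
            self.reports.append(report)
            return True
        return False  # full: the report is not stored (an assumption)

    def query(self, problem: int) -> list:
        # Forward *copies* of matching reports: new reports
        # with the same attributes.
        return [dict(r) for r in self.reports if r["problem"] == problem]
```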

Referential databases contain information about units within the organization itself. They may contain contact information for units within the organization, to facilitate communication between units, or they may contain information on which reports are stored in which database. To model this, referential databases are represented as receiving tasking ties from units with access to them and as having tasking ties to the units that they contain information on (for example, databases or processors). Referential databases forward received tasks to the appropriate unit, and forward generated reports back to the unit that queried them. Referential databases are primarily included as a means of modeling large organizations; they can represent phone directories or information directories. Referential databases have the same attributes as databases, though here the size attribute models the number of units that the referential database can forward to, and the time attribute indicates how long it takes to forward a task to the receiving unit. (The time attribute does not take into account the forwarding of the report, since this is, in reality, a communication between the tasking unit and the final tasked unit; that time is the sum of the reporting times between the referential database and the two units.)

There is one other type of node that can be modeled: actors. These units do not generate reports or tasks, so they are largely unimportant to a structural understanding of the intelligence organization, but they are important to model in order to complete the intelligence organization's environmental interface and for use in certain analysis and metric methods. Actors represent the other half of that interface: the ability of the intelligence organization to affect the environment. Actors receive tasks from decision makers and carry them out. They have no specific attributes, except possibly a problem coding (indicating the type of problems they can act on), and they do not generate any other effect within the intelligence organization.

The last things that need to be discussed in the modeling of intelligence organizations are the sub-organizations and agencies within the intelligence organization, and the environment that the intelligence organization operates within.

The intelligence organization is the entire organization that is modeled; it corresponds to the cooperative organization of intelligence agencies and units that attempts to provide intelligence to intelligence consumers. Though the intelligence consumers that the intelligence organization serves may not be physically or formally "within" the organization, they are modeled as part of it in order to understand the way that the intelligence organization serves them and the way that their requests are handled. For example, the U.S. national intelligence structure is an intelligence organization designed to serve national-level intelligence consumers (the president, the legislature, etc.).

Agencies are individual, specific, formal entities within an intelligence organization. An agency is composed of at least one decision maker and a number of other assets (collectors, processors, actors, or other decision makers). Agencies are useful for designating command relationships and other 'political' relationships within an intelligence organization; for example, one decision maker may wish to task an asset within another agency, but since it does not have direct control of the asset it must request that a decision maker within that agency forward the task. Both agencies and the intelligence organization as a whole have 'rules,' which are overarching guidelines for the handling of phenomena within the organization. For example, a rule could be "all reports with a sensitivity of greater than or equal to 3 and an accuracy of greater than or equal to 75 should be sent directly from the collector to the tasking decision maker."
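
A rule of this kind reduces to a predicate over a phenomenon's attributes; the example rule just quoted can be sketched as:

```python
def direct_to_consumer(report: dict) -> bool:
    """The example rule from the text: reports with sensitivity >= 3 and
    accuracy >= 75 go straight from the collector to the tasking
    decision maker, bypassing the processing phase."""
    return report["sensitivity"] >= 3 and report["accuracy"] >= 75
```

In a simulation, such predicates would be consulted whenever a phenomenon is routed along a tie.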

Finally, there is the environment, which is the 'world' in which the intelligence organization exists and operates. It is the source of collected information, the target of intelligence consumers' decisions, and is affected by actors and random events. Events can take place in the environment that affect decision makers' problems, and collection assets have a chance to notice these events based on how 'busy' they are working on tasks. Similarly, actors can cause events, and collectors may be tasked to find out information about the effect of an action. A proper modeling of collector response to environmental change is essential to the analysis of the damage-assessment and indicators-and-warnings missions of intelligence organizations.

3. Metrics of the effectiveness of intelligence organizations

The task of measuring the effectiveness of intelligence organizations is not easy: there is little consensus within the organization theory literature on how to construct a universal or generalizable metric of organization performance. The easiest relative metric is a metric of goal completion, but even this has problems, since the goals of intelligence organizations are not always clearly stated.

The fundamental mission of an intelligence organization is to provide intelligence to support intelligence consumers' decisions. Unfortunately, this mission, as stated, does not lend much insight into how to measure exactly how well an intelligence organization is doing it. Other tasks identified for intelligence organizations include: providing 'indicators and warnings' of changes in the environment; collecting information on other organizations within the environment; assessing the impact of organization actions upon the environment; and being adaptable and flexible enough to provide intelligence to intelligence consumers with differentiated goals and circumstances, that is, being able to adapt to changes in the environment and in the structure of the intelligence organization.

Other constraints are placed on the intelligence organization by outside sources. Intelligence organizations are expected to be resource efficient, i.e. to make the best use of available resources and technology at all times; to maintain accountability for intelligence failures (misinformation) and for unethical or sloppy conduct (really bad decisions); and to prevent the disclosure of sensitive information. Combining the internal and the environmental approaches to the goals of intelligence organizations can create a meaningful multiple-constituency framework for the analysis of intelligence organizations (Connolly, Conlon, and Deutsch, 1980).

Possible quantitative measurements for intelligence organization structures can be derived from the above goals and constraints. Perhaps the most meaningful metric of intelligence organization success is its ability to support intelligence consumers' problem solving: consider the intelligence organization over a period of time and determine whether the intelligence generated was able to solve the decision makers' problems. If so, the proportion of problems solved would be an appropriate measure of the success of the intelligence organization.
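
Computed over a simulated or observed period, this metric is a one-line ratio (a sketch; the representation of problems as a solved/unsolved mapping is an assumption):

```python
def proportion_solved(problems: dict) -> float:
    """problems: mapping of problem id -> True if it was solved
    during the observation period. Returns the fraction solved."""
    solved = sum(1 for done in problems.values() if done)
    return solved / len(problems)
```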

If meaningful quantification of intelligence requirements for problems proves to be inappropriate or intractable, the speed of the intelligence consumers' decision process can be used instead. This would use one of the cyclical decision-making models, such as Boyd's OODA loop, as a representation of the decision makers' decision process; the amount of time it takes to go through one cycle of the loop would be an appropriate metric of the intelligence organization's effectiveness. Similarly, the speed of the intelligence organization's function could be represented for intelligence consumers; that is, the intelligence cycle could be represented in terms of the number of iterations it takes for each intelligence consumer to return to the 'plan' phase of the intelligence cycle. Both of these loop-based metrics would provide useful details on the comparative effectiveness of the intelligence organization with respect to different clients.
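
A per-consumer version of the loop-speed metric, which supports exactly this kind of client-by-client comparison, can be sketched as follows (a sketch; the input shape is an assumption):

```python
def mean_cycle_time(cycle_times_by_consumer: dict) -> dict:
    """Average decision-loop (e.g. OODA) duration per intelligence
    consumer: maps consumer id -> list of observed cycle durations,
    returning consumer id -> mean duration for comparison across clients."""
    return {consumer: sum(times) / len(times)
            for consumer, times in cycle_times_by_consumer.items()}
```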

Finally, one of the most popular metrics of intelligence organization effectiveness is the absence of intelligence failures; that is, ensuring that the intelligence organization does not report false information or fail to report information of value to intelligence consumers. Though there is no provision in the model as developed for false information, this would not be difficult to include; the failure-to-report constraint can be captured in other metrics of effectiveness, such as the proportion of problems solved.

When assembling these metrics, certain concerns should be kept in mind. Intelligence consumers and processors are boundedly rational: they can only review a certain amount of incoming information, and the intelligence organization should guard against information overload. Metrics such as intelligence cycle or decision cycle speed can model these phenomena. Units are also boundedly capable: they can only perform a certain number of tasks, and if they keep receiving new or distracting orders they will not get anything done ('chasing the tail' behavior, as described by Alberts, Garstka, and Stein). A problem-solving metric can identify these phenomena.

4. Analysis and experimentation methods

The end purpose of this model is not solely to measure the effectiveness of the intelligence organization, but to determine structural effects on the effectiveness of intelligence organizations. For this purpose, social network and computational analysis methods can be applied to models of intelligence organization structure to determine the correlation between structural measurements and effectiveness measurements.

Social network analysis provides many useful measurements of organization structure that can be applied to this model. These measurements include centrality and cognitive load measurements, path length measurements, density and degree measurements, and meta-matrix representations. For the purpose of many of these measurements, it is convenient to separate the tasking and reporting communications structures and to analyze them separately.

Centrality measurements are generally based on the number of paths that

a node lies on. They are usually computed for sociograms with non-directed

ties; however, they can be applied to directed graphs as well, and in this case

yield meaningful results. High centrality in the

tasking communications structure indicates that the unit is responsible for

forwarding tasks. In a large intelligence organization, the units with the highest

task centrality should be referential databases; since the function of these units is

automatic, however, it may be useful to ignore them. In smaller intelligence

organizations and organizations in which referential databases are ignored, task

centrality should be highest among decision makers responsible for planning. If

planners are not highest in task centrality, it means that decision makers without

formally designated planning authority are nonetheless responsible for most

planning decisions (as may be the case with decision makers who are solely in

command of collection assets matching a highly demanded criterion). High

centrality in the reporting structure means that the unit is responsible for

forwarding multiple reports, which should again correspond to either databases

or referential databases. Under these circumstances, databases should not be

ignored – a database with high report centrality is contributed to by many

processors, or at least many centrally placed processors, and therefore contains

a large amount of information; these databases should correspondingly have a

high tasking indegree, to ensure that significant portions of the intelligence

organization have access to this information. A high centrality on both the

tasking and reporting structures is indicative of large cognitive load, and should

correspond to the largest databases and the processors serving the most

important decision makers (since processors have tasking ties with databases

and bidirectional reporting ties with databases).
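
As a sketch of how such centrality scores might be computed on the directed tasking or reporting structure, Brandes’ algorithm for betweenness centrality in an unweighted directed graph can be written in plain Python. The four-unit tasking network at the bottom is a hypothetical illustration, not an example from the model itself.

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted directed graph.

    graph: dict mapping each node to a list of its successor nodes.
    Returns a dict of node -> number of shortest paths passing through it.
    """
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack = []
        pred = {v: [] for v in graph}     # predecessors on shortest paths
        sigma = {v: 0 for v in graph}     # number of shortest s->v paths
        dist = {v: -1 for v in graph}
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                      # accumulate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical tasking structure: a consumer tasks a planner, who tasks
# two collectors. The planner lies on every consumer->collector path.
tasking = {
    "consumer": ["planner"],
    "planner": ["collector1", "collector2"],
    "collector1": [],
    "collector2": [],
}
scores = betweenness(tasking)
```

Run on the tasking and reporting structures separately, as the text suggests, the same function yields the task centrality and report centrality of each unit.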

Average path length is also an important measurement of intelligence

organization structure. Long average tasking path lengths between intelligence

consumers and collectors mean that consumers are largely sheltered from the

intelligence planning and task allocation process; long average reporting path

lengths between intelligence consumers and collectors mean that the intelligence

consumers are receiving highly analyzed and probably somewhat late

information. On the other hand, short average path lengths in both networks can

contribute to high cognitive load measurements for large numbers of units in the

network, contributing to possible widespread cognitive overload (the information

overload and chasing the tail phenomena described above). Other

measurements of the communication network, such as density, are less

meaningful to the study of intelligence organizations, though separate density

measurements for different types of reporting or tasking tie can be compared

to analyze the use of information technology in the organization.
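
Average path length between consumers and collectors can be sketched with a breadth-first search over each directed network in turn. The three-unit tasking network below is a hypothetical example, assumed purely for illustration.

```python
from collections import deque

def bfs_dist(graph, source):
    """Shortest hop counts from source in a directed graph (dict of lists)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        v = q.popleft()
        for w in graph.get(v, []):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def avg_path_length(graph, sources, targets):
    """Mean path length from each source to each reachable target."""
    lengths = []
    for s in sources:
        dist = bfs_dist(graph, s)
        lengths.extend(dist[t] for t in targets if t in dist)
    return sum(lengths) / len(lengths) if lengths else float("inf")

# Hypothetical tasking network: the consumer tasks a planner, who tasks
# two collectors; the consumer-to-collector tasking path length is 2.
tasking = {
    "consumer": ["planner"],
    "planner": ["c1", "c2"],
}
tasking_apl = avg_path_length(tasking, ["consumer"], ["c1", "c2"])
```

Computing the same quantity on the reporting network (with collectors as sources and consumers as targets) gives the reporting path length discussed above; comparing the two values for a given organization indicates how sheltered consumers are from planning versus how processed their incoming information is.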

The meta-matrix – slightly edited for the attributes of units in the

organization – provides an extremely powerful tool for analysis of network

structure in the case of intelligence organizations. The different types of meta-

matrix values can be altered by unit types (decision maker-collector, decision

maker-decision maker, etc.), or certain attributes (decision maker-criterion,

criterion-criterion), to determine various types of relationship in large intelligence

organizations. Unfortunately, the intelligence organization model under

development is still at too early a stage to present mathematical formulae for

deriving these network structure measurements.
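
Although formulae are not yet available, the meta-matrix itself is straightforward to represent: one block per pair of entity types, with each block holding the ties between entities of those types. The unit names and ties below are hypothetical, sketched only to show the data structure.

```python
# One meta-matrix block per (row type, column type); each block maps
# (row entity, column entity) -> tie strength. All names are illustrative.
meta_matrix = {
    ("decision_maker", "collector"): {("dm1", "col1"): 1, ("dm2", "col2"): 1},
    ("decision_maker", "decision_maker"): {("dm1", "dm2"): 1},
    ("decision_maker", "criterion"): {("dm1", "imagery"): 1},
}

def block_density(block, n_rows, n_cols):
    """Density of one meta-matrix block: ties present over ties possible."""
    return len(block) / (n_rows * n_cols)

# Two decision makers tasking two collectors, with two of four possible
# ties present, gives a decision maker-collector block density of 0.5.
dm_col_density = block_density(
    meta_matrix[("decision_maker", "collector")], n_rows=2, n_cols=2)
```

Once the model matures, measurements such as centrality or density can be computed per block, giving the type-by-type relationship analysis the text describes.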

This model of intelligence organization structure and the metrics

previously discussed lend themselves to computer analysis and experimentation.

Computerized methods can be used to compare different intelligence

organization structures, different agency or organization rules, and/or different

communication attributes with ease. In the simplest model, phenomena, units,

ties, and agencies are modeled as discrete variable bundles with sets of

attributes. The movement of phenomena throughout the network can be tracked

and recorded in databases to provide a log of phenomena action and

transmission. This sort of ‘traffic analysis’ can precisely identify snagging points

in the organization – points of information overload or assets forced into chasing

their tail; or points of inefficiency within the organization – needlessly long

reporting or tasking paths, persistently idle units, or decision makers who are not

receiving sufficient reports or tasks. By varying organization structure in

successive iterations of the experiment and observing changes in traffic patterns,

design guidance for intelligence organizations can be derived. Finally, by varying

the attribute parameters of ties, nodes, phenomena, and rules within the

organization singly and jointly, a sensitivity analysis can be done on the findings

of the computerized experiment and correlation between attributes can be

measured. The simple traffic analysis, though powerful, is only one aspect of

virtual experimentation on this model. A codification of decision loops or

intelligence cycles for each decision maker in the organization can allow for

speed-optimizing tests to be run on various organization structures. In doing so,

conclusions about structural effects on command and control speed can be

reached.
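
A minimal sketch of the phenomenon movement log behind this ‘traffic analysis’ follows; the units and reporting ties are hypothetical. Each hop a report makes is recorded as a (step, unit) pair, and aggregating logs over many reports exposes units that appear on a disproportionate number of paths – candidate snagging points.

```python
from collections import Counter

def route_report(reporting, start, sink):
    """Follow the first reporting tie out of each unit, logging every hop.

    reporting: dict mapping each unit to the units it reports to
    (a hypothetical structure, one tie followed per unit for simplicity).
    Returns the movement log as (step, unit) pairs.
    """
    log = [(0, start)]
    unit, step = start, 0
    while unit != sink and reporting.get(unit):
        unit = reporting[unit][0]
        step += 1
        log.append((step, unit))
    return log

# Hypothetical reporting chain: collector -> processor -> database -> consumer.
reporting = {"collector": ["processor"], "processor": ["database"],
             "database": ["consumer"]}
log = route_report(reporting, "collector", "consumer")

# Aggregating traffic over many such logs would expose choke points:
traffic = Counter(unit for _, unit in log)
```

In a fuller simulation the same log, accumulated across iterations with varied organization structures, supplies the before-and-after traffic patterns from which design guidance would be derived.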

All of this presumes, however, that the model works. This paper would be

remiss if it did not discuss methods by which this model of intelligence

organization structure could be validated.

First, and most practical, is the comparison to historical intelligence

organization charts and performance records. Though we are unlikely to gain

access to historical databases of U.S. intelligence organization activities,

historical records of intelligence organization activities in other nations are

available. A model could be developed based on these organizational charts,

and predictions of intelligence organization effectiveness could be compared with

historical judgments. If a computer model is developed, phenomenon movement

logs can be saved for time iterations of the simulation and compared to

corresponding message logs within the organization. Though the two will necessarily

differ somewhat, general trends can be compared and the ability of the model to

make accurate predictions can be derived.

5. Conclusion, Strengths and Weaknesses

Though this model presents a potentially powerful tool for the analysis of

intelligence organizations, it is by no means complete. Its theoretical backing

with respect to many of the trends and developments in organization theory is

incomplete or non-existent, it is as yet not validated, and at present it lacks the

ability to exploit many powerful tools in organization analysis. On the other hand,

this model shows great potential – it is scalable from the largest intelligence

networks to the smallest; it takes into account multiple agencies, assets, and

decision makers; it is simple and easy to understand; and it makes a formal and

quantitative analysis of intelligence organizations possible.

The theoretical backing for this model is as yet incomplete. Though it is

based on a relatively thorough understanding of the command and control,

military and strategic intelligence, and tactical decision making literature, it does

not take into account many elements in the academic literature that could be

relevant. It has not been considered in terms of the rational decision making

literature, which could provide additional relevant insights on how to measure

intelligence accuracy and its effect on decision makers’ choices. Its network

structural measurements are well grounded, but it ignores entirely the literature

on dynamic networks, adaptive networks, and organizational learning. Additional

research in these fields could provide a method by which this model could be

applied to dynamic networks, which would increase its applicability and power

substantially. Finally, this network model ignores the literature on economic

incentives and decision making within organizations (a market decision making

process). Nothing about the model precludes an application of market-based

resource and unit allocation schemes save for the lack of research on how to

apply them. Further study of this literature could provide insights on how to use

this model to consider personal power in intelligence networks and to apply

information economics measurements to decision makers’ decision processes,

which would be quite useful in expanding the applicability of the model to the

civilian or business sectors.

References

[Alberts, Garstka, and Stein, 1999] Alberts, David S.; Garstka, John J.; and Stein, Frederick P. (1999) Network Centric Warfare: Developing And Leveraging Information Superiority. National Defense University Press, Washington, DC.

[Berkowitz and Goodman, 1989] Berkowitz, Bruce D. and Goodman, Allan E. (1989) Strategic Intelligence for American National Security. Princeton University Press, Princeton, New Jersey.

[Berkowitz and Goodman, 2000] Berkowitz, Bruce D. and Goodman, Allan E. (2000) Best Truth: Intelligence In The Information Age. Yale University Press, New Haven, Connecticut.

[Carley, 2002] Kathleen M. Carley, (2002) “Information Technology and Knowledge Distribution in C3I Teams” in Proceedings of the 2002 Command and Control Research and Technology Symposium. Conference held in Naval Postgraduate School, Monterey, CA. Evidence Based Research, Vienna, VA.

[Cohen, March, and Olsen, 1972] Cohen, Michael D.; March, James G.; and Olsen, Johan P. (1972) “A Garbage Can Model Of Organizational Choice” in Administrative Science Quarterly.

[Comfort, 2002] Comfort, Louise K. (2002) “Institutional Re-Orientation and Change: Security As A Learning Strategy.” The Forum, Volume 1, Issue 2. Berkeley Electronic Press http://www.bepress.com/forum.

[Connolly, Conlon, and Deutsch, 1980] Connolly, Terry; Conlon, Edward J.; and Deutsch, Stuart Jay. (1980) “Organizational Effectiveness: A Multiple-Constituency Approach.” Academy of Management Review, Volume 5. Academy of Management.

[Feldman and March, 1981] Feldman, Martha S.; and March, James G. (1981) “Information in Organizations as Signal and Symbol” Administrative Science Quarterly, Volume 26, Issue 2.

[Kent, 1951] Kent, Sherman. (1951) Strategic Intelligence for American World Policy. Princeton University Press, Princeton, New Jersey.

[Lawrence and Lorsch, 1969] Lawrence, Paul R.; and Lorsch, Jay W. (1969) “Organization-Environment Interface” in Classics of Organization Theory, third edition, eds. Shafritz and Ott. Wadsworth Publishing Company, Belmont, California.

[Liang and Xiangsui, 1999] Qiao Liang and Wang Xiangsui (1999) Unrestricted Warfare. PLA Literature and Arts Publishing House, Beijing.

[March and Simon, 1958] March, James G. and Simon, Herbert A. (1958) Organizations. John Wiley & Sons, Inc.

[Mizruchi and Galaskiewicz, 1994] Mizruchi, Mark S.; and Galaskiewicz, Joseph. (1994) “Networks of Interorganizational Relations” in Advances In Social Network Analysis, eds. Wasserman and Galaskiewicz. Sage Publications, London.

[Perrow, 1992] Perrow, Charles. (1992) “Small-Firm Networks” in Networks and Organizations: Structure, Form, and Action, eds. Nohria and Eccles. Harvard Business School Press, Boston, Massachusetts.

[Powell, 1990] Powell, Walter W. (1990) “Neither Market Nor Hierarchy: Network Forms of Organization” Research In Organizational Behavior, Volume 12. JAI Press, Inc.

[Scott, 2000] Scott, John. (2000) Social Network Analysis: A Handbook. Sage Publications, London, England.