Leading attackers through attack graphs with deceptions




Fred Cohen (a) and Deanna Koike (b)

(a) Sandia National Laboratories - College Cyber Defenders Program
(b) Principal Member of Technical Staff - Sandia National Laboratories

Abstract

This paper describes a series of experiments in which specific deceptions were created in order to induce red teams attacking computer networks to attack network elements in sequence. It demonstrates the ability to control the path of an attacker through the use of deceptions and allows us to associate metrics with paths and their traversal.

Background and introduction

A fairly complete review of the history of deception in this context was recently undertaken, and the reader is referred to it [1] for more details on the background of this area. Experimental results were also recently published and the reader is referred to another recent paper for further details of that effort [2].

One of the key elements in associating metrics with experimental outcomes in our previous papers was the use of attack graphs and time to show differences between attackers acting in the presence and absence of deceptions. After running a substantial number of these experiments we were able to show that deception is effective, but little more was explored about the nature of the attack processes and how they are impacted by specific deceptions. One of the things we noticed in these experiments was that patterns seemed to arise in the paths through attacks. While this has long been described in literature that seeks to associate metrics for the design of layered defenses, and in the physical world it has long been used to drive prey into kill zones, to date we have not seen examples of the design of such defenses so as to lead attackers into desired paths in the information arena.

Our ongoing theoretical work led us to the notion that in addition to measuring paths through attack graphs over time, we should also be able to design attack graphs so that they would be explored in a particular sequence. By inducing exploration sequences, we should then be able to drive the attackers into desired systems and content within those systems. Indeed, if we become good enough at this, we might be able to hold attackers off for specified time periods by specific techniques, change tactics automatically as attackers explore the space so as to continue to drive them away from actual targets, and otherwise exploit the knowledge for both deception and counterdeception.

In this paper, we describe a set of experiments in which we used a generic attack graph and specific available techniques to design sets of deceptions and system configurations designed to lead attackers through desired paths in our attack graph.

The attack graph

Based on previous work already cited, we developed the following generic attack graph, which is intended to describe, at a specific level of granularity, the processes an attacker might use in attacking a computer system (see Figure 1).

The process begins at ‘Start’ and is divided into a set of ‘levels’ which we can number as -4 through 4 inclusive. The attacker starts at level 0 and generally moves toward increasingly negative numerical values as they are taken into a deception and increasingly higher numerical values as they succeed at attacking real victims. Lines with arrows represent transitions, and each node in the graph represents a complex process which we have not yet fully come to

402 0167-4048/03 ©2003 Elsevier Ltd. All rights reserved.



understand. There are a lot of transitions that cross multiple levels of the graph. For example, an attacker in a real system can be led into a deception by ‘tripping across’ a deception within that system that deflects the attack into a deception. In addition, there is a general ‘warp’ that extends throughout the graph in the sense that from any given state it is possible to leap directly to another state; however, this appears to be fairly low probability and has not been well characterized yet.
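The level structure just described can be captured in a small model. The sketch below is illustrative only: the node numbers and levels follow Figure 2 later in the paper, but the edge list is our abbreviated reading of the deception side of Figure 1, not a complete transcription, and the low-probability ‘warp’ transitions are omitted.

```python
# Illustrative model of the generic attack graph (deception branch).
# Node numbers and levels follow Figure 2; the edge list is an
# abbreviated, assumed reading of Figure 1, with "warp" edges omitted.

LEVELS = {
    0: 0,    # Start
    1: 0,    # Seek target
    2: -1,   # Fail to find false target
    3: -1,   # Find false target
    4: -1,   # Differentiate (fake)
    5: 0,    # Think fake
    6: -1,   # Think real (fake)
    7: -2,   # Seek vulnerabilities (fake)
    8: -3,   # Try to enter (fake)
    9: -4,   # Exploit access (fake)
    10: -4,  # Expand access (fake)
    11: 1,   # Find real target
    15: 1,   # Think real (real)
    16: 2,   # Seek vulnerability (real)
    17: 3,   # Try to enter (real)
    20: 4,   # Exploit access (real)
}

# A few of the transitions visible in Figure 1 (deception side).
EDGES = {
    0: {1}, 1: {2, 3, 11}, 3: {4}, 4: {5, 6}, 5: {1},
    6: {7, 8}, 7: {8}, 8: {7, 9}, 9: {8, 10}, 10: {8},
}

def levels_of(path):
    """Map a path of node numbers to the levels it traverses."""
    return [LEVELS[n] for n in path]

# An attacker drawn into the deception moves toward level -4:
deception_path = [0, 1, 3, 4, 6, 7, 8, 9]
```

Tracing `levels_of(deception_path)` shows the monotone drift toward the negative levels that the design aims to induce.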

Two processes are defined here, one starting with a systematic exploration of the target space and the other through random guessing. We have sought out other strategies to depict, but have found none. It appears that transitions in this attack graph are associated with cognitive processes in the groups, individuals, and systems used in the attack process as they observe, orient, decide, and act on signals they get from their environment.

Our experimental design

Early in 2002, we created a series of experiments in which we attempted to design sets of interacting deception-based defenses with the objective of inducing attackers to follow specific paths through the generic attack


Figure 1: The generic attack graph. (Node labels in the figure include: Start; Seek target; Select arbitrary target; No target; Find false target; Fail to find false target; Find real target; Fail to find real target; Differentiate; Think fake; Think real; D/K (don't know); Seek vulnerabilities; Try to enter; Try arbitrary exploit; Exploit access; Expand access; Deception success; Attack success. Many transitions carry Assume/believe outcomes: Failure - DK - Success.)


graph. For example, in our first experiment, we decided to try to induce attackers to (1) seek targets, (2) fail to find real targets, (3) find false targets, (4) attempt to differentiate false targets from real ones, (5) seek other targets, (6) find false targets, (7) differentiate them from other false targets, (8) decide to seek vulnerabilities, (9) try to enter, (10) fail to find vulnerabilities, (11) fail to enter, (12) eventually succeed in gaining limited entry, (13) attempt to exploit access, (14) decide to try to expand access, and (15) continue the process over a period of four hours. We will use these numbers in the following paragraphs to associate our mechanisms with the actions we sought to induce.

Our planning process consisted of creating sets of possible targets of attack with characteristics that could be identified and differentiated with different levels of effort using available tools and known techniques. This process was driven by the ‘assignment’ of the team (1), which was to find user systems and try to gain specific information about a criminal conspiracy from those systems. By making the more easily identified targets more obviously false, we were able to induce the behaviors associated with the loop in which attackers (3) find false targets, (4) differentiate them as false, and (5) seek other targets. Similarly, we used (2) concealment techniques to make it difficult to find real targets so that the attackers would be far more likely to miss them and find false targets.

To get attackers to proceed to seek vulnerabilities and try to gain entry, (6) we created real systems that were apparently in use based on normal observations. For example, (7) these systems appeared to generate traffic that would commonly be associated with users performing activity, (8) they apparently had services running on them, they appeared to respond to various probes, and so forth. The goal was for the attackers to become adequately convinced that they were

legitimate targets to (9) try to gain entry. After (11) some number of failed entry attempts, (12) relatively simple entry paths were found that allowed rapid entry through apparent misconfigurations, and (13) select content implying the need for more access to get to more important content was placed in those computers to (14) entice the attackers to try to escalate privileges under the belief that this might gain them the information they sought. Some of the information that could only be obtained under escalated privileges made it very clear that this system was not the real target, thus driving the attacker back to the target acquisition phase. In addition, IP addresses were changed every few minutes and user access was terminated periodically to cause the attacker to return to the target acquisition process and attempted entry process respectively. It was anticipated that over time, these targets would be identified as false and that other targets would be sought. (15) Other less obvious targets were provided in a similar vein for more in-depth examination. Specific methods associated with these processes are described in a companion paper still in draft form [3]. We also note that the deceptions in these experiments are fully automatic and largely static in that the same input sequence from the attacker triggers the same response mechanism in the deception system throughout the experiment.
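Because the deceptions are static in this sense, each one can be thought of as a fixed mapping from attacker input to response. A minimal sketch, with entirely hypothetical probe and response names (the actual mechanisms are described in the companion paper [3]):

```python
# A minimal sketch of a static, fully automatic deception responder:
# the same attacker input always triggers the same response. All probe
# and response names here are hypothetical illustrations.

CANNED = {
    "ping":        "reply",             # false targets answer probes
    "port-scan":   "fake-service-list", # apparently running services
    "login:guest": "limited-shell",     # easy entry via apparent misconfiguration
    "escalate":    "kill-session",      # internal kill response mechanism
}

def respond(probe):
    """Deterministic lookup; unknown input gets no response at all."""
    return CANNED.get(probe, "silence")

# Replaying the same input sequence yields the same responses every time.
run = [respond(p) for p in ["ping", "port-scan", "login:guest", "escalate"]]
assert run == [respond(p) for p in ["ping", "port-scan", "login:guest", "escalate"]]
```

The determinism is what makes the predicted attack-graph sequences repeatable across runs, and is also what a sufficiently patient attacker could exploit to fingerprint the deception.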

In the first experiment, the systems being defended were on the same network as the attackers and were configured to ignore packets emitted from unauthorized IP addresses. Forged responses to ARP requests were used on all IP addresses not otherwise in use (2) to prevent ARP information from revealing real targets, and ICMP responses were suppressed to prevent their use for identification of real targets.

Subsequent experiments were carried out with variations on these design principles.


Fred Cohen

Fred Cohen is helping clients meet their information protection needs at Fred Cohen & Associates and Security Posture and doing research and education as a Research Professor in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to [email protected] or visiting http://all.net/

Deanna Koike

Deanna Koike graduated from UC Berkeley in 2001 with a bachelor's degree in computer science. She is currently a graduate student in computer security at UC Davis. She works at Sandia National Laboratories as a technical student intern in their Center for Cyber Defenders student program.


Specifically, we created situations in which we controlled available information so as to limit the decision processes of attackers. When we wished to hide things, we made them look like the rest of the seemingly all-false environment, and when we wished to reveal things, we made them stand out by making them differentiable in different ways.

Unfortunately, we did not have the resources necessary to carry out a fully fledged study in which we used the presence and absence of deception, or more and less well planned deceptions, in order to differentiate specific effects and associate statistically meaningful metrics with our outcomes. We did not even, strictly speaking, have the resources for creating repeatable experiments. Unlike our earlier experiments [2], in which we ran five rounds of each experiment with deception enabled and disabled, we had only one in-house group of attackers available to us, and of course they are tainted by each experience.

As an alternative, we created a series of experiments in which our in-house attack team was guided, unbeknownst to them, and with increasing accuracy, through a planned attack graph. We then carried out an

experiment at a conference in which attack groups were solicited to win prizes (up to $10,000) for defeating defenses. The specific deception defenses were intended to induce the attackers to take a particular path through the attack graph. All attack groups acted simultaneously and in competition with each other to try to win prizes by breaking into systems and attaining various goals. No repetitions were possible, and a trained observer who knew what was real and what was deception followed the attacker activities and measured their progress.

Experimental methodology

In each case the experiment began with a planning session in which defense team


Figure 2: Attack graph numbering.

Number  Node name                         Level
0       Start                              0
1       Seek Target                        0
2       Fail to find false target         -1
3       Find false target                 -1
4       Differentiate (Fake)              -1
5       Think Fake (from 4)                0
6       Think Real (Fake)                 -1
7       Seek Vulnerabilities (Fake)       -2
8       Try to enter (Fake)               -3
9       Exploit Access (Fake)             -4
10      Expand Access (Fake)              -4
11      Find Real Target                   1
12      Differentiate (Real)               1
13      Fail to find false target (Real)   1
14      Don't Know                         0
15      Think Real (Real)                  1
16      Seek Vulnerability (Real)          2
17      Try to Enter (Real)                3
18      Think Fake (Real)                  0
20      Exploit Access (Real)              4
21      Expand Access (Real)               4
30      Select Arbitrary Target            0
31      No Target                          0
32      False Target                      -1
33      Try Arbitrary Exploit             -2
34      Real Target                        1
35      Try Arbitrary Exploit              2

Figure 3: Example predictions.

Sequence Comment

0 Start the run

1 Seek target per assignment

2 Fail to find target (missed topology due to concealment)

3 Find false target via open ports

4,5,1 Obvious dazzlements

1,3,4,6 Limited dazzlements easily differentiated


members designed a set of specific deceptions and predicted sequences of steps in the attack graph that they believed attackers would take in attempting to attack real targets. The configuration was documented and implemented, and the attack sequences were discussed and put into written form as a series of states and transitions in the attack graph depicted. Numbers were associated with attack graph locations for convenience of abbreviation. These locations in the attack graph can also be roughly associated with the levels used in our previous experiments on deception. The numerical values are shown in Figure 2.

A predicted outcome would be in the form of sequences of node numbers with a note on transitions and loops indicating the anticipated event. For example, the first run is shown in Figure 3.
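Predictions in this form lend themselves to mechanical checking against an observed trace. The sketch below is our illustration, not the authors' tooling: it treats a predicted sequence as matched when it occurs as a contiguous run of node numbers in the trace, and a "loop" as the pair repeating.

```python
# Checking a predicted node sequence against an observed attack trace.
# This is an illustrative reading of the methodology, not the actual
# tooling used in the experiments.

def occurs(pred, trace):
    """True if `pred` appears as a contiguous run inside `trace`."""
    n = len(pred)
    return any(trace[i:i + n] == pred for i in range(len(trace) - n + 1))

# Hypothetical observed trace of attack-graph node numbers.
observed = [0, 1, 2, 1, 3, 4, 6, 8, 7, 8, 7, 8, 9]

assert occurs([1, 3, 4, 6], observed)   # a predicted sequence that occurred
assert occurs([7, 8, 7, 8], observed)   # a "7,8 loop"
assert not occurs([9, 10], observed)    # a predicted sequence never reached
```

A comparison of this kind is what the summary tables of predicted versus observed sequences in the experiments amount to.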

The experiment was run with one of the defense team members taking notes on the sequence of events in terms of the attack graphs, identifying associated times. It was necessary for this team member to know specifically which targets were deceptions and which were real in order to accurately identify the location in the attack graph. With the combination of knowledge of the attack graph, the configuration, and background on deception, it is relatively easy to guess what paths are likely to occur under which deceptions. For this reason it was impossible to have the observer not know what predictions were made. Observers were trained in not revealing information about the situation to the attackers; however, this is a less than ideal situation. This represents an as yet unresolved experimental limitation that can easily produce erroneous results because of the lack of an unbiased observer. Note that the model implicitly assumes that at any time an attacker can revert to a previous state and that there is a low probability that an arbitrary state transition (a warp) can occur at any time from


Figure 4: Experiment 1 – predictions.

Sequence Comment

0 Start the run

1,2,1 Seek target per assignment, fail to find, return to seek

1,3,4,5,1 Find false target via open ports, obvious dazzlements, search on

1,3,4,6 Find false target, limited dazzlements easily differentiated

6,7 or 6,8,7 Obvious things to try don't work

7,8 loop Apparent vulnerability - weak services - some not vulnerable

7,8,9 Locatable vulnerability gives user access with obvious content

9,10 Obvious content not relevant - less obvious apparent but requires privilege

9,8 or 10,8 Internal kill response mechanism kicks user out

6-10,1 or 3-4,1 Rotating IP addresses force search to restart

Figure 5: Experiment 1 - observed behaviors.

Time Sequence Comment

0 Configure network

0:01 1 Passive network sniffing

0:04 1,2,1,13 Designers forgot 13 is present when seeking targets

0:40 1,3,4,6 Probe with broadcast ping > consistent ARPs lead to deception box

0:45 6,8 Believe it is a RedHat Linux box, try simple entry, give up too soon

0:46 8,1 Rotating IP addresses force search to restart

0:46 1,3,4,6 Find apparent real (False) target and differentiate rapidly

0:46 6,8,7 Try obvious remote root password, fail, seek vulnerabilities

0:50 7,5,4 Express that this could be a fake box - continue plan

1:01 4,7 See IP addresses changing, associate ssh service

1:20 7,1,2 After group discussion, decide to try other search methods, fail

1:30 8 Using previous results, try to access false target

1:43 8,7 Try other services

1:44 7,8 loop Try various guesses, seek exploits

1:46 8,9 Guess valid password, gain access, see simple false content, read, believe need an exploit to escalate privileges

1:50 9,8 Internal kill response mechanism kicks user out

1:53 8,9 Regain access, identify system more closely, seek exploit

1:55 9,1 Rotating IP addresses force search to restart

1:55 1,3,4,6 Find apparent real (False) target and differentiate rapidly

2:05 6,7 Convinced they need to find ways to escalate privileges, meet to discuss

2:15 1,3,4,6 loop Identify pattern of IP address changes for prediction of next IP address

2:30 6,7 loop Seeking remote root attacks on fake box

2:40 6,7 loop Notice password file (considered but did not try to crack it)

3:10 7,8 Run known remote attack, failed to work

3:15 END Terminated for end of allotted time


any location to another. Attack sequence prediction implicitly assumes this sort of backtracking is always possible, and it is not noted unless it is specifically driven as part of the experiment.

To help compensate for this, we introduced two additional controls. During experiments, we videotape the process so that it can be independently reviewed. After the sequence of experiments, we review results with those who participated and ask them for their views of whether our depictions were accurate.

Experiment 1

In experiment 1, the following prediction was made and documented prior to the start of the experiment (Figure 4).

Experiment 1 proceeded on 2002-04-12 using a team of seven university students specializing in computer science and part of the Sandia College Cyber Defenders program at Sandia National Laboratories. These students all have high academic credentials, range from sophomore to graduate students, and have limited experience in computer attack, but substantial training and experience in defending computers against attacks. They all had several weeks of previous experience with similar deception defense techniques, practice with available tools, and experience in the experimental environment.

The following measured results were observed (all times relative to experiment start time) (Figure 5).

By comparing sequences we can readily see that the predicted sequences occur frequently and that there are no radical departures from the paths laid out in the prediction. The following summary of predicted and non-predicted sequences clarifies this comparison (Figure 6).

The design seems to have worked as intended, driving attackers through specific sequences of attack methods and patterns of attack. For example, there were no instances of unanticipated motions from deception to real targets, no cases in which the attackers found real targets instead of deceptions, and despite the understanding of the potential for deception by the attackers, there were no strong efforts to seek out new methods to detect other systems as


Figure 6: Experiment 1 – results.

Predicted Observed Comment

0 yes Obvious

1,2,1 1,2,1,13 Designers forgot 13 would be present - otherwise correct

1,3,4,5,1 no Attackers were so caught up in 1,3,4,6 that they never returned

1,3,4,6 1,3,4,6 at 0:46, 1:55, 2:15

6,7 or 6,8,7 6,8 6,8,7, 6,7 at 0:45 0:46, 2:05, 2:30 (loop)

7,8 loop 7,8 7,8 loop at 1:44, 3:10

7,8,9 8,9 after 7,8 loops at 1:46 (missed 8,9 loop implied below)

9,10 no never reached

9,8 or 10,8 9,8 at 1:50

6-10,1 or 3-4,1 9,1 7,1 8,1 at 0:46, 1:20, 1:55, 2:15

------------ ------- -------------------------------------------

1,13 Designers forgot to indicate real targets would be missed

8,1 Implicit in all graphs

7,5,4,7 Never anticipated this path (0:50-1:01)

2,8 Use of previous results for 'direct' jump - part of other sequence

Figure 7: Experiment 2 – ‘Outside’ predictions.

Sequence Comment

0 Start - from the outside only active searches will operate

1,2 loop Searches will fail to find real targets

1,13 loop Searches will often fail to find false targets

1,3,4,5 loop Some targets will be declared deceptions

1,3,4,6 A lot of seemingly different false targets will be found, some explored

6,7 loop Attempted remote exploitation may be tried - unlikely to work

6,8 loop Attempted direct entry may be tried very briefly (guest, guest works on some fakes)

8,9 If they gain entry, they will see obvious content

*,1 Lots of returns to 1 because of IP rotation mechanisms

Likely to move to DMZ or Inside soon


long as the mysteries of the already identified systems were still being unraveled. The paths described by the attack graph were followed as if they were well-worn grooves in the attackers' methods. We also note that the deception was highly effective in that the attackers never moved toward the positive ‘levels’ of the attack graph.
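This observation can be quantified with the Figure 2 numbering: map each visited node to its level and take the maximum over the run. The trace below is a condensed, illustrative rendering of an Experiment 1-style run, not the exact Figure 5 timeline.

```python
# "Never moved toward positive levels": the maximum level reached in a
# run, using the node-to-level mapping of Figure 2 (subset shown).

LEVELS = {0: 0, 1: 0, 2: -1, 3: -1, 4: -1, 5: 0, 6: -1,
          7: -2, 8: -3, 9: -4, 10: -4, 13: 1}

def max_level(trace):
    """Highest attack-graph level touched during a run."""
    return max(LEVELS[n] for n in trace)

# Condensed, illustrative deception run: entirely at level <= 0.
run = [0, 1, 2, 1, 3, 4, 6, 8, 7, 8, 9, 8, 9, 1]
assert max_level(run) == 0
```

A per-run maximum (or a time series of levels) gives a simple scalar metric for comparing how far different attack groups penetrated toward real targets.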

Experiment 2

In experiment 2, a more complex scenario was presented involving three networks. The attackers could move from the more distant network to an apparently more closely located network, and then to an inside network, with the provision that once they had moved inward, they would be considered as having given up at the more distant location. There is not a lot of impetus to remain on the outside in this experiment. This then translates into three somewhat different but interrelated experiments. Each of the three experimental situations was predicted (Figures 7-9).

Experiment 2 used the same attackers as experiment 1, but they were required to split into two teams and work in parallel in the same room. The following measured results were observed (Figure 10).

It appears that time pressure prevented many of the potential predicted paths from being explored in this example. The exercise was just too complex for the time available. While we don't yet have a good model for the time associated with detecting and defeating various deceptions, it seems clear that the time factor played a major role in this exercise (Figure 11).

Experiment 3

In experiment 3, we repeated experiment 2 under somewhat different conditions. In this case, nine hours were provided for the attackers. Attack groups included volunteers at a


Figure 8: Experiment 2 – ‘DMZ’ predictions.

Sequence Comment

0 Start - from DMZ passive observation will show content

1,2 loop Often fail to find real targets

1,13 loop Often fail to find false targets

1,3,4,5 loop Some targets will be declared deceptions, much traffic will be dismissed

1,3,4,6 A lot of seemingly different false targets will be found, some explored

6,7 loop Attempted remote exploitation may be tried - unlikely to work

6,8 loop Attempted direct entry will be tried (guest, guest works on some fakes)

7,8 loop Try to find other vulnerabilities and exploit them

8,9 If they gain entry, they will see obvious content

9,10 If they gain entry, they may try to autoguess the root password

8,10 Slim chance they will exploit access to stop IP rotations, defeat deception

*,1 Lots of returns to 1 because of IP rotation mechanisms

Likely to move to Inside soon, perhaps correlate results

Figure 9: Experiment 2 – ‘Inside’ predictions.

Sequence Comment

0 Start

1,2 loop Often fail to find real targets

1,13 loop Often fail to find false targets

1,3,4,5 loop Some targets will be declared deceptions, much traffic will be dismissed

1,3,4,6 A lot of seemingly different false targets will be found, some explored

6,7 loop Attempted remote exploitation may be tried - unlikely to work

6,8 loop Attempted direct entry will be tried (guest, guest works on some fakes)

7,8 loop Try to find other vulnerabilities and exploit them

8,9 If they gain entry, they will see obvious content

9,10 If they gain entry, they may try to autoguess the root password

8,10 Slim chance they will exploit access to stop IP rotations, defeat deception

1,11,12,18 Some real target information may be found and dismissed

1,11,12,15 Some real target information may be found and thought real

15,16 loop Real targets may be scanned to find possible entry points

15,17 Simple direct entry attempts may be made, denial of service attempts may be made

16,17 Scanned services will yield complex bypass mechanisms, may be bypassed

15,20 Sniffed content may be accumulated to achieve a goal

30,34,35 loop Denial of service attempts against the network in general may be tried

30,31,32,33 loop Denial of service attempts against the network in general may be tried

*,1 Lots of returns to 1 because of IP rotation mechanisms and real target concealment


conference who were attending classes in attacking computer systems and participants in a contest wherein they could win thousands of dollars in prizes for defeating the defenses. We used the same predictions for this experiment as for experiment 2.

The following behaviors were observed over a 9-hour period of attempted entry. Times were not accurately kept because the situation was less amenable to close observation (Figure 12).

In this experiment, it seems clear that the attackers were less able to make progress. This appears to have a great deal to do with the level of experience of the attackers against the defenses in place. Despite having more than twice the available time, the attackers were unable to penetrate many of the deceptions at all, and were unable to succeed even against simple targets. It took nearly 8.5 hours before attackers started taking detailed notes of all the things they saw in order to try to correlate their observations. By comparison, students


Figure 10: Experiment 2 - observed behaviors.

Time | Team 1 Seq | Team 2 Seq | Comment
0:00 | 0 | 0 | Start - configure networks
0:24 | 1,2 | 1,13 loop | Ping sweep of network, scripts
     | 1,2,4 | | Scripts seem to fail - verify tools by testing
0:30 | 1,2 | 1,13 loop | Arpwatch and ethereal
0:36 | 1,2 loop | 1,2 loop | arping, nmap - yield no data
0:49 | 1,3,4,5 | | traffic from other group seen and reconciled
1:03 | 1,3,4,6 | 1,3,4,6 | observe deception traffic and examine
1:15 | | ==> DMZ | team 2 decides to move to DMZ network
1:20 | 1,3,4,5 | | dazzled responses to ethereal, arping, ping, tcpdump
     | | 1,3,4,6 | testing det select returns (traffic actually from team 1)
1:27 | ==> DMZ | | team 1 decides to move to DMZ network
1:27 | | confusion | traffic ceases (other team moved) and confusion occurs
1:30 | 1,3,4,5 | | active probing sees fake traffic - actually themselves
1:34 | 1,3,4,5 | | seeing lots of content in ethereal - result of team 2's scans
1:39 | 1,3,4,5 loop | 1,3,4,5 loop | dazzlement of each by themselves and others
1:42 | 30,31,33 | | try arping flood - no reason - no result
1:49 | 1,3,4,5 loop | 1,3,4,5 loop | "nmap useless"
2:00 | ==> Inside | | team 1 decides to move to inside network
2:14 | | 1,2 loop | confusion by not getting 'fakes' anymore (team 1 gone)
2:29 | 1,11,12,15 | | observe real traffic - unsure of what it is
2:47 | | 1,3,4,5 loop | found self as only system in network
2:47 | | ==> Inside | team 2 decides to move to Inside network
2:51 | 1,3,4,6,8,5,1 | | try to ssh to every IP, mirror ssh to self, return to start
2:56 | 1,3,4,5 | | confirmed mirroring behavior, noticed strange packet type
3:05 | 30,31,33 | 30,31,33 | both teams created 'mirrors' and are mirroring each other into oblivion
3:22 | 1,3,4,5 loop | 1,3,4,5 loop | groups seeing different things - starting to talk more, confused
3:45 | 1,3,4,5 loop | | hint provides a step forward - into other deceptions
3:50 | 1,11,12,15 | | trying DoS against correct network
3:57 | 1 | | DoS only partly worked, return to retry
4:00 | | | END



in previous experiments who had been trained in red teaming against deceptions in earlier efforts started taking notes immediately (Figure 13).

The only unpredicted behavior was the movement toward attempts at random exploits (i.e., the 30,31,33 sequence). It appears that this results from frustration in other areas. This is particularly important because we had anticipated that such things could happen, but did not understand the circumstances under which they might happen. We now believe that we have a better basis for understanding this behavior and that we will be able to specifically generate conditions that induce or prevent it.
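The bookkeeping behind Figures 11 and 13 can be sketched in code. The following is an illustrative reconstruction of our own (the edge set and predicted list below are small invented samples, not the paper's full graph): each observed node sequence is classified as predicted, unanticipated but still within the attack graph, or off the graph entirely, as with the frustration-driven 30,31,33 exploit attempts.

```python
# Illustrative sketch: classify observed attack sequences against a graph.
# EDGES and PREDICTED are invented samples, not the experiment's full graph.

EDGES = {(1, 2), (2, 1), (1, 3), (3, 4), (4, 5), (4, 6), (5, 1), (6, 1),
         (1, 13), (13, 1), (1, 11), (11, 12), (12, 15), (12, 18)}

PREDICTED = {(1, 2), (1, 13), (1, 3, 4, 5), (1, 3, 4, 6)}

def classify(seq):
    """Label an observed node sequence relative to the attack graph."""
    if tuple(seq) in PREDICTED:
        return "predicted"
    # Every consecutive pair must be a legal edge to stay within the graph.
    if all(edge in EDGES for edge in zip(seq, seq[1:])):
        return "unanticipated, but within the attack graph"
    return "off the attack graph"

print(classify([1, 3, 4, 5]))     # predicted
print(classify([1, 11, 12, 18]))  # unanticipated, but within the attack graph
print(classify([30, 31, 33]))     # off the attack graph
```

This three-way split mirrors the "Observed/Comment" columns of the results figures: the middle category covers walks like 1,11,12,18 that were legal but not forecast.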

Summary, conclusions, and further work

It appears, based on this limited set of experiments, that in cases wherein attackers are guided by specific goals, the methods identified in this and previous papers can be used to intentionally guide those attackers through desired paths in an attack graph. Specifically, the combination of directed objectives with the induction and suppression of signals that are interpreted by computer and human group cognitive systems makes it possible to induce specific errors in the group cognitive system and thereby guide movement through the attack graph.
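The guiding mechanism can be modeled abstractly. The toy model below is our construction, not the authors' tooling: node numbers loosely echo the paper's labeling, but the edge weights and deception multipliers are invented for illustration. The attack graph is a weighted digraph in which a deception amplifies the apparent signal of a chosen edge, so a signal-following attacker is drawn into the 1,3,4,5 dazzlement loop rather than the real targets.

```python
# Illustrative sketch: deceptions amplify edge "signals" in an attack graph,
# steering a greedy signal-following attacker into a designed loop.
# Node labels loosely echo the paper; all weights are invented.

ATTACK_GRAPH = {
    1:  {2: 1.0, 3: 1.0, 13: 1.0, 11: 1.0},  # initial recon choices
    2:  {1: 1.0},                             # real-target scan, retry
    3:  {4: 1.0},
    4:  {5: 1.0, 6: 1.0},
    5:  {1: 1.0},                             # dazzlement, back to start
    6:  {1: 1.0},
    13: {1: 1.0},                             # false-target loop
    11: {12: 1.0},
    12: {15: 1.0},
    15: {1: 1.0},
}

# Deceptions the defender turns on: each multiplies the apparent signal
# strength of one edge, pulling the attacker toward the deceptive states.
DECEPTIONS = {(1, 3): 5.0, (4, 5): 3.0}

def next_step(node):
    """Greedy attacker model: follow the strongest apparent signal."""
    weights = {succ: w * DECEPTIONS.get((node, succ), 1.0)
               for succ, w in ATTACK_GRAPH[node].items()}
    return max(weights, key=weights.get)

def walk(start, steps):
    path = [start]
    for _ in range(steps):
        path.append(next_step(path[-1]))
    return path

print(walk(1, 8))  # [1, 3, 4, 5, 1, 3, 4, 5, 1] - the induced 1,3,4,5 loop
```

Under these assumed weights the attacker never reaches the unamplified real-target edges, which is the qualitative effect the experiments observed.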

The ability to guide groups of human attackers and their tools through deception portions of attack graphs and keep them away from their intended targets appears to provide a new capability for the defense of computer systems and networks. This method appears to operate successfully for periods of 4-9 hours against skilled human attack groups with experience in attack and defense and access to high-quality tools, and may operate for far longer periods. The number


Figure 11: Experiment 2 – results.

Predicted | Observed | Comment
0 OUTSIDE | yes | Obvious
1,2 loop | 1,2 loop | 0:24, 0:30, 0:36
1,13 loop | 1,13 loop | 0:24
 | 1,2,4 | 0:24 - no data led to doubt results
1,3,4,5 loop | 1,3,4,5 | 0:49
1,3,4,6 | 1,3,4,6 | 1:03
6,7 loop | no | never got to it
6,8 loop | no | never got to it
8,9 | no | never got to it
*,1 | yes | all the time
0 DMZ | yes | Obvious; 1:15, 1:27
1,2 loop | 1,2 loop | 2:14
1,13 loop | no | never got anywhere near anything real
1,3,4,5 loop | 1,3,4,5 | 1:20, 1:30, 1:34, 1:39, 1:49, 2:47
1,3,4,6 | 1,3,4,6 | 1:20
 | 30,31,33 | 1:42 - arbitrary action with no direction or effect - blowing off steam
6,7 loop | no | never got to it
6,8 loop | no | never got to it
7,8 loop | no | never got to it
8,9 | no | never got to it
9,10 | no | never got to it
8,10 | no | never got to it
*,1 | yes | all the time
0 INSIDE | yes | Obvious; 2:00, 2:47
1,2 loop | no | never got to it
1,13 loop | no | never got to it
1,3,4,5 loop | 1,3,4,5 loop | 2:56, 3:22, 3:45
1,3,4,6 | 1,3,4,6 | 2:51
6,7 loop | no | never got to it
6,8 loop | 6,8,5,1 | 2:51
7,8 loop | no | never got to it
8,9 | no | never got to it
9,10 | no | never got to it
8,10 | no | never got to it
1,11,12,18 | no | never got to it
1,11,12,15 | 1,11,12,15 | 2:29
15,16 loop | no | never got to it
15,17 | no | never got to it
16,17 | no | never got to it
15,20 | no | never got to it
30,34,35 loop | no | never got to it
30,31,33 loop | 30,31,33 | 3:05
*,1 | yes | all the time


of experiments of this sort is clearly too limited for meaningful statistical data to be gleaned, and additional experimental studies are called for to refine these results.

One area of particular interest is the ability of deceptions of this sort to operate successfully over extended periods of time. It appears that these defenses can operate successfully over time, but it also seems clear that with ongoing effort, an attacker will eventually come across a real system and penetrate it unless these defenses lead to adaptation of the defensive scheme. We foresee a need to generate additional metrics of time and information-theoretic results to understand how long such deceptions can realistically be depended upon and to what extent they will remain effective over time in both static and adaptive situations.
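As a starting point for the time metrics called for here, one could measure the fraction of an engagement spent inside deception states. The sketch below is an assumption on our part: the deception-node set and timeline values are illustrative, loosely echoing a few Team 1 rows of Figure 10, and each interval is attributed to the observation recorded at its end.

```python
# Illustrative sketch: fraction of engagement time spent in deception states.
# DECEPTION_NODES and TIMELINE are invented/approximate, not the paper's data.

DECEPTION_NODES = {5, 6, 13}  # dazzlement / false-target states (assumed)

# (elapsed minutes, node sequence observed at that time)
TIMELINE = [(24, (1, 2)), (49, (1, 3, 4, 5)), (80, (1, 3, 4, 5)),
            (171, (1, 3, 4, 6, 8, 5, 1)), (240, (1,))]

def deception_fraction(timeline, deception_nodes):
    """Attribute each interval to the observation at its end; return the
    fraction of total time whose sequence touched a deception node."""
    deceived = total = 0
    prev_t = 0
    for t, seq in timeline:
        span = t - prev_t
        total += span
        if any(node in deception_nodes for node in seq):
            deceived += span
        prev_t = t
    return deceived / total

print(deception_fraction(TIMELINE, DECEPTION_NODES))  # 0.6125
```

Tracking how this fraction decays over repeated engagements would give one concrete handle on how long a static deception remains dependable.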

References

[1] Cohen, F., Lambert, D., Preston, C., Berry, N., Stewart, C. and Thomas, E., 2001. A Framework for Deception.

[2] Cohen, F., Marin, I., Sappington, J., Stewart, C. and Thomas, E., 2001. Red Teaming Experiments with Deception Technologies.

[3] Cohen, F. and Koike, D., 2002. Errors in the Perceptions of Computer-Related Information.


Figure 12: Experiment 3 - observed behaviors.

Time | Sequence | Comment
0:00 | | Configure network - start in Outside network
0:10 | 1,3,4,5 loop | Thought they found computers but were confused
 | 1,3,4,6,7 loop | Occasionally thought something was real but then not
 | 1,3,4 loop | Thought equipment might be bad
 | 1,2 loop | Never found many real targets
 | 1,13 loop | Never found several false targets
5:30 | ==> DMZ | All decide to move to DMZ network
 | 1,2 loop | Never found real targets
 | 1,13 loop | Never found several false targets
 | 1,3,4,5 loop |
 | 1,13,1 loop |
 | 1,11,12,18 loop |
 | 1,11,12,15,16,18 loop |
 | 1,11,12,15,16,8 loop |
 | 30,31,33 | Frustration led to random attempts at exploits
9:00 | | END OF TIME

Figure 13: Experiment 3 – results.

Predicted | Observed | Comment
0 OUTSIDE | yes | Obvious
1,2 loop | 1,2 loop |
1,13 loop | 1,13 loop |
1,3,4,5 loop | 1,3,4,5 loop |
1,3,4,6 | 1,3,4,6 |
6,7 loop | 6,7 loop |
6,8 loop | no | never got to it
8,9 | no | never got to it
*,1 | yes | all the time
0 DMZ | yes | Obvious
1,2 loop | 1,2 loop | most of the time
1,13 loop | 1,13 loop | several of them
1,3,4,5 loop | 1,3,4,5 loop | much of the time
1,3,4,6 | no | never got to it
 | 30,31,33 | Frustration led to random attempts at exploits
6,7 loop | no | never got to it
6,8 loop | no | never got to it
7,8 loop | no | never got to it
8,9 | no | never got to it
9,10 | no | never got to it
8,10 | no | never got to it
*,1 | yes | all the time
 | 1,11,12,18 loop | Unanticipated, but within the attack graph
 | 1,11,12,15,16,18 loop | Unanticipated, but within the attack graph
 | 1,11,12,15,16,8 loop | Unanticipated, but within the attack graph
