

RINSE: the Real-time Immersive Network Simulation Environment for Network Security Exercises

Michael Liljenstam1, Jason Liu2, David Nicol1, Yougu Yuan1, Guanhua Yan1, Chris Grier1

1 University of Illinois at Urbana-Champaign, Coordinated Science Laboratory

1308 W. Main St., Urbana, IL 61801
{mili,nicol,yuanyg,ghyan,grier}@crhc.uiuc.edu

2 Colorado School of Mines, Mathematical and Computer Sciences

Golden, CO 80401-1887
[email protected]

Abstract

The RINSE simulator is being developed to support large-scale network security preparedness and training exercises, involving hundreds of players and a modeled network composed of hundreds of LANs. The simulator must be able to present a realistic rendering of network behavior as attacks are launched and players diagnose events and try countermeasures to keep network services operating. We describe the architecture and function of RINSE and outline how techniques like multiresolution traffic modeling and new routing simulation methods are used to address the scalability challenges of this application. We also describe in more detail new work on CPU/memory models necessary for the exercise scenarios and a latency absorption technique that will help when extending the range of client tools usable by the players.

1. Introduction

The climate on the Internet is growing increasingly hostile, while organizations increasingly rely on the Internet for at least some aspects of day-to-day operations. They are thus being forced to plan and prepare for network failures or outright attacks: how these might affect them and what actions to take. Given current system complexity, tools that assist in preparedness evaluation and training are likely to become more and more important.

The October 2003 Livewire cyber war exercise [1], conducted by the Department of Homeland Security, is one particular instance of preparedness evaluation and training; it involved companies across industrial sectors as well as government agencies. More exercises of this type are currently being planned, and based on experiences from the first event, there was a desire for improved tools to automatically determine the impact of attacks and defensive actions on the network, and the extent to which the network is capable of delivering the services needed. Providing network simulation tool support for exercises such as Livewire is particularly challenging because of their scale. Future exercises are expected to involve as many as a couple of hundred participating organizations, and will thus involve many “players” and a network of significant size.

We are currently developing the Real-time Immersive Network Simulation Environment for network security exercises (RINSE) to meet this need and address the challenges inherent in this type of application. The goal for RINSE is thus to manage large-scale, real-time, human/machine-in-the-loop network simulation with a focus on security exercises and training. It needs to be extensible so that it can evolve over time, and it needs to be designed with an eye towards security and resilience to hardware faults, since these exercises involve many people and last for several days.

The spectrum of approaches to general large-scale network modeling being explored in the literature ranges from hardware emulation testbeds like Emulab [30] and network emulators like ModelNet [29] to network simulators like IP-TNE [27], GTNetS [6], PDNS [6], and MAYA [31]. Hardware emulation excels in application code realism (running the real thing), while simulations tend to be more flexible and have an advantage in terms of scalability. However, the middle ground is increasingly being explored, for instance through growing emulation support in simulators. For security exercises we like the flexibility and scalability of simulation, and the safety of unleashing attacks in a simulated environment rather than on a real network.

Several simulators offer similar capabilities, including parallel execution, real-time/emulation support, and discrete-event/analytic models. However, we believe RINSE is unique in the way it brings together human/machine-in-the-loop real-time simulation support with multiresolution network traffic models, attack models that leverage the efficiency of the multiresolution traffic representations, novel


Figure 1: RINSE architecture. (The figure shows Network Viewer clients connecting over the Internet to the Data Server; the Data Server accesses the database; the Simulator Database Manager links the database to the iSSFNet network simulator; a RINSE backup instance is also shown.)

models of host/router resources such as CPU and memory, and novel routing simulation techniques. In this position paper we provide an overview of RINSE to show how these techniques are being brought to bear on the problem at hand. We also detail some specific new contributions: i) a technique for absorbing outside network latency into the simulation model, and ii) models for including CPU and memory effects in network simulations.

The remainder of this paper is organized as follows. Section 2 describes the architecture of RINSE and outlines a simple example scenario that introduces the salient features of RINSE, described further in Sections 3 to 5. Finally, Section 6 summarizes and outlines future work.

2. RINSE Architecture

RINSE consists of five components: the iSSFNet network simulator, the Simulator Database Manager, a database, the Data Server, and client-side Network Viewers, as shown in Figure 1. The iSSFNet network simulator, formerly called DaSSFNet, is the latest incarnation of the C++ network simulator based on the Scalable Simulation Framework (SSF), an Application Programming Interface (API) for parallel simulations of large-scale networks [3]. iSSFNet runs on top of the iSSF simulation kernel, which handles synchronization and support functions. iSSF uses a composite synchronous/asynchronous conservative synchronization mechanism for parallel and distributed execution support [18], and has recently been augmented to include real-time interaction and network emulation support. iSSFNet runs on parallel machines to support real-time simulation of large-scale networks.

Each simulation node connects independently to the Simulator Database Manager, which delivers data from the simulator to the database and delivers control input from the database to the simulator. On the user/player side, the Data Server interfaces with client applications, such as the Java-based application “Network Viewer”, which allows the user to monitor and control the simulated network. In the future, we plan to evolve the architecture towards using the emulation capabilities to support direct SNMP interaction with the simulated network devices, thus having regular networking utilities and network management tools as clients. In the current design, the Data Server performs authentication for each user, distributes definitions of the client’s view of the network (using the Domain Modeling Language), and provides a simple way for the client applications to access new data in the database through XML-based remote procedure calls. The Network Viewer clients, a screen shot of which is shown in Figure 2, provide the users with their local view of the network (usually only their organization’s network) and periodically poll the Data Server for data. The Data Server responds with new data for each client, extracted from the database. The game managers, functioning as superusers of an entire exercise, also use Network Viewer clients, but can have a more global view of the network.

The Network Viewer clients have a simple command prompt where the user can issue commands to influence the model. User commands are sent in the opposite direction of the output data path and injected into the simulator. We currently divide the commands into five categories:

Attack – the game managers can launch attacks against networks or specific servers. RINSE focuses on Denial-of-Service effects on networks and devices, so attacks include DDoS and worms.

Defense – attacks can be blocked or mitigated, for instance by installing packet filters.

Diagnostic Networking Tools – functionality similar to some commonly used networking utilities, such as ping, is supported for the player to diagnose the network.

Device Control – individual devices, such as hosts and routers, can be shut down or rebooted.

Simulator Data – commands can be issued to the simulator to control the output, turn trace flow from a particular host on or off, etc.

Depending on its type, a command may address the whole simulator, a particular host or router, a particular interface, or a particular protocol or application on a host or router. A command-handling infrastructure in the simulator passes the commands from the clients to the appropriate components of the simulation model.
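To make the dispatch concrete, the following is a minimal sketch of such a command-handling layer. The class names, the "category target args" parse format, and the handler-registration scheme are our own illustration, not RINSE's actual implementation:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <utility>

// Hypothetical sketch of a command-handling layer: each command names a
// category keyword and a target scope (a symbolic host/router name, or the
// simulator as a whole), and is routed to a handler registered for its
// category.
struct Command {
    std::string category;  // e.g. "ddos", "filter", "ping", "shutdown"
    std::string target;    // symbolic host/router name, or e.g. "sim"
    std::string args;      // remaining arguments, handler-specific
};

class CommandRouter {
public:
    using Handler = std::function<std::string(const Command&)>;

    void registerHandler(const std::string& category, Handler h) {
        handlers_[category] = std::move(h);
    }

    // Parse "category target arg arg ..." and dispatch to the handler.
    std::string dispatch(const std::string& line) {
        std::istringstream in(line);
        Command c;
        in >> c.category >> c.target;
        std::getline(in, c.args);  // whatever remains, verbatim
        auto it = handlers_.find(c.category);
        if (it == handlers_.end()) return "error: unknown command";
        return it->second(c);
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

In a full system each simulated host, interface, and protocol would register its own handlers, so a command descends from the simulator level to the addressed model component.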

2.1. Example Scenario

Consider a simple scenario where a player is responsible for a subnetwork, partially shown in Figure 2, containing among other things a server. Multiple clients are requesting information from the server through some form of transactions. By transaction we simply mean a request-response


Figure 2: Network Viewer client screen shot

exchange between the client and the server. Section 4 outlines RINSE’s models for efficient representation of traffic flows and route computation.

A game manager attempts to disrupt the operations of the server by launching a DDoS attack against open services on the server, and the player responsible for the network will need to diagnose what is going on and try to take remedial actions. The game manager launches the attack by issuing an attack command at the Network Viewer client:

ddos attack attacker server 100 2000

Both attacker and server are symbolic names, for the attacker’s host and the targeted host respectively. Upon receiving the command (via the command-handling infrastructure), the attacker’s host uses a simulated intermediate communication channel (e.g., Internet Relay Chat) to send attack signals to zombie hosts, i.e., hosts under the attacker’s control. These zombie hosts then initiate the denial-of-service attack against the targeted victim. The attack is to last for 100 seconds and each zombie emits traffic at a rate of 2000 kbit/s. RINSE attack models leverage efficiencies in its high-volume traffic representations (Section 4).

We will assume here that the DDoS traffic simply loads the open service daemons and thus induces a large CPU load on the server. This load disrupts the processing of legitimate transactions. As shown in the screen shot in Figure 2, the player managing the server can monitor the CPU utilization on the server and observe an abnormally high load. Models of host and router resources like CPU and memory are described in more detail in Section 5. After determining that the load likely stems from abnormal traffic, the player

attempts to block traffic on a certain port that has been inadvertently left open by issuing the command:

filter server add 0 deny all all * all * 23

to install a filter on the server to deny packets coming in on all interfaces, using all protocols, from all source IP addresses (“*”) and all source ports, to all destination IP addresses (“*”) and destination port 23. Successful filtering blocks packets from reaching the open service daemons and thus alleviates the load on the server, at the expense of some processing cost for filtering.
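The matching semantics implied by this filter command can be sketched as a first-match rule list over a five-tuple plus interface. The struct layout, wildcard encoding, and default-accept policy below are our reading of the command shown above, not RINSE's actual filter code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the packet-filter semantics implied by the command
//   filter server add 0 deny all all * all * 23
// Each rule matches on interface, protocol, source/destination address and
// port; "all"/"*" act as wildcards and port 0 stands for any port.
// Field names and the default-accept policy are our assumptions.
struct FilterRule {
    bool deny;
    std::string iface, proto, srcAddr, dstAddr;  // "all"/"*" = wildcard
    int srcPort, dstPort;                        // 0 = wildcard
};

struct PacketHeader {
    std::string iface, proto, srcAddr, dstAddr;
    int srcPort, dstPort;
};

static bool fieldMatch(const std::string& pat, const std::string& val) {
    return pat == "all" || pat == "*" || pat == val;
}

// Returns true if the packet should be dropped by the rule list
// (first matching rule wins; no match means accept).
bool dropped(const std::vector<FilterRule>& rules, const PacketHeader& p) {
    for (const FilterRule& r : rules) {
        bool match = fieldMatch(r.iface, p.iface) &&
                     fieldMatch(r.proto, p.proto) &&
                     fieldMatch(r.srcAddr, p.srcAddr) &&
                     fieldMatch(r.dstAddr, p.dstAddr) &&
                     (r.srcPort == 0 || r.srcPort == p.srcPort) &&
                     (r.dstPort == 0 || r.dstPort == p.dstPort);
        if (match) return r.deny;
    }
    return false;  // no rule matched: accept
}
```

Under this sketch, the player's rule drops any packet destined for port 23 on any interface while leaving other traffic untouched, which is exactly the effect described in the scenario.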

We now proceed to describe aspects of the system mentioned here in more detail, starting with the real-time simulation support.

3. Real-time Simulation Support

In addition to supporting the RINSE Network Viewer client, we are currently developing support for the Simple Network Management Protocol (SNMP) to allow us to monitor and control the simulated network devices through industry-standard network management tools. For that reason, our real-time simulation support must be simple, flexible, and able to accommodate real-time interactions at varying degrees of intensity, including both human-in-the-loop and machine-in-the-loop simulations. In this section, we describe the real-time support both in the iSSF parallel simulation kernel and in the network simulator supported by iSSF.

3.1. Kernel Support

Over the years, we have seen many network emulators, ranging from single-link traffic modulators to full-scale network emulation tools, e.g. [25, 29]. Most network emulators are time-driven. For example, ModelNet [29] stores packets in “pipes” sorted by the earliest deadline. A scheduler executes periodically (once every 100 µs) to emulate packets moving through the pipes. There are two main drawbacks associated with the time-driven approach: i) the accuracy of the emulation depends on the time granularity of the scheduler, which largely depends on the target machine or the target operating system, and ii) network emulators have lacked a good model to accurately characterize the background traffic and its impact on the foreground transactions (i.e., traffic connecting real-time applications). Simulation-based emulation (also referred to as real-time network simulation), on the other hand, provides a common framework for real application traffic to interact with simulated traffic, and therefore allows us to study both network and application behaviors with more realism. Examples of existing real-time network simulators include NSE [5], IP-TNE [27], MaSSF [16], and Maya [31]. IP-TNE is the first simulator we know of that adopts parallel simulation techniques for emulating large-scale networks. The real-time support in iSSF inherits many features of these previous simulation-based emulators. Our approach, however, is unique in several ways, which we elaborate next.

Extending the SSF API. The real-time support is designed as an extension to the SSF API, making for an easy transition for other SSF models that require real-time support. In SSF, an inChannel (or outChannel) object is defined as a communication port in an entity to receive (send) messages from (to) other entities. We extended the concept of the in-channel, using it as the conduit for the simulator to receive events from outside the simulator (e.g., accepting user commands arriving at a TCP socket). We extended the API so that a newly created in-channel object can be associated with a reader thread. The reader thread converts (external) physical events into (internal) virtual events and injects them into the simulator using the putVirtualEvent method. A virtual event is created to represent the corresponding physical event and is assigned a simulation timestamp calculated as a function of i) the wall-clock time at which the event is inserted into the simulator’s event-list, and ii) the current emulation throttling speed (which we will elaborate on momentarily). The SSF entities receive events from the in-channel objects as before, regardless of whether they represent special devices that accept external events. From a modeling perspective, there is no distinction between processing a simulation event and a real-time event. Similarly, we also extended the concept of the out-channel, using it as a device to export events (for example, reporting the network state to a client application over a TCP connection). In this case, a writer thread can be associated with the special outChannel object. The writer thread invokes the getRealEvent method to retrieve events designated for the external device and converts the virtual events into physical events. Each of these events is assigned a real-time deadline indicating the wall-clock time at which the event is supposed to happen. The real-time deadline is calculated from the virtual time and, again, the emulation throttling speed. The writer thread is responsible for delivering the event upon the deadline.
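The two time conversions described above (reader thread: wall-clock time to virtual timestamp; writer thread: virtual timestamp to wall-clock deadline) can be sketched as follows. The helper names and the (t0, v0, R) parameterization are our own illustration of the throttling-ratio idea, not the iSSF API:

```cpp
#include <cassert>
#include <cmath>

// Let t0 be the wall-clock time at which emulation started, v0 the virtual
// time at that instant, and R the throttling ratio (virtual time advanced
// per unit of real time; R = 1 means real time). These names are
// illustrative, not part of the iSSF API.

// Reader thread: a physical event arriving at wall-clock time t is stamped
// with the virtual time corresponding to that instant.
double virtualTimestamp(double t, double t0, double v0, double R) {
    return v0 + R * (t - t0);
}

// Writer thread: a virtual event with timestamp v must be delivered at this
// wall-clock deadline, inverting the same mapping.
double realDeadline(double v, double t0, double v0, double R) {
    return t0 + (v - v0) / R;
}
```

The two functions are inverses of each other for any fixed positive R, which is what lets the same throttling ratio drive both the reader and writer threads consistently.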

Throttling Emulation Speed. The system can dynamically throttle the emulation speed (either by accelerating or decelerating the simulation execution with respect to real time). This feature is important for supporting fault tolerance. For example, if a simulator fails due to a hardware problem, then after fixing the problem the simulator should be able to restart from a previously checkpointed state and quickly catch up with the rest of the system. We can accelerate the emulation speed and use the same user input logged at the database server to restore the state. In order to regulate the time advancement, we modified the startAll method in the Entity class (which is used to start the simulation in SSF), adding an optional argument that allows the user to specify the emulation speed as the ratio between virtual time and wall-clock time. For example, a ratio of one means simulation in real time, “infinity” means simulation as fast as possible, and zero means halting the simulation. An Entity class method throttle is also added to make it possible to dynamically change the ratio during the simulation.

Prioritizing Emulation Events. We use a priority-based scheduling algorithm in the parallel simulation kernel to better service events with real-time constraints. In SSF, the user can cluster entities together as timelines, i.e., logical processes that maintain their own event-lists. Events on the timelines are scheduled according to a conservative synchronization protocol [18]. In a “pure” simulation scenario, where the simulation is set to run as fast as possible, a timeline can be scheduled to run as long as it has events ready to be processed safely without causing causality problems. For that reason, during the event processing session, the kernel executes all safe events of a timeline uninterrupted to reduce the context-switching cost. When we enable emulation, however, the timelines that contain real-time events must be scheduled promptly.

To promptly process the events with real-time deadlines in the system, we adopted a greedy algorithm in iSSF, assigning a high priority to emulation timelines. These timelines contain real-time objects: special in-channels and out-channels that are used for connecting to the physical world. Whenever a real-time event is posted and ready to be scheduled for execution on these emulation timelines, the system interrupts the normal event processing session of a non-emulation timeline and makes a quick context switch to load and process the real-time events in the emulation timeline. This priority-based scheduling policy allows the events that carry real-time deadlines to be processed ahead of regular simulation events. Note, however, that since normal simulation events may be on a critical path that affects a real-time event, this method is not an optimal solution. We are currently investigating more efficient scheduling algorithms that can promptly process emulation events as well as events on the critical path, so that the real-time requirement can be satisfied in a resource-constrained situation.
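The greedy priority policy reduces to a simple selection rule: run an emulation timeline whenever one has a ready event, otherwise fall back to an ordinary timeline. A minimal sketch, with data structures of our own invention rather than iSSF's:

```cpp
#include <cassert>
#include <vector>

// Sketch of the greedy priority policy described above: timelines that own
// real-time in/out channels ("emulation timelines") preempt ordinary
// timelines whenever they have an event that is safe to process.
struct Timeline {
    int id;
    bool emulation;      // owns real-time in/out channels
    bool hasReadyEvent;  // has an event safe to process (per the
                         // conservative synchronization protocol)
};

// Returns the index of the timeline to run next, or -1 if none is ready.
int pickNext(const std::vector<Timeline>& tls) {
    int fallback = -1;
    for (int i = 0; i < static_cast<int>(tls.size()); ++i) {
        if (!tls[i].hasReadyEvent) continue;
        if (tls[i].emulation) return i;   // real-time work preempts
        if (fallback < 0) fallback = i;   // remember an ordinary timeline
    }
    return fallback;
}
```

As the paper notes, this is greedy rather than optimal: an ordinary timeline passed over by this rule may hold an event on the critical path of a future real-time event.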

3.2. Latency Absorption

We realize that the real-time demand puts a tight constraint not only on how we process events to reduce the chance of missed deadlines, but also on the connectivity between the simulator and the real applications. For example, consider a scenario in which a path is established between a client machine running the ping application and the machine running the network simulator, as shown in Figure 3. The client machine, which assumes the role of a host in the simulated network (with a virtual IP address 10.5.0.12),


% ping 10.0.1.19
PING 10.0.1.19: 56 data bytes
64 bytes from 10.5.0.12: icmp_seq=0 ttl=64 time=0.54 ms
64 bytes from 10.5.0.12: icmp_seq=1 ttl=64 time=0.28 ms
...

Figure 3: Emulation of a Ping Application. (The figure shows the client machine, 10.5.0.12, linked by physical connections to a reader thread and a writer thread on the simulator machine, which bridge to virtual connections inside the simulated network containing host 10.0.1.19; the ping transcript above appears at the client.)

pings another host at 10.0.1.19. The ping application at the client machine generates a sequence of ICMP ECHO packets targeting 10.0.1.19. These packets are immediately captured by a kernel packet filtering facility [17] and then sent to the machine running the simulator. A reader thread receives these packets and converts them to the corresponding simulation events. The simulator carries out the simulation by first putting the ICMP ECHO packets in the output queue of the simulated host 10.5.0.12. The packets are then forwarded over the simulated network to the designated host 10.0.1.19, which responds with ECHO REPLY packets. Once the packets return to the host 10.5.0.12, the simulator exports the events to a writer thread, which sends them to the client machine running the ping application. The client ping application finally receives the ECHO REPLY packets and prints out the result. Note that the segment of the path between the client application and the simulated host does not exist in the model. The problem is that the latencies of the physical connection can contribute a significant portion of the total round-trip delay. On the forwarding path alone (from the client to the simulator), it may take hundreds of microseconds, even on a high-speed local area network, before the emulation packet is eventually inserted into the simulator’s event-list.1 This can tremendously affect applications that are sensitive to such latencies.

Our solution to this problem is to hide the latencies due to the physical connection inside the simulated network. Since delays are imposed upon network packets transmitted from one router to another in simulation, we can modify the link-layer model to absorb the latencies by sending the packet ahead of its due time. The simulator models the link-layer delay of a packet in two parts: the queuing time, i.e., the time to send all packets that are ahead of the packet in question, and the transmission time, i.e., the time for the packet to occupy the link before it can be successfully delivered, which we model as the sum of the link latency and the transmission delay; the latter is calculated by dividing the packet size by the link’s transmission rate. Assuming that packets are sent in first-in-first-out (FIFO) order, the time required to transmit a packet is known as soon as the packet enters the queue at the link layer. Note that if the FIFO ordering is not observed (e.g., packets are prioritized according to their types), one cannot predict the packet queuing time precisely. Furthermore, if we need a more detailed model of the lower protocol layers, the link state layer may play a significant role in determining the packet transmission time as well. In either case, we can still use a lower bound of the packet delays in our scheme. In the discussion that follows, we assume the delays are precise, for better exposition.

1 The delay includes the time for the sender’s operating system to capture and send the packet, the transmission time of the packet, the time for the reader thread to receive the packet, and the time for the simulator to finally accept the event and insert it into the appropriate event-list.
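The two-part delay decomposition above is simple arithmetic; a sketch, with units of our choosing (bytes, bits per second, seconds):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the two-part link-layer delay model described above. A packet's
// transmission delay is its size divided by the link rate; its total link
// delay is the queuing time plus the link latency plus the transmission
// delay. Units here: bytes, bits per second, seconds.
double transmissionDelay(double packetBytes, double linkBitsPerSec) {
    return packetBytes * 8.0 / linkBitsPerSec;
}

double linkDelay(double queuingTime, double latency,
                 double packetBytes, double linkBitsPerSec) {
    return queuingTime + latency +
           transmissionDelay(packetBytes, linkBitsPerSec);
}
```

For instance, a 1250-byte (10 000-bit) packet on a 1 Mb/s link has a transmission delay of 10 ms, to which the queuing time and link latency are added.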

We use a list to store the packets in the queue together with their precalculated transmission times. Let T_now be the current simulation time and P_0 be the last packet transmitted over the link. T_0 is the simulation time at which P_0 started transmission (T_0 ≤ T_now). Let P_i be the ith packet in the queue, where 0 < i ≤ N and N is the total number of packets currently in the queue. The time to transmit packet P_i is therefore

    T_i = T_0 + Σ_{j=0}^{i−1} (λ + β_j),

where λ is the link latency and β_j is the transmission delay of packet P_j. Suppose that an ICMP ECHO packet is created externally at wall-clock time t_R, and the corresponding simulation packet P_d is injected into the simulator at time t′_R. As a result, the packet carries a virtual time deficit of τ_d = (t′_R − t_R)/R, where R is the proportionality constant that indicates the emulation speed (i.e., the ratio of virtual time to real time). Rather than appending the packet to the end of the queue, we insert the packet right before packet P_k, where

    k = max{ i | i ≥ 0 and τ_d < Σ_{j=i}^{N} (λ + β_j) }.2

After inserting the packet in the queue, we reduce the deficit of the packet by the total transmission times of all packets behind it in the queue: Σ_{j=k}^{N} (λ + β_j). A further improvement can be made to transmit the emulation packets even earlier. When a packet with a deficit becomes the head of the queue, we can simulate the packet transmission in zero simulation time; that is, we can further reduce the deficit by the packet’s transmission time. Note that in iSSF the delay of a link that connects hosts belonging to two separate timelines is used to calculate the lookahead for the conservative parallel synchronization protocol. It is required that the link latency λ for cross-timeline links be larger than zero. In this case, we can only reduce the deficit by as much as the expected packet transmission delay.

2 We do this by scanning the list from the packet at the tail of the list. k = 0 means that the packet is inserted at the front of the list.
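The tail-to-head scan for the insertion point can be sketched as follows. We represent each queued packet by its slot time λ + β_j, and our handling of the leftover deficit (clamping to zero when the queue fully absorbs it, carrying the remainder when even insertion at the front is not enough) is our reading of the scheme rather than the paper's exact code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the deficit-absorbing insertion described above. slot[j] holds
// the slot time lambda + beta_j (link latency plus transmission delay) of
// the j-th queued packet. We scan from the tail and return the index k
// before which the new packet is inserted: the largest k such that the
// deficit is still smaller than the total slot time of the packets behind
// position k. k == 0 inserts at the front; any deficit the queue cannot
// absorb is returned through `carried` for the next hop (clamping at zero
// is our interpretation).
int insertionIndex(const std::vector<double>& slot, double deficit,
                   double* carried) {
    double behind = 0.0;  // total slot time of packets behind position k
    for (int k = static_cast<int>(slot.size()); k > 0; --k) {
        if (deficit < behind) {  // packets k..N-1 already cover the deficit
            *carried = 0.0;
            return k;
        }
        behind += slot[k - 1];
    }
    // Even the whole queue cannot absorb the deficit: insert at the front
    // and carry the remainder to the next hop.
    *carried = (deficit > behind) ? deficit - behind : 0.0;
    return 0;
}
```

With slot times {0.3, 0.2, 0.1}, a small deficit of 0.05 jumps ahead of only the last packet, a deficit of 0.25 jumps ahead of the last two, and a deficit of 2.0 goes to the front with 1.4 carried to the next hop.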

It is reasonable to insert an event with a time deficit ahead of others in the queue; after all, were the physical connection latencies not present, the event would have entered the queue much earlier. However, in cases where the deficit is larger than the sum of the transmission times of all packets in the queue (so that the packet is inserted at the head of the queue), we can only allow the packet to continue carrying the remainder of the deficit to the next hop, and therefore preempt events at the next hop. The process continues until the deficit is reduced to zero, or the packet reaches its destination. Since we do not “unsend” packets that were sent before the emulation packet with the deficit arrives, this scheme is only an approximation once the deficit is carried to the next hop.

Another issue concerns accommodating the physical connection latencies on the reverse path (from the simulator to the client application). A simple solution is to assume such latencies on the reverse path to be the same as on the forward path, and use a deficit of the same amount for all packets traveling in that direction. The problem with this approach is that the simulated network always tries to make up for the deficit within the first few hops, while in fact such a deficit is expected at the last segment of the path from the simulator to the application client. This means the interactions between the packets with deficits and other packets in simulation do not represent reality. We expect that, since in large-network simulations there are far fewer emulation packets than simulation packets, the effect of such a distortion may not be significant at all. We plan to quantify this effect in our future work.

4. Traffic, Attacks, and Routing

iSSFNet includes several novel techniques for modeling traffic, network attacks, and routing of traffic flows. A key technique employed in iSSFNet to make real-time simulation of large networks feasible is multi-resolution representation of traffic, whereby the level of detail with which a traffic flow is simulated depends on how interested we are in the detailed dynamics of the flow. Traffic that is "in focus" (foreground traffic) is simulated with high fidelity at packet-level detail. Traffic that represents other things going on in the network (i.e., background traffic) is abstracted using fluid modeling, either with fine-grained per-flow models or with coarse time-scale periodic fixed-point solutions.

Fluid modeling [11, 15] is also being explored in other network simulators, such as MAYA [31], IP-TN [12], and HDCF-NS/pdns [24]. The models used in iSSFNet are based on our previous work to develop discrete-event fluid modeling of TCP and hybrid traffic interaction models such that the packet and fluid representations can coexist in the same simulation [21]. Recent work has addressed coarser models using fixed-point solution techniques [22] for flow competition through a network, permitting several orders of magnitude speedup [19] and thus making it possible to represent larger networks and more flows in real time.

Attack models in RINSE focus on assets at the network-resource level, i.e., things like network bandwidth, control over hosts, or computational or memory resources in hosts. Current attack models include DoS attacks, worms, and similar large-scale attacks, typically involving large numbers of hosts and high-intensity traffic flows. We are thus generally interested only in the coarse behavior of the attack traffic (a large volume of traffic) rather than the detailed traffic dynamics. Consequently, we leverage the coarser multi-resolution traffic models for efficient attack models, including zombie hosts emitting fluid DoS flows and fixed-point solutions of worm scan traffic intensities based on our previous work in multi-resolution worm modeling [14].

Memory and computational demands for routing of traffic have been identified as significant obstacles for large-scale network simulation and emulation. Some studies [23, 9, 2] start from the premise of shortest-path routes and try to reduce computational and representational complexity through spanning-tree approximations [9, 2] or lazy evaluation [23]. Others have achieved memory reductions in detailed protocol models, such as BGP (policy-based routing), through implementation improvements [8, 4].

In iSSFNet we have developed a method for on-demand (lazy) computation of policy-based routes, as computed by BGP [13]. For efficiency reasons, and to ensure that traffic (attack traffic in particular) can address and reach a destination network even if the destination is missing, we need hierarchical addressing. Hence, our routing model is currently being extended to handle route aggregation. We are thus able to preload partial (precomputed) forwarding tables based on a priori known traffic patterns in the model, such as scripted background traffic, and compute routes for other flows as needed.
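The combination of preloaded partial tables and on-demand computation can be sketched as below. This is a minimal illustration of the lazy-evaluation idea only: the class, the graph representation, and the shortest-path stand-in are assumptions, whereas iSSFNet actually computes BGP policy-based routes [13]:

```python
# Hypothetical sketch: route lookups fall back to on-demand computation,
# and computed routes are cached alongside preloaded entries.
import heapq

class LazyRouter:
    def __init__(self, graph, preloaded=None):
        self.graph = graph                  # node -> {neighbor: link cost}
        self.table = dict(preloaded or {})  # (src, dst) -> next hop, known a priori

    def next_hop(self, src, dst):
        key = (src, dst)
        if key not in self.table:           # compute the route only on first demand
            self.table[key] = self._compute(src, dst)
        return self.table[key]

    def _compute(self, src, dst):
        # Stand-in policy: Dijkstra shortest path (the real model is BGP policy).
        dist, prev = {src: 0.0}, {}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, c in self.graph.get(u, {}).items():
                if d + c < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + c, u
                    heapq.heappush(pq, (d + c, v))
        if dst not in prev:
            return None                     # unreachable destination
        node = dst                          # walk back to src's neighbor on the path
        while prev[node] != src:
            node = prev[node]
        return node
```

Preloaded entries model the "a priori known traffic patterns" case; everything else is computed, and paid for, only when a flow actually needs it.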

5. Modeling Device Resources

Earlier exercises of the type RINSE is targeting indicated the need to model not only limited network resources, like bandwidth, but also some aspects of constraints on computational resources in hosts and routers, partly because these resources may themselves be targeted for denial-of-service, but also to preclude unrealistic defensive strategies. For instance, zero-cost packet filtering allows unrealistically large numbers of filters. Consequently, we need "light-weight" models of computational resources (CPU) and memory in RINSE, a problem that has not received much attention in network simulation to date, since simple models have generally sufficed. For instance, a uniformly distributed compute delay has been used in studies of simulations of BGP routing [7], or a simple fixed cryptographic cost for S-BGP processing [20]. The sensor networking community, being very conscious of the constraints imposed by tiny sensors, is particularly interested in modeling the power consumption of different components, including the CPU [26].

5.1. CPU Model in RINSE

In RINSE a fair amount of detail is necessary, and we identified the following requirements for our CPU model:

• Interference between different CPU-intensive tasks.

• Traffic delay could result from high CPU load, in particular during abnormal (attack) conditions.

• Possibility of packet loss due to sustained high load.

• Observable CPU load: the user should be able to monitor CPU load to diagnose the system.

• Light weight: we must strive for the simplest possible models that can at least approximately represent the desired effects.

Thus, we require more behavioral detail than many other applications do, to be able to capture, at least coarsely, interactions between different tasks and traffic flows in terms of processing. This results in significant implementation hurdles, as will be described, and the situation is further complicated by the fact that the multi-resolution representation of traffic necessitates a multi-resolution representation of computational workload (i.e., hybrid discrete and fluid representations).

Interference: to observe interference between different tasks, we need to model how processing cycles are allocated. The generic UNIX process scheduling mechanism³ [28] is based on priority scheduling, where process priorities are continuously recomputed to try to achieve good responsiveness and latency hiding for I/O-bound tasks.

We do not want to get into the details of the scheduling mechanism, but we do want to be able to observe competition for resources. Within the CPU, a set of tasks is defined, where a task can be thought of as a process or thread. For instance, these could be application-layer processes like web clients/servers, a database server, or lower-layer functionality like a firewall process doing packet filtering on incoming packets. Figure 4 illustrates how each task services the work it has to do in FCFS order, while cycles are allocated among tasks using processor sharing. In this first model we simplify the problem by assuming that the tasks we consider have roughly the same priority (same range), so that

³ It varies somewhat between different flavors. Linux has a slightly different mechanism, but for the purposes of this discussion it is essentially the same.

Figure 4: Processing work model where work is handled FCFS by tasks that are allocated "cycles" on the CPU.

they are treated equally. The requests (incoming traffic) to each task may be a mixture of packets and fluid traffic flows. As in the hybrid packet/fluid traffic model in [21], we form a hybrid queue by fluidizing the packet load through estimating the packet rate. However, the service model interleaving the tasks actually makes things even more complicated here than in most hybrid traffic models, since service is not FIFO. Assume there are N tasks. Let λ^f_i(t) be the incoming fluid workload rate for task i (in cycles per second) at time t, and let μ be the CPU service rate (i.e., its speed). A packet has an associated workload w_t, in cycles. By estimating the packet arrival rate over a time window [t′, t], we get the estimated packet workload rate λ^p_i(t). Let the total arriving workload for task i be λ_i(t) = λ^f_i(t) + λ^p_i(t). We need to allocate a service rate μ_i(t) to each task, determine the backlog β_i(t), and possibly the lost work ξ_i(t). A discrete workload arrival (packet workload) at t is always added to the backlog on arrival: β_i(t) ← β_i(t) + w_t. Note, however, that if no discrete arrivals preceded it in [t′, t], then λ^p_i(t) = 0. We consider two cases:

Non-overload: the total incoming workload rate over all tasks is less than or equal to the workload service rate the CPU can handle, i.e., Σ_i λ^f_i(t) + λ^p_i(t) ≤ μ. In this case each task is first assigned the fluid service rate it requires, μ_i(t) = λ^f_i(t) + λ^p_i(t). Tasks that have any backlog (β_i(t) > 0), and this applies to any tasks processing packets, are marked as greedy. Let g be the number of greedy tasks. Any left-over cycles γ(t) = μ − Σ_i μ_i(t) are allocated equally to the greedy tasks: μ_i(t) ← μ_i(t) + γ(t)/g. This ensures that the backlog gets drained as quickly as possible, and thus packets are processed as quickly as possible. Consequently, fluid workload results in a processor utilization proportional to the incoming rate, while discrete workload results in bursts of full utilization.

Overload: the sum of the incoming fluid workload rates and the averaged packet workload rates is greater than the service rate of the CPU; that is, there is a sustained overload condition. In this case the tasks are denied cycles in proportion to their fraction of the total workload, and what cannot be handled accumulates as backlog:

μ_i(t) = λ_i(t) · μ / Σ_i λ_i(t)    (1)

An arriving discrete workload (packet) that does not yet have an average rate estimate poses a problem in this case. It is given μ_i(t) = 1 (full utilization) without affecting other flows. This is unrealistic in that the total CPU service rate is now briefly more than μ, but it is a reasonable approximation for occasional packets. If the packet is the first in a series with a high average workload rate, then the service rates will be corrected the moment the first arrival rate estimate is calculated.
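The two allocation cases can be sketched as a single function. The function and its flat-list inputs are illustrative rather than RINSE code; the symbols follow the definitions above:

```python
# Sketch of the two-case service-rate allocation (cf. Equation 1).

def allocate(mu, lam, backlog):
    """mu: CPU service rate (cycles/s); lam[i]: total arriving workload rate
    lambda_i(t) of task i; backlog[i]: current backlog beta_i(t).
    Returns the service rate mu_i(t) allocated to each task."""
    total = sum(lam)
    if total <= mu:                         # non-overload case
        rates = list(lam)                   # each task gets the rate it requires
        greedy = [i for i, b in enumerate(backlog) if b > 0]
        if greedy:                          # left-over cycles drain backlogs
            gamma = mu - total
            for i in greedy:
                rates[i] += gamma / len(greedy)
        return rates
    # Overload: tasks are denied cycles in proportion to their load share.
    return [l * mu / total for l in lam]
```

In the non-overload case a backlogged task bursts above its arrival rate (toward full utilization); in the overload case the allocations always sum to μ.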

When a task is defined, a buffer space size b_i can be assigned to it to limit the backlog and introduce the possibility of loss of work if the task cannot keep up. Packets occupy buffer space according to their size until serviced. Fluid flows are assumed to have a simple linear relationship between the workload rate (cycles/second) and the memory used for backlog (bytes/second). Modeling loss in hybrid queues is a delicate matter, as pointed out in [21]. If a discrete workload (packet) arrives at a backlogged task queue such that there is not enough space to fit it in the buffer, we consider the state of the queue. If the queue is draining, the average arrival rate is less than the service rate, and we assume that the packet will fit (replacing fluid buffer space with the packet). If the queue is filling, we give the packet a probability of fitting into the queue equal to its proportion of the total task load, p = λ^p_i(t)/(λ^f_i(t) + λ^p_i(t)).

In an overload condition tasks become coupled through

competition for CPU and through the traffic flow, with fluid loads possibly leading to a cyclic dependency of traffic and CPU work; an unexpected complication. Figure 5 illustrates how we consider the cost of filtering and traffic forwarding in a firewall router; if the CPU gets overloaded, it needs to report back to the protocol layers so that they can reduce the traffic rate emitted. However, since the traffic flow passes first through filtering (A) and then forwarding (B), there is a feedback loop in terms of rate adjustments. When B changes its load on the CPU, it must update the serviced load for A. A must then update the traffic rate emitted to B, which must then perform another load update to the CPU. For n tandem tasks, where work is proportional to flow rate, the principle of proportional loss (Equation 1) limits the feedback. Consider the i:th task. Let f_i be the inflow, λ_i = k_i · f_i be the (offered) workload, and c^n_i be the cycles allocated to task i. Initially, flow rate f_1 is sent through all tasks, so Equation 1 implies we allocate cycles as c^n_i = k_i / Σ^n_{j=1} k_j. Tandem dependencies mean

Figure 5: CPU control feedback on tasks and fluid flows.

that f_i = (c^n_{i−1}/λ_{i−1}) · f_{i−1}, and thus

λ_i = k_i (c^n_{i−1}/λ_{i−1}) f_{i−1} = (k_i · k_{i−1}f_{i−1} · f_{i−1}) / (Σ^n_{j=1} k_j f_{i−1} · k_{i−1}f_{i−1}) = k_i / Σ^n_{j=1} k_j

That is, the required cycles λ_i to handle the adjusted inflow f_i equal the fraction of cycles assigned, c^n_i, so the allocation stabilizes immediately. But completely avoiding this feedback loop does not appear possible, so we rate-limit the feedback from the CPU to the protocol layers. Through this rate limiting, we mimic the control delay imposed by the scheduling mechanism and bound the computational costs in the model.

Traffic delay: one difficult issue was how to implement delays within the protocol stack without incurring significant overhead and code complexity. iSSFNet uses a protocol model inspired by the x-kernel design [10], where protocol sessions have a well-defined common interface through which they can be plugged together: the push and pop methods. For maximum efficiency, the programming patterns used in the protocol stack are based on event orientation through timer objects and continuations. Rather than switching to process orientation to support arbitrary suspension points for packet processing delays, we opted to limit the possible suspension points to the push/pop entry interfaces (the socket API for the application layer). Thus, multiple delays on a packet within one protocol session are merged into one delay that is not incurred until the point where the packet enters the next protocol session. The push/pop APIs are good candidate suspension points because the state of processing of a packet (or a fluid flow) is passed in the packet itself, along with a small number of additional parameters. Hence, we can safely assume that there are no additional state variables earlier in the execution stack that need saving, and upon return we continue processing from the push or pop call without reconstructing the process stack. Other data structures in the protocol sessions, such as queues of packets that have been delayed pending some condition, evolve over time and thus do not require saving.
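The merged-delay pattern can be sketched as follows. The tiny event loop, class names, and free functions are illustrative assumptions, not the iSSFNet framework: delays requested inside a session only accumulate in the packet, and are realized by scheduling a continuation at the next push/pop boundary:

```python
# Hypothetical sketch of delay accumulation with a continuation at a
# push/pop suspension point.
import heapq
import itertools

class Packet:
    def __init__(self):
        self.accum_delay = 0.0   # delays merged within the current session

class Simulator:
    def __init__(self):
        self.now, self._events, self._seq = 0.0, [], itertools.count()

    def schedule(self, delay, fn):
        heapq.heappush(self._events, (self.now + delay, next(self._seq), fn))

    def run(self):
        while self._events:
            self.now, _, fn = heapq.heappop(self._events)
            fn()

def delay(pkt, dt):
    pkt.accum_delay += dt        # record the delay, but do not suspend here

def push(sim, pkt, next_session):
    d, pkt.accum_delay = pkt.accum_delay, 0.0
    if d > 0:                    # suspension point: incur the merged delay now
        sim.schedule(d, lambda: next_session(pkt))
    else:
        next_session(pkt)
```

Two calls to delay() inside one session thus cost a single scheduled event at the session boundary, which is the event-aggregation benefit described above.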

The accumulated delay for a packet within a protocol session is stored in the packet and is thus detected as the packet reaches the next push/pop suspension point. Suspension points can be enabled or disabled through the DML configuration, the idea being to make it easy to aggregate delays, and thus aggregate events, by having fewer enabled suspension points. Displacing the suspension point from the point in the code where the delay should take place alters the causal ordering of state modifications in the model, i.e., the interleaving of updates in simulation time will be slightly altered. We believe this will not be a significant issue for the protocols under consideration here, but more experience with the model will be needed to bear this out.

Figure 6: Example: scp transfer (percent CPU utilization over time; measured vs. simulated discrete and fluid loads).

5.2. Example

We illustrate the CPU model through a very simple example. In an experiment, a 41.6 MB file was downloaded from a Linux laptop (acting as the server) using scp (secure copy). The CPU load on the data source (server) was monitored using vmstat. Figure 6 shows the CPU utilization during the transfer as "measured". This scenario was modeled in iSSFNet using its packet-level TCP model. A client host is connected directly to a server host through a 100 Mb/s link. When modeling the CPU load, we have two choices: use a fluid representation of the load on the CPU, or use discrete chunks of work. The fluid representation is simple to use and has very low simulation cost. The drawback is that it is coarse and will not impose any delay on the packets. Discrete work is more expensive to simulate, but is more fine-grained and delays packets.

Using fluid work, we simply call a setFluid method on the CPU as the transfer starts, to set the instruction rate during the transfer (we simply match the observed utilization). When the transfer is completed, the instruction rate is set back to zero. The result, shown as "fluid load", indicates a shorter transfer time than what was measured. Alternatively, we can use discrete workloads. Examining the OpenSSH scp implementation indicates that it transfers data through a 2 KB buffer, so we write data to the socket in 2 KB blocks and impose a compute delay on each block for data transfer and encryption. The computation cost is registered through a call to cpu.use(...) with the number of instructions used and a pointer to the socket being used. The socket send() code hides a call to cpu.delay(...), causing the socket processing to be suspended and delayed. We also use a timer to add a small idle delay between each block to model latencies. After tuning these delays, the result, shown as "discrete load", can be made to match reality fairly well.

There is a significant difference in simulation cost between these two approaches. Using fluid CPU load, no extra events are added by the CPU model, but with the discrete workload model each block requires a resource departure event and results in an event for drained backlog. Thus, the total event count increases by a factor of about 2.4 and the execution time by a factor of 4. It is up to the modeler to determine when the additional cost is justified.

Aside from approximations arising from implementation decisions, the current CPU resource model embodies many simplifications. The principle of proportional loss is frequently used for fluid traffic and alleviates the allocation feedback issue mentioned previously. But we see the need for more emphasis on distinguishing task priorities, to better mimic prioritization of processes and threads. For instance, kernel-level processes should be largely insulated from demands at the user level. We are looking into new allocation policies that can prioritize demands.

6. Summary and Future Work

RINSE incorporates recent work on i) real-time interaction/emulation support, ii) multi-resolution traffic modeling, iii) efficient attack models, iv) efficient routing simulation, and v) CPU/memory resource models, to target large-scale preparedness and training exercises. Described here were efficient CPU/memory models necessary for the exercise scenarios and a latency absorption technique that will help when extending the range of client tools usable by the players.

Aside from model refinements, our ongoing and future work includes more fundamental issues such as supporting fault tolerance and efficient real-time scheduling of compute-intensive tasks like background traffic calculations and major routing changes. For example, we would like our simulation framework to permit certain background tasks, such as background traffic calculations, to be adaptively scheduled based on higher-priority load.

Acknowledgements. This research was supported in part by DARPA Contract N66001-96-C-8530, NSF Grant CCR-0209144, and DHS Office for Domestic Preparedness award 2000-DT-CX-K001. Thus, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce this contribution. Points of view expressed are those of the authors and do not necessarily represent the official position of the U.S. Department of Homeland Security.

References

[1] T. Bridis. Gov't simulates terrorist cyberattack. Associated Press, http://www.zone-h.org/en/news/read/id=3728, November 2003.

[2] J. Chen, D. Gupta, K. Vishwanath, A. Snoeren, and A. Vahdat. Routing in an Internet-scale network emulator. In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), October 2004.

[3] J. Cowie, D. Nicol, and A. Ogielski. Modeling the global Internet. Computing in Science and Engineering, 1(1):42–50, January 1999.

[4] X. Dimitropoulos and G. Riley. Large-scale simulation models of BGP. In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), October 2004.

[5] K. Fall. Network emulation in the Vint/NS simulator. In 4th IEEE Symposium on Computers and Communications (ISCC'99), pages 244–250, July 1999.

[6] R. Fujimoto, K. Perumalla, A. Park, H. Wu, M. Ammar, and G. Riley. Large-scale network simulation: how big? how fast? In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), October 2003.

[7] T. Griffin and B. Premore. An experimental analysis of BGP convergence time. In 9th International Conference on Network Protocols (ICNP), November 2001.

[8] F. Hao and P. Koppol. An Internet scale simulation setup for BGP. ACM SIGCOMM Computer Communication Review, 33(3):43–57, July 2003.

[9] P. Huang and J. Heidemann. Minimizing routing state for light-weight network simulation. In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), August 2001.

[10] N. C. Hutchinson and L. L. Peterson. The x-kernel: An architecture for implementing network protocols. IEEE Transactions on Software Engineering, 17(1):64–76, January 1991.

[11] G. Kesidis, A. Singh, D. Cheung, and W. W. Kwok. Feasibility of fluid-driven simulation for ATM networks. In Proceedings of IEEE Globecom, November 1996.

[12] C. Kiddle, R. Simmonds, C. Williamson, and B. Unger. Hybrid packet/fluid flow network simulation. In 17th Workshop on Parallel and Distributed Simulation, June 2003.

[13] M. Liljenstam and D. Nicol. On-demand computation of policy based routes for large-scale network simulation. In 2004 Winter Simulation Conference (WSC), December 2004.

[14] M. Liljenstam, D. Nicol, V. Berk, and R. Gray. Simulating realistic network worm traffic for worm warning system design and testing. In 2003 ACM Workshop on Rapid Malcode (WORM), October 2003.

[15] B. Liu, D. R. Figueiredo, Y. Guo, J. Kurose, and D. Towsley. A study of networks simulation efficiency: Fluid simulation vs. packet-level simulation. In IEEE Infocom, Anchorage, Alaska, April 2001.

[16] X. Liu, H. Xia, and A. Chien. Network emulation tools for modeling grid behavior. In 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid'03), May 2003.

[17] S. McCanne and V. Jacobson. The BSD packet filter: a new architecture for user-level packet capture. In Winter USENIX Conference, pages 259–269, January 1993.

[18] D. Nicol and J. Liu. Composite synchronization in parallel discrete-event simulation. IEEE Transactions on Parallel and Distributed Systems, 13(5):433–446, May 2002.

[19] D. Nicol, J. Liu, M. Liljenstam, and G. Yan. Simulation of large-scale networks using SSF. In Winter Simulation Conference (WSC), December 2003.

[20] D. Nicol, S. Smith, and M. Zhao. Evaluation of efficient security for BGP route announcements using parallel simulation. Simulation Practice and Theory Journal, special issue on Modeling and Simulation of Distributed Systems and Networks, June 2004.

[21] D. Nicol and G. Yan. Discrete event fluid modeling of background TCP traffic. ACM Transactions on Modeling and Computer Simulation, 14:1–39, July 2004.

[22] D. Nicol and G. Yan. Simulation of network traffic at coarse time-scales. In Workshop on Principles of Advanced and Distributed Simulation (PADS), 2005.

[23] G. Riley, M. Ammar, and R. Fujimoto. Stateless routing in network simulations. In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), May 2000.

[24] G. Riley, T. Jaafar, and R. Fujimoto. Integrated fluid and packet network simulations. In Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), October 2002.

[25] L. Rizzo. Dummynet: a simple approach to the evaluation of network protocols. ACM SIGCOMM Computer Communication Review, 27(1):31–41, January 1997.

[26] A. Savvides, S. Park, and M. B. Srivastava. On modeling networks of wireless micro sensors. In SIGMETRICS, June 2001.

[27] R. Simmonds and B. Unger. Towards scalable network emulation. Computer Communications, 26(3):264–277, February 2003.

[28] A. Tanenbaum. Modern Operating Systems, 2nd ed. Prentice Hall, Upper Saddle River, NJ, 2001.

[29] A. Vahdat, K. Yocum, K. Walsh, P. Mahadevan, D. Kostic, J. Chase, and D. Becker. Scalability and accuracy in a large scale network emulator. In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI'02), December 2002.

[30] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar. An integrated experimental environment for distributed systems and networks. In Fifth Symposium on Operating Systems Design and Implementation, pages 255–270, December 2002.

[31] J. Zhou, Z. Ji, M. Takai, and R. Bagrodia. MAYA: integrating hybrid network modeling to the physical world. ACM Transactions on Modeling and Computer Simulation (TOMACS), 14(2):149–169, April 2004.