Testing IDS
• Despite the enormous investment in IDS technology, no comprehensive and scientifically rigorous methodology is available for testing IDS.
• Quantitative IDS performance measurements are essential in order to compare different systems.
Testing IDS
• Quantitative results are needed by:
– Acquisition managers – to improve the process of system selection.
– Security analysts – to know the likelihood that the alerts produced by an IDS are caused by real attacks in progress.
– Researchers and developers – to understand the strengths and weaknesses of IDS, in order to focus research efforts on improving systems and on measuring their progress.
Testing IDS
• Quantitatively measurable IDS characteristics:
– Coverage
– Probability of false alarms
– Probability of detection
– Resistance to attacks directed at the IDS
– Ability to handle high-bandwidth traffic
– Ability to correlate events
– Ability to detect new attacks
– Ability to identify an attack
– Ability to determine attack success
– Capacity verification (NIDS).
Testing IDS
• Coverage
– Determines which attacks an IDS can detect under ideal conditions.
– For misuse (signature-based) systems: counting the number of signatures and mapping them to a standard naming scheme.
– For anomaly detection systems: determining which attacks, out of the set of all known attacks, could be detected by a particular methodology.
Testing IDS
• Coverage (cont.)
– The problem with determining the coverage of an IDS lies in the fact that different researchers characterize attacks by different numbers of parameters.
– These characterizations may take into account the particular goal of the attack (DoS, penetration, scanning, etc.), the software, protocol and/or OS against which it is targeted, the victim type, the data to be collected in order to obtain evidence of the attack, the use or not of IDS evasion techniques, etc.
– Combinations of these parameters are also possible.
Testing IDS
• Coverage (cont.)
– The consequence of these differences is that some attack definitions have coarse granularity and others finer granularity.
– Because of this disparity in granularity, it is difficult to determine the attack coverage of an IDS precisely.
– CVE is an attempt to alleviate this problem.
– However, the CVE approach does not work either if multiple attacks exploit the same vulnerability using different approaches (for example, to evade IDS).
Testing IDS
• Coverage (cont.)
– Determining the importance of different attack types is also a problem when determining coverage.
– Different environments may assign different costs and importance to detecting different types of attacks.
– Examples:
• An e-commerce site may not be interested in surveillance attacks, but may be very interested in detecting DDoS attacks.
• A military site may be especially interested in detecting surveillance attacks, in order to prevent more serious attacks by acting in their early phases.
– Another problem with coverage is determining which attacks to cover with regard to system updates.
Testing IDS
• Coverage (cont.)
– Example: it is pointless to test IDS coverage of attacks against a defended system to which countermeasures against those attacks have already been applied (patching, hardening, etc.).
Testing IDS
• Probability of false alarms
– Suppose that we have N IDS decisions, of which:
• In TP cases: intrusion – alarm.
• In TN cases: no intrusion – no alarm.
• In FP cases: no intrusion – alarm.
• In FN cases: intrusion – no alarm.
– Total intrusions: TP + FN
– Total non-intrusions: FP + TN
– N = TP + FN + FP + TN
– Base rate – the probability of an attack: P(I) = (TP + FN) / N
Testing IDS
• Probability of false alarms (cont.)
– Events: alarm A, intrusion I.
– The following rates are defined:
• True positive rate: TPR = TP / (TP + FN) = P(A|I)
• True negative rate: TNR = TN / (FP + TN) = P(¬A|¬I)
• False positive rate: FPR = FP / (FP + TN) = P(A|¬I)
• False negative rate: FNR = FN / (TP + FN) = P(¬A|I)
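A minimal Python sketch of how these rates and the base rate are computed from raw decision counts; the counts themselves are illustrative placeholders, not measured values:

```python
# Minimal sketch: computing the basic IDS rates from decision counts.
# The counts below are illustrative placeholders, not measured values.
TP, TN, FP, FN = 90, 9880, 20, 10   # hypothetical test outcome
N = TP + TN + FP + FN

TPR = TP / (TP + FN)        # P(A | I)   - hit rate
TNR = TN / (FP + TN)        # P(¬A | ¬I)
FPR = FP / (FP + TN)        # P(A | ¬I)  - false alarm rate
FNR = FN / (TP + FN)        # P(¬A | I)
base_rate = (TP + FN) / N   # P(I) - probability of an attack

print(f"TPR={TPR:.3f}  FPR={FPR:.5f}  base rate={base_rate:.5f}")
```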
Testing IDS
• Probability of false alarms (cont.)
– This measure determines the rate of false positives produced by an IDS in a given environment during a particular time frame.
– Typical causes of false positives:
• Weak signatures (alerting on all traffic to a specific port, searching for the occurrence of a common word such as "help" in the first 100 bytes of SNMP or other TCP connections, alerting on common violations of the TCP protocol, etc.).
• Normal network monitoring and maintenance traffic.
Testing IDS
• Probability of false alarms (cont.)
– Difficulties in measuring the false alarm rate:
• An IDS may have a different false positive rate in different network environments, and a "standard network" does not exist.
• It is difficult to determine which aspects of network traffic or host activity will cause false alarms.
• Consequence: it is difficult to guarantee that a test network will produce the same number and type of false alarms as a real network.
• An IDS can be configured in many ways, and it is difficult to determine which configuration should be used for a particular false positive test.
Testing IDS
• Probability of detection
– This measurement determines the rate of attacks detected correctly by an IDS in a given environment during a particular time frame.
– Difficulties in measuring the probability of detection:
• The success of an IDS largely depends on the set of attacks used during the test.
• The probability of detection varies with the false positive rate – the same IDS configuration must be used when testing for false positives and for hit rates.
Testing IDS
• Probability of detection (cont.)
– Difficulties in measuring the probability of detection (cont.):
• A NIDS can be evaded by using stealthy versions of attacks (fragmenting packets, using data encoding, using unusual TCP flags, encrypting attack packets, spreading attacks over multiple network sessions, launching attacks from multiple sources, etc.).
• This reduces the probability of detection, even though the same attack would be detected if no stealthy version were applied.
Testing IDS
• Resistance to attacks directed at the IDS
– This measurement demonstrates how resistant an IDS is to an attacker's attempt to disrupt its correct operation.
– Some typical attacks against an IDS:
• Sending a large amount of non-attack traffic whose volume exceeds the IDS processing capability – this causes the IDS to drop packets.
• Sending non-attack packets that are specially crafted to trigger many signatures within the IDS – the human operator is overwhelmed with false positives, or an automated analysis tool crashes.
Testing IDS
• Resistance to attacks directed at the IDS (cont.)
– Some typical attacks against an IDS (cont.):
• Sending a large number of attack packets intended to distract the human operator, while the attacker launches a real attack hidden among these "false attacks".
• Sending packets containing data that exploit a vulnerability within the IDS processing algorithms themselves. Such vulnerabilities may be a consequence of coding errors.
Testing IDS
• Ability to handle high-bandwidth traffic
– This measurement demonstrates how well an IDS functions when presented with a large volume of traffic.
– Most NIDS start to drop packets as the traffic volume increases, which produces false negatives.
– Above a certain threshold, most IDS stop detecting any attacks.
Testing IDS
• Ability to correlate events
– This measurement demonstrates how well an IDS correlates attack events.
– These events may be gathered from IDS, routers, firewalls, application logs, etc.
– One of the primary goals of event correlation is to identify penetration attacks.
– Currently, IDS have limited capabilities in this area.
Testing IDS
• Ability to detect new attacks
– This measurement demonstrates how well an IDS can detect attacks that have not occurred before.
– Purely signature-based systems will score 0 here.
– Anomaly-based systems may be suitable for this type of measurement; however, they generally produce more false alarms than signature-based systems.
Testing IDS
• Ability to identify an attack
– This measurement demonstrates how well an IDS can identify the attack that it has detected.
– Each attack should be labelled with a common name or vulnerability name, or by assigning the attack to a category.
Testing IDS
• Ability to determine attack success
– This measurement demonstrates whether the IDS can determine the success of attacks from remote sites that give the attacker higher-level privileges on the attacked system.
– Many remote privilege-gaining attacks (probes) fail and do not damage the attacked system.
– Many IDS do not distinguish between unsuccessful and successful attacks.
– For the same attack, some IDS can detect the evidence of damage, while others detect only the signature of the attack actions.
– The ability to determine attack success is essential for attack correlation and attack scenario analysis.
– Measuring this capability requires information about both successful and unsuccessful attacks.
Testing IDS
• Capacity verification for NIDS
– NIDS demand higher-level protocol awareness than other network devices (switches, routers, etc.).
– NIDS inspect network packets more deeply than other devices do.
– Therefore, it is important to measure the ability of a NIDS to capture, process, and perform at the same level of accuracy under a given network load as it does on a quiescent network.
Testing IDS
• Capacity verification for NIDS (cont.)
– A standardized capacity benchmarking methodology for NIDS exists (e.g. Cisco has its own methodology).
– NIDS customers can use the standardized capacity test results for each metric, together with a profile of their own networks, to determine whether the NIDS is capable of inspecting their traffic.
Challenges of IDS testing
• The following problems (at least) make IDS testing a challenging task:
– Collecting attack scripts and victim software is difficult.
– The requirements for testing signature-based and anomaly-based IDS are different.
– The requirements for testing host-based and network-based IDS are different.
– The use of background traffic in IDS testing is not standardized.
Challenges of IDS testing
• Collecting attack scripts and victim software
– It is difficult and expensive to collect a large number of attack scripts.
– Attack scripts are available in various repositories, but it takes time to find the scripts relevant to a particular testing environment.
– Once an adequate script is identified, it takes approximately one person-week to review the code, test the exploit, determine where the attack leaves evidence, automate the attack, and integrate it into a testing environment.
Challenges of IDS testing
• Different requirements for testing signature-based and anomaly-based IDS
– Most commercial systems are signature-based.
– Many research systems are anomaly-based.
– An ideal IDS testing methodology would be applicable to both signature-based and anomaly-based systems.
– This is important because research anomaly-based systems should be comparable with commercial signature-based systems.
Challenges of IDS testing
• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– The problems with creating a single test to cover both types of systems:
• Anomaly-based systems with learning require normal traffic for training that does not include attacks.
• Anomaly-based systems with learning may learn the behaviour of the testing methodology and perform well without detecting real attacks at all.
• This may happen when all the attacks in a test are launched from a particular user, IP address, subnet, or MAC address.
• Anomaly-based systems with learning can also learn subtle characteristics that are difficult to predetermine (packet window size, ports, typing speed, command set used, TCP flags, connection duration, etc.) and thus artificially perform well in the test environment.
• The performance of a signature-based system in a test will, to a large degree, depend on the set of attacks used in the test.
• The decision about which attacks to include in a test may therefore favour a particular IDS – it is not objective.
Challenges of IDS testing
• Different requirements for testing host-based and network-based IDS
– Testing host-based IDS presents some difficulties not present when testing network-based IDS:
• A network-based IDS can be tested off-line by creating a log file containing TCP traffic and replaying that traffic to the IDS – this is convenient, because there is no need to test all the IDS at the same time.
• Repeatability of the test is easy to achieve.
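To illustrate the off-line replay idea, here is a minimal Python sketch using the Scapy library; the capture file name and network interface are illustrative placeholders, and in practice dedicated replay tools such as tcpreplay are commonly used instead:

```python
# Minimal off-line replay sketch using Scapy (pip install scapy).
# "background_with_attacks.pcap" and "eth0" are illustrative placeholders.
from scapy.all import rdpcap, sendp

# Recorded background traffic with injected attacks.
packets = rdpcap("background_with_attacks.pcap")

# Replay the recorded frames onto the interface monitored by the NIDS under test,
# so the same traffic can be presented to different IDS at different times.
sendp(packets, iface="eth0", verbose=False)
```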
Challenges of IDS testing
• Different requirements for testing host-based and network-based IDS (cont.)
– Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.):
• Host-based IDS use a variety of system inputs in order to determine whether or not a system is under attack.
• This set of inputs is not the same for all IDS.
• Host-based IDS monitor a host, not a single data feed.
• It is therefore difficult to replay activity from log files.
Challenges of IDS testing
• Different requirements for testing host-based and network-based IDS (cont.)
– Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.):
• Since it is difficult to test a host-based IDS off-line, an on-line test should be performed.
• Consequence: problems with repeatability.
Challenges of IDS testing
• Using background traffic in IDS testing
– Four approaches:
• Testing using no background traffic/logs.
• Testing using real traffic/logs.
• Testing using sanitized traffic/logs.
• Testing using simulated traffic/logs.
– It is not clear which approach is the most effective for testing IDS.
– Each of the four approaches has unique advantages and disadvantages.
Challenges of IDS testing
• Using background traffic in IDS testing (cont.)
– Testing using no background traffic/logs:
• This testing may be used as a reference condition.
• An IDS is set up on a host/network on which there is no activity.
• Then, computer attacks are launched on this host/network to determine whether or not the IDS can detect them.
• This technique can determine the probability of detection (hit rate) under no load, but it cannot determine the false positive rate.
• Useful for verifying that an IDS has signatures for a set of attacks and that the IDS can properly label each attack.
• Often much less costly than the other approaches.
• Drawback: tests using this technique are based on the assumption that the ability of an IDS to detect an attack is the same regardless of the background activity.
• At low levels of background activity, that assumption is probably true.
• At high levels of background activity, the assumption is often false, since IDS performance degrades at high traffic intensities.
Challenges of IDS testing
• Using background traffic in IDS testing (cont.)
– Testing using real traffic/logs:
• The attacks are injected into a stream of real background activity.
• Very effective for determining the hit rate of an IDS given a particular level of background activity.
• The background activity is real – it contains all the anomalies and subtleties – so the hit rates are realistic.
• Enables comparison of IDS hit rates at different levels of activity.
• Drawbacks:
– A repeatable test using real traffic is problematic – it is currently difficult to store and replay large amounts of real traffic at rates higher than 100 Mb/s. A possible solution is parallelization, which introduces packet sequencing problems.
– Experiments of this kind usually use a small number of victim machines, set up only to be attacked during the test. Some anomaly detection IDS can then artificially elevate their performance during the test.
– The real background activity used may contain anomalies unique to the network, which favour one IDS over another. Example: a test network may heavily use a particular protocol that is processed more deeply by a particular IDS.
– The major problem with testing using real background traffic/logs: it is very difficult to determine false positive rates correctly, because it is virtually impossible to guarantee the identification of all the attacks that naturally occur in the background activity.
– It is difficult to distribute the test publicly, since there are privacy concerns related to the use of real background activity.
– Replay may damage the timings – timestamps should also be kept.
Challenges of IDS testing
• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs:
• Sanitizing – removing sensitive information from real data.
• The goal is to overcome the privacy problems of using, analyzing, and distributing real background activity.
• Example: TCP packet headers may be cleansed, and packet payloads may be hashed.
• Real background activity is prerecorded and then all sensitive data are removed.
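A minimal Python sketch of such sanitization using Scapy; the file names and the trivial single-address mapping are illustrative assumptions (a real sanitizer would use a consistent per-address mapping):

```python
# Minimal sanitization sketch: rewrite IP addresses and hash TCP payloads.
# File names and the address-mapping scheme are illustrative assumptions.
import hashlib
from scapy.all import rdpcap, wrpcap, IP, TCP, Raw

packets = rdpcap("real_traffic.pcap")
for pkt in packets:
    if IP in pkt:
        # Replace real addresses with addresses from a reserved test range.
        pkt[IP].src = "192.0.2.1"
        pkt[IP].dst = "192.0.2.2"
        del pkt[IP].chksum          # force recalculation on write
        del pkt[IP].len
    if TCP in pkt and Raw in pkt:
        # Replace the payload with its hash so the content cannot be recovered.
        pkt[Raw].load = hashlib.sha256(bytes(pkt[Raw].load)).digest()
        del pkt[TCP].chksum

wrpcap("sanitized_traffic.pcap", packets)
```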
Challenges of IDS testing
• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs (cont.):
• Then, attack data are injected into the sanitized data stream:
– By replaying the sanitized data and running attacks concurrently, or
– By separately creating attack data and then inserting these into the sanitized data.
• Advantages:
– Test data are freely distributable.
– The test is repeatable.
• Disadvantages:
– Sanitization may end up removing much of the content of the background activity, producing a very unrealistic environment.
– The major problem: sanitization may fail, leading to the accidental release of sensitive data. It is infeasible for a human to verify the sanitization of a large volume of data.
– The injected attacks do not interact realistically with the sanitized background activity. Example: an injected buffer overflow attack may cause a web server to crash, yet the background activity still sends requests to that web server.
– When sanitizing real traffic, it may be difficult to remove the attacks that existed in the data stream – this causes problems with false positive rate testing.
– Sanitizing data may remove information needed to detect attacks.
Challenges of IDS testing
• Using background traffic in IDS testing (cont.)
– Testing using simulated traffic/logs:
• The most common approach to testing IDS.
• A testbed network with hosts and network infrastructure is created.
• Background traffic is generated on this network, as well as the attacks.
• The testbed network includes victims of interest, with background traffic generated by complex traffic generators that model actual network traffic statistics.
• It is also possible to employ simpler traffic generators that create a small number of packet types at a high rate.
• Network traffic and host audit logs can be recorded in such a testbed network for later playback.
• It is also possible to perform evaluations in real time.
• Advantages:
– Data can be distributed freely – they do not contain any private or sensitive information.
– There is a guarantee that the background activity does not contain any unknown attacks.
– IDS testing using simulated traffic is easily repeatable.
• Disadvantages:
– It is very costly and difficult to create a simulation.
– It is difficult to simulate a high-bandwidth environment because of resource constraints.
– Different traffic is needed for different types of networks – academic, e-commerce, military, etc.
Measuring IDS performances
• In order to compare different IDS, a measure of their performance is needed.
• Of all the measurable characteristics mentioned before, the true positive rate and the false positive rate are the most important for comparing IDS.
• The true positive rate and the false positive rate are combined in various composite metrics for comparing IDS.
• It is important to determine the probability of intrusion, given that an alert has been generated.
• This gives rise to a Bayesian probabilistic measure for characterising IDS performance.
• We need the total probability of an alert in order to determine the probability of intrusion given the alert.
Measuring IDS performances
• Total probability of an alert:
– The outcomes form a contingency table (rows: intrusion I / no intrusion ¬I; columns: alarm A / no alarm ¬A):
• I: TP (alarm), FN (no alarm)
• ¬I: FP (alarm), TN (no alarm)
– I and ¬I are mutually exclusive, and A = (I∩A) ∪ (¬I∩A).
– Therefore: P(A) = P(I)·P(A|I) + P(¬I)·P(A|¬I)
Measuring IDS performances
• In terms of the counts defined earlier:
– P(A|I) = TPR = TP / (TP + FN)
– P(A|¬I) = FPR = FP / (FP + TN)
– P(I) = (TP + FN) / N
– P(¬I) = 1 − P(I)
Measuring IDS performances
• A performance measure – the Bayesian detection rate:
– P(I|A) = P(I)·P(A|I) / [ P(I)·P(A|I) + P(¬I)·P(A|¬I) ]
• The greater the detection rate, the better the IDS, but…
Measuring IDS performances
• Base-rate fallacy
– Even if the false alarm rate P(A|¬I) is very low, the Bayesian detection rate P(I|A) is still low if the base rate P(I) is low.
– Example 1: if P(A|I) = 1, P(A|¬I) = 10⁻⁵, P(I) = 2×10⁻⁵, then P(I|A) ≈ 66%.
– Example 2: if P(A|I) = 1, P(A|¬I) = 10⁻⁵, P(I) = 10⁻¹, then P(I|A) ≈ 99.99%.
– Example 3: if P(A|I) = 1, P(A|¬I) = 10⁻⁹, P(I) = 2×10⁻⁵, then P(I|A) ≈ 99.99%.
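A minimal Python sketch that evaluates the Bayesian detection rate for the examples above:

```python
# Bayesian detection rate P(I|A) from the base rate and the conditional rates.
def bayesian_detection_rate(p_i, p_a_given_i, p_a_given_not_i):
    p_not_i = 1.0 - p_i
    return (p_i * p_a_given_i) / (p_i * p_a_given_i + p_not_i * p_a_given_not_i)

# Example 1: P(A|I)=1, P(A|¬I)=1e-5, P(I)=2e-5  ->  about 0.66
print(bayesian_detection_rate(2e-5, 1.0, 1e-5))
# Example 2: higher base rate P(I)=0.1          ->  about 0.9999
print(bayesian_detection_rate(1e-1, 1.0, 1e-5))
```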
Measuring IDS performances
• Conclusion:
– If the base rate is low, the false alarm rate must be extremely low.
• Example:
– The KDD Cup data set without filtering has a very high base rate – no base-rate fallacy.
– What is good for a military environment is sometimes very bad for a non-military environment.
Measuring IDS performances
• Another performance measure: ROC
– Receiver Operating Characteristic.
– Used widely in systems for the detection of signals in noise (radars, etc.).
– A curve of TPR vs. FPR.
– An ideal system has TPR = 1 and FPR = 0.
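A minimal Python sketch of how a ROC curve can be traced for a score-based detector by sweeping its alert threshold; the anomaly scores and ground-truth labels are illustrative placeholders:

```python
# Tracing a ROC curve by sweeping the alert threshold of a score-based IDS.
# The anomaly scores and ground-truth labels below are illustrative placeholders.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    0,    1,    0,    1,    0,    0,    0,    0   ]  # 1 = intrusion

positives = sum(labels)
negatives = len(labels) - positives

for threshold in sorted(set(scores), reverse=True):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    print(f"threshold={threshold:.2f}  TPR={tp / positives:.2f}  FPR={fp / negatives:.2f}")
```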
Measuring IDS performances
• Example of a ROC curve:
[Figure: ROC curves of two systems, IDS1 and IDS2, plotted as TPR (%) vs. FPR (%).]
Measuring IDS performances
• The use of ROC curves for assessing IDS has suffered harsh criticism:
– Normally, an IDS would be characterised by a single point in the FPR–TPR coordinates (however, if a parameter of the IDS is varied, a ROC curve is obtained instead of a single point).
• Example – the ROC of the IDS with the relabelling algorithm in which the DB index and centroid diameters are implemented.
– The parameter DeltaDB is varied between 0.2 and 0.45.
[Figure: ROC curve of the IDS with the relabelling algorithm.]
Test data sets
• For testing using simulated traffic/logs, a source of simulated traffic into which attacks are injected is needed.
• A widely used simulated traffic data set is the KDD Cup '99 data set.
• It originates from the IDS evaluations carried out by MIT Lincoln Laboratory in 1998 and 1999.
• In 1999, KDD organized a contest in data mining, and the data set used was the one generated by Lincoln Laboratory.
• KDD (SIGKDD) – the ACM Special Interest Group on Knowledge Discovery and Data Mining.
• The purpose of the KDD Cup '99 contest was to classify the given data in order to differentiate attack records from normal traffic records.
Test data sets
• The KDD Cup 1999 data
– Various intrusions simulated in a military air-base network environment – 9 weeks of raw tcpdump data for a LAN simulating a typical U.S. Air Force LAN.
– About 4,900,000 data instances – vectors of feature values extracted from connection records.
– The data were split into two parts:
• The raw training data (4 GB of compressed binary tcpdump data – 7 weeks of network traffic – approx. 5 million connection records).
• The test data – 2 weeks – approx. 2 million connection records.
– A connection:
• A sequence of TCP packets starting and ending at well-defined time instants, between which data flow to and from a source IP address to a target IP address under a well-defined protocol.
– Each connection is labelled either as normal or as an attack, with exactly one specified attack type.
– Each connection record consists of about 100 bytes.
Test data sets
• Four categories of simulated attacks:
– DoS – denial of service (e.g. SYN flood).
– R2L – unauthorized access from a remote machine (e.g. password guessing).
– U2R – unauthorized access to superuser or root functions (e.g. various buffer overflow attacks).
– Probing – surveillance and other probing for vulnerabilities (e.g. port scanning).
Test data sets
• The test data do not have the same probability distribution as the training data.
• They include specific attack types not present in the training data.
• This made the data mining task more realistic – the distribution of real data and the types of possible attacks are normally not known while a learning system is being trained.
Test data sets
• The training data set contains 22 attack types:
– back (dos), buffer_overflow (u2r), ftp_write (r2l), guess_passwd (r2l), imap (r2l), ipsweep (probe),
– land (dos), loadmodule (u2r), multihop (r2l), neptune (dos), nmap (probe), perl (u2r),
– phf (r2l), pod (dos), portsweep (probe), rootkit (u2r), satan (probe), smurf (dos),
– spy (r2l), teardrop (dos), warezclient (r2l), warezmaster (r2l).
• The test data set contains 14 additional attack types.
Test data sets
• 41 higher-level traffic features were defined in order to help distinguish normal connections from attacks.
• These features are divided into 3 categories:
– Basic features of individual TCP connections.
– Content features within a connection, suggested by domain knowledge.
– Traffic features computed using a 2-second time window.
• In addition, host-based traffic features were constructed:
– Connection records were sorted by destination host.
– Features were constructed using a window of 100 connections to the same host instead of a time window.
– This is useful since some probing attacks scan hosts (or ports) over a long time interval.
Test data sets
• Content features within a connection suggested by domain knowledge:
– These features look for suspicious behaviour in the data portions, such as the number of failed login attempts.
• Traffic features computed using a two-second time window (time-based traffic features):
– The "same host" features examine only the connections in the past two seconds that have the same destination host as the current connection, and calculate statistics related to protocol behaviour, service, etc.
– The "same service" features examine only the connections in the past two seconds that have the same service as the current connection.
Test data sets
• Basic features of individual TCP connections:
– duration: length (in seconds) of the connection [continuous]
– protocol_type: type of the protocol, e.g. tcp, udp, etc. [discrete]
– service: network service on the destination, e.g. http, telnet, etc. [discrete]
– src_bytes: number of data bytes from source to destination [continuous]
– dst_bytes: number of data bytes from destination to source [continuous]
– flag: normal or error status of the connection [discrete]
– land: 1 if connection is from/to the same host/port; 0 otherwise [discrete]
– wrong_fragment: number of "wrong" fragments [continuous]
– urgent: number of urgent packets [continuous]
Test data sets
• Content features:
– hot: number of "hot" indicators [continuous]
– num_failed_logins: number of failed login attempts [continuous]
– logged_in: 1 if successfully logged in; 0 otherwise [discrete]
– num_compromised: number of "compromised" conditions [continuous]
– root_shell: 1 if root shell is obtained; 0 otherwise [discrete]
– su_attempted: 1 if "su root" command attempted; 0 otherwise [discrete]
– num_root: number of "root" accesses [continuous]
– num_file_creations: number of file creation operations [continuous]
– num_shells: number of shell prompts [continuous]
– num_access_files: number of operations on access control files [continuous]
– num_outbound_cmds: number of outbound commands in an ftp session [continuous]
– is_hot_login: 1 if the login belongs to the "hot" list; 0 otherwise [discrete]
– is_guest_login: 1 if the login is a "guest" login; 0 otherwise [discrete]
Test data sets
• Time-based traffic features:
– "Same host" features:
• count: number of connections to the same host as the current connection in the past 2 seconds [continuous]
• serror_rate: % of connections that have "SYN" errors [continuous]
• rerror_rate: % of connections that have "REJ" errors [continuous]
• same_srv_rate: % of connections to the same service [continuous]
• diff_srv_rate: % of connections to different services [continuous]
– "Same service" features:
• srv_count: number of connections to the same service as the current connection in the past 2 seconds [continuous]
• srv_serror_rate: % of connections that have "SYN" errors [continuous]
• srv_rerror_rate: % of connections that have "REJ" errors [continuous]
• srv_diff_host_rate: % of connections to different hosts [continuous]
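A minimal Python sketch of how such time-based features could be derived from a list of connection records; the record structure and field names are illustrative assumptions, not the original KDD extraction code:

```python
# Illustrative sketch: deriving two "same host" time-based features (count,
# serror_rate) for one connection from preceding connection records.
# The record structure is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class Conn:
    timestamp: float   # seconds
    dst_host: str
    flag: str          # e.g. "SF" (normal), "S0" (SYN error), "REJ"

def same_host_features(current: Conn, history: list[Conn], window: float = 2.0):
    recent = [c for c in history
              if current.timestamp - window <= c.timestamp <= current.timestamp
              and c.dst_host == current.dst_host]
    count = len(recent)
    serror_rate = (sum(1 for c in recent if c.flag == "S0") / count
                   if count else 0.0)
    return count, serror_rate

history = [Conn(0.5, "10.0.0.5", "SF"), Conn(1.2, "10.0.0.5", "S0"),
           Conn(1.9, "10.0.0.7", "SF")]
print(same_host_features(Conn(2.0, "10.0.0.5", "SF"), history))  # -> (2, 0.5)
```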
Test data sets
• Selecting the right set of system features is a critical step when formulating classification tasks (in this case, the intrusion detection algorithm).
• The 41 features were obtained by means of the following process:
– Frequent sequential patterns (frequent episodes!) in the network audit data were identified.
– These patterns were used as guidelines to select and construct temporal statistical features.
Test data sets
• Weaknesses of the KDD Cup data set:
– Simulated data must be similar to real data, and there is no proof that the KDD Cup data are similar to real data.
– No anomalous packets that appear in real data.
– No failure modes.
– Synthetic attacks are not distributed realistically in the normal background data.
– The simulated TCP traffic is not diverse enough (only 9 types of TCP traffic in the KDD Cup data set).
Test data sets
• Stide (Sequence Time-Delay Embedding) data set – collections of system calls:
– Instead of the high-level features used in the KDD Cup '99 database, low-level features – sequences of system calls – are used in order to identify potential intrusions.
– In the training phase, stide builds a database of all unique, contiguous system call sequences of a predetermined fixed length occurring in the traces.
– During testing, stide compares sequences in the new traces to those in the database, and reports an anomaly measure indicating how much the new traces differ from the normal training data.
– 13,726 traces of normal data were collected at the Computer Science Department, University of New Mexico.
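A minimal Python sketch of the stide idea: build a database of fixed-length system-call sequences from normal traces, then flag unseen sequences in a new trace. The system-call names and the simple mismatch-ratio anomaly measure are illustrative simplifications:

```python
# Stide-style sketch: fixed-length sliding windows over system-call traces.
# Call names and the anomaly measure are illustrative simplifications.
def windows(trace, k):
    return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

def build_database(normal_traces, k=6):
    db = set()
    for trace in normal_traces:
        db.update(windows(trace, k))
    return db

def anomaly_measure(trace, db, k=6):
    seqs = windows(trace, k)
    misses = sum(1 for s in seqs if s not in db)
    return misses / len(seqs) if seqs else 0.0   # fraction of unseen sequences

normal = [["open", "read", "mmap", "mmap", "open", "read", "close"]]
db = build_database(normal, k=3)
test = ["open", "read", "mmap", "execve", "open", "read", "close"]
print(anomaly_measure(test, db, k=3))   # > 0 because of the unseen "execve" windows
```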
Test data sets
• PESIM 2005 dataset – Fraunhofer Institute, Berlin, Germany:
– Goal: to overcome the problems of the KDD Cup 1999 dataset.
– A combination of 5 servers in a virtual machine environment (2 Windows, 2 Linux, and 1 Solaris OS).
– HTTP, FTP, and SMTP services.
– To achieve realistic traffic characteristics, news sites were mirrored on the HTTP servers.
– File-sharing facilities were offered on the FTP servers.
– SMTP traffic was injected artificially:
• 70% mails from personal communication and mailing lists.
• 30% spam mails received by 5 individuals.
– The normal data were preprocessed in the following way:
• Random selection of 1000 TCP connections from each protocol.
• Attachments removed from the TCP traffic.
– Attacks against the simulated services were generated by penetration testing tools.
Test data sets
• PESIM 2005 dataset (cont.):
– Multiple instances of 27 different attacks were launched against the HTTP, FTP, and SMTP services.
– Most of the attacks originate from the Metasploit environment.
– Some of the attacks were taken from the Bugtraq and Packet Storm Security lists.
– The problem: this dataset is not publicly available.