

Computers and Electrical Engineering 38 (2012) 1102–1114


Definition of response metrics for an ontology-based Automated Intrusion Response Systems

Verónica Mateos a,*, Víctor A. Villagrá a, Francisco Romero b, Julio Berrocal a

a Dpto. de Ingeniería de Sistemas Telemáticos, Universidad Politécnica de Madrid, E.T.S.I. de Telecomunicación, Madrid 28040, Spain
b Telefónica Research and Development (TID), Madrid, Spain

Article info

Article history: Available online 4 July 2012

0045-7906/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.compeleceng.2012.06.001

Reviews processed and proposed for publication to Editor-in-Chief by Guest Editor Dr. Gregorio Martinez.
* Corresponding author. Tel.: +34 91 549 57 00x3024.
E-mail addresses: [email protected] (V. Mateos), [email protected] (V.A. Villagrá), [email protected] (F. Romero), [email protected] (J. Berrocal).

Abstract

The main purpose of an AIRS (Automated Intrusion Response System) is to choose and execute the optimum response when the different security-event network detection sources detect security intrusions. The inference of the most suitable response should be made according to a set of response metrics that specify different rules for selecting a specific response according to some context and input parameters and the weight associated with each of them. Furthermore, the Semantic Web Rule Language (SWRL) can be used to specify these response metrics, providing an open and extensible framework for the behavior description of an AIRS, able to be integrated with the increasing number of Semantic Web tools. The aim of this paper is to study and characterize these metrics, as well as defining a set of response metrics for an AIRS, specifying these metrics with SWRL rules and testing their execution with current Semantic Web technologies. Finally, some results are shown concerning the inferred responses and performance of this SWRL-based reasoning.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Security in networks is an area that has been widely studied and has been the focus of extensive research over the past few years. The number of security events is increasing, and they are becoming increasingly sophisticated and widespread [1]. Intrusion Detection Systems (IDSs) have evolved rapidly and there are now very mature tools based on different paradigms (statistical anomaly-based, signature-based and hybrids [2]) with a high level of reliability. IPSs (Intrusion Prevention Systems) have also been developed by combining an IDS with a basic reactive response, such as resetting a connection. IRSs (Intrusion Response Systems) leverage the concept of IPSs, providing the means to achieve specific responses according to some predefined rules. Unfortunately, the state of the art in IRSs is not as mature as with IDSs. The reaction against intrusions is slow, and these systems have difficulty detecting intrusions in real time and triggering automated responses. For these reasons, there is a need for intrusion detection and response systems to dynamically adapt so as to better detect and respond to attacks. AIRSs (Automated Intrusion Response Systems) are security systems whose main purpose is to choose and trigger automated responses against intrusions detected by IDSs, in order to mitigate them or reduce their impact [3].

In the intrusion response process, it is necessary to define several metrics that offer a means to measure and weigh up different parameters which are useful for the response selection, such as the IDS confidence, the network activity level, the reliability of intrusion reports, the importance of network components and the complexity/severity/cost/efficiency of responses.



Current AIRSs have a fixed approach to response metrics: they implement some specified metrics, as described in Section 2.1, but the metric cannot be dynamically chosen. This paper proposes a more flexible approach: to define an AIRS which is able to interpret response metrics dynamically, so the metric can be changed without any additional modification being required. For that purpose, it is necessary to select a language for metrics specification. The paper proposes the use of SWRL (Semantic Web Rule Language) [4] to formally specify the response metrics, within the scope of an ontology-based AIRS. So, the aim of this paper is to define the response metrics to be used by an AIRS, and to specify these metrics by means of SWRL rules.

The paper is organized as follows: Section 2 reviews the existing state of the art in AIRSs and their approach to the specification of response metrics. Section 3 presents a description of an ontology-based AIRS, which is able to integrate SWRL response metrics, while addressing the additional advantages of using Semantic Web technologies. Section 4 proposes several scenarios for selecting an intrusion response: a damage reduction scenario, a minimum cost scenario and a highest severity/highest efficiency scenario, and specifies their associated metrics in SWRL. Section 5 details the results of applying these defined metrics in an ontology-based AIRS prototype, with some statistics regarding the quality of the responses as well as system performance. Finally, Section 6 provides some conclusions regarding the presented work.

2. Automated Intrusion Response Systems

AIRSs are security technologies that trigger a dynamic reaction against a detected intrusion. The system infers the suitable response and triggers it automatically without needing the system administrator's participation.

In the last 5 years, several taxonomies of Automated Intrusion Response Systems have been proposed, such as those proposed by Stakhanova et al. in [3] or by Shameli-Sendi et al. in [5]. According to these taxonomies, AIRSs can be classified in different ways based on various characteristics:

- By ability to adjust: static and adaptive. In static AIRSs, the response selection mechanism remains the same during the life of the AIRS software. On the other hand, adaptability is a powerful feature that can automatically modify the chosen response according to other external factors, such as the effectiveness of previous responses or changes in the environment.
- By response selection mechanism: static mapping, dynamic mapping, and cost-sensitive mapping. In recent years, increasing interest in developing cost-sensitive models for response selection has been seen. The primary goal of such models is to ensure an adequate response without sacrificing the normal functionality of the system. That is, the system must take into account the complexity and cost of the reaction, besides the impact of the intrusion.
- By time of response: proactive and reactive. Proactivity is the ability of the AIRS to react against an intrusion before the intrusion takes place. A reactive AIRS infers and activates the reaction when the intrusion is detected.
- By response cost model: static cost model, static evaluated cost model, and dynamic evaluated cost model. This refers to the evaluation mechanism that the AIRS uses to obtain the response cost.

Taking into account this previous work, to achieve the optimal response in the shortest time, it is advisable that an AIRS be automatic, adaptive (so that it understands the context and adapts the set of responses), proactive and that it use cost-sensitive mapping. Also, the system should include a mechanism to evaluate the cost of the response. But there is another feature, not present in these taxonomies, which is very important in a heterogeneous intrusion detection environment: semantic coherence. This feature is the ability of the system to understand the syntax and the semantics of the intrusion report, independently of the intrusion source. The Intrusion Response System would understand intrusion notifications with different syntaxes from different IDSs, and would be able to determine whether two notifications refer to the same intrusion or to different ones.

In recent years, several AIRSs have been proposed. Table 1 maps the functionalities of these systems according to the previously mentioned features.

Thus, except for ADEPTS and Stakhanova's IRS, there are no adaptive, proactive and cost-sensitive AIRSs. Moreover, there are no AIRSs providing mechanisms to achieve semantic coherence between two different signs of the same incident source (ADEPTS relies on specific formats and syntaxes of intrusion notifications).

Table 1. Functionalities of the existing AIRSs.

System                 Adaptive   Proactive   Cost-sensitive   Evaluated cost model   Semantically coherent
AAIRS [6]              YES        NO          NO               Static                 NO
ADEPTS [7,8]           YES        YES         YES              -                      NO
CSM [9]                NO         YES         NO               -                      NO
EMERALD [10]           NO         NO          NO               -                      NO
Stakhanova's IRS [11]  YES        YES         YES              Static                 NO
FAIR [12]              NO         NO          YES              Static                 NO
IDAM&IRS [13]          NO         NO          YES              Static                 NO
Network IRS [14]       NO         NO          YES              Dynamic                NO


This paper proposes the use of ontologies, formal behavior specification languages and reasoning mechanisms as a working technology to deal with the semantic coherence problem, as proposed in Section 3.

2.1. Response metrics in AIRSs

Existing AIRSs use several fixed response metrics to choose the action that the system must execute. This subsection reviews the most relevant metrics used by them.

- AAIRS [15] infers the optimum response as the result of applying three metrics:
  - IDS confidence metric, depending on the false alarm rate of the IDS.
  - Attack identification metric, based on the attack type that the AIRS has to react to. This metric is divided into: time metric, session identifier metric and attack type metric.
  - Response success metric, depending on the efficiency of previous executions of the response.
- ADEPTS [7] uses an optimum response metric which depends on response effectiveness and the damage that the response would cause to the system users. The response that maximizes the metric equation is inferred. The AIRS chooses the response with the greatest effectiveness and the lowest negativity. This metric uses a cost-sensitive approach and does not take into account either intrusion damage or response cost.
- CSM [9] calculates a suspicion level for each user and uses it to select the optimum reaction. This parameter, called LOS (Level Of Service), indicates the current belief that the user is performing an intrusive activity. Whenever a user's LOS exceeds a threshold, it triggers a reaction. CSM has eight sets of responses pre-programmed by the system administrator. Based on the LOS, CSM selects one of the eight sets of responses; for that, CSM uses the existing relation between LOS and set of responses.
- EMERALD [10] uses two metrics to choose the suitable reaction:
  - Threshold metric, which measures the certainty that the detected intrusion is a real intrusion. The higher the result of the metric, the more severe the applied reaction is.
  - Severity metric, which measures the negative effect of the response on the network's normal operation.
  If the certainty that the detected intrusion is a real intrusion is high (the result of the first metric), the system will choose the reaction whose severity rate is high.
- Stakhanova's IRS [11] is based on different response metrics:
  - Damage reduction metric, which compares the cost of damage caused by the intrusion against the cost of deploying a response. The aim of this metric is to choose the most suitable set of responses.
  - Metric of maximum benefit at the lowest risk, based on the success of previously triggered responses and the response severity (the negative impact the response would have on legitimate users). This metric is used to select the optimum response from the previous set.
- FAIR [12] is a cost-sensitive AIRS that assesses the static and dynamic contexts of the attack. To select the optimum response, the authors propose some metrics which use the following parameters: counter-effects, stopping power, transparency, efficiency and confidence level.
- IDAM&IRS [13] calculates an associated static risk threshold for each response. This parameter is based on the positive and negative effects of the response on the system. At intrusion time, the system obtains a current risk index of the network. If the risk index of the network is greater than the response static threshold, the response is allowed to run.
- Network IRS [14] uses a metric to select the optimum intrusion response only according to the response impact. The authors claim that when choosing the optimum response to mitigate a detected intrusion, the most important parameter to consider is the damage that the inferred response could cause to the system. So a mechanism that compares intrusion severity with the effects of a possible response action is required. The authors suggest an algorithm that evaluates the response impact on the network resources according to the network topology and the dependencies between components. The AIRS applies the response impact metric to all response actions and selects the action with the lowest negative effect.

There has been additional research on intrusion response metrics:

- Zonghua Zhang, Xiaodong Lin and Pin-Han Ho evaluate the system security based on the intrusion impact on the attacked system and the effect of the responses for the organization [16]. The purpose of the research is to weigh up the intrusion's impact and the total cost of responses in order to carry out a rational defense, i.e. the system will not execute responses that involve a high cost for the organization and whose benefits are low. This work suggests the use of Markov decision processes (POMDP, Partially Observable Markov Decision Process) and Bayes' theorem to weigh up the intrusion impact according to the reports generated by IDSs and to analyze the response costs and benefits, in order to determine and choose the most suitable response. The aim is to minimize the total response cost, which is based on intrusion impact, response failure cost, maintenance cost and the cost due to the possibility of a false alarm.
- Strasburg et al. [17] propose a set of evaluation metrics for the practical assessment of the cost and benefit of a response based on the following parameters:
  - Parameters associated with intrusion damage, such as the likelihood and the severity of an intrusion.


  - Parameters describing the response cost, such as the cost of developing and deploying the response (OC), the impact of the response on the system (RSI), or the response goodness (RG), which depends on the number of possible intrusions that the response can potentially address and the amount of resources that the response can protect. The total cost is the result of this equation: RC = OC + RSI - RG (a small numeric sketch is given after this list).
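As an aside (not part of the original paper), the cost combination above can be read as a one-line function; the numeric values below are hypothetical and only serve to show the shape of the calculation.

# Illustrative sketch of the RC = OC + RSI - RG combination described above;
# the weights are made up and assumed to live on a common arbitrary scale.
def response_cost(operational_cost: float,
                  response_system_impact: float,
                  response_goodness: float) -> float:
    """Total response cost: operational cost plus system impact minus goodness."""
    return operational_cost + response_system_impact - response_goodness

# Example: OC = 2.0, RSI = 4.0, RG = 3.5 gives RC = 2.5.
print(response_cost(2.0, 4.0, 3.5))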

All the response metrics analyzed above allow the AIRS to choose the reaction that the system may trigger, but these metrics are fixed and cannot be dynamically chosen, i.e. the system always uses the same metric, regardless of the intrusion context or the state of the system.

3. Ontologies-based Automated Intrusion Response System

The specification of response metrics in a flexible and dynamic way requires the use of a specific language able to express these metrics. This paper proposes the use of SWRL as a formal language to express them. The main advantage of this proposal is the alignment with the current Semantic Web technologies that are also using this language and the use of generic tools and methodologies for the IRS development. This alignment also requires the use of ontologies as the main information and knowledge model of the AIRS architecture.

3.1. Ontologies and rule languages

Ontologies are used within the scope of the Semantic Web to formally represent a set of concepts, their meaning and the interrelation between them [18]. One of the main advantages of using ontologies is the formalization of the information semantics. This is important when dealing with heterogeneous information sources which can represent the same resource with different formats and syntax. The use of ontologies provides different ways to extract semantics from all the information definitions and to map these definitions to common ontology classes [19]. Within the scope of this work, using ontologies helps to support the inclusion of different heterogeneous IDSs, with different intrusion formats and syntaxes. In this way, the AIRS will be able to understand heterogeneous alerts and to know whether these alerts refer to the same intrusion or not. Nowadays there are some data format standards for alert representation whose aim is to solve this problem, such as IDMEF (Intrusion Detection Message Exchange Format) [20]. This format defines a data model in the Extensible Markup Language (XML) which allows representing, exchanging and sharing information about intrusion detection. But IDMEF only provides an exchange format, without any additional knowledge representation that can be useful for the AIRS in order to correlate this information with additional information, such as network context and rules.

The use of ontologies provides a simple way to tackle the previous problem, because ontologies formalize the semantic aspects of information within their own definition of the concepts, and it is possible to find some method to automatically match concepts defined in heterogeneous information models to the concepts defined in the ontology, as defined in [19], or with ontology mapping technologies, such as D2RQ [21], which allows mapping between relational databases and OWL/RDFS ontologies. In the case of the automated intrusion response process, every concept included in the intrusion report generated by each IDS in the system is mapped to the equivalent concept defined in the ontology. Thus, semantically equivalent but syntactically different concepts would map to the same concept in the ontology. Another advantage of using ontologies for information modeling is the possibility of defining the description and the behavior of the objects in the same integrated framework, in a consistent and coherent way.
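To make the mapping step more concrete, the following sketch shows how a heterogeneous alert, once parsed into a simple dictionary, could be turned into ontology individuals. It is only an illustration under assumed names: the ontology IRI is hypothetical, the property names merely mirror those mentioned later in the paper, and the authors' prototype actually relies on the OWL and Jena APIs rather than Python's rdflib.

# Minimal sketch (not the authors' implementation): mapping a normalized IDS
# alert onto individuals of a hypothetical intrusion-response ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

IRO = Namespace("http://example.org/intrusion-response#")  # placeholder ontology IRI

def map_alert_to_ontology(graph: Graph, alert: dict) -> None:
    """Create a FormattedIntrusion individual from a parsed alert dictionary."""
    intrusion = IRO["intrusion_%s" % alert["id"]]
    graph.add((intrusion, RDF.type, IRO.FormattedIntrusion))
    graph.add((intrusion, IRO.intrusionType, Literal(alert["type"])))
    graph.add((intrusion, IRO.targetOfIntrusionID, Literal(alert["target_ip"])))
    graph.add((intrusion, IRO.intrusionImpact,
               Literal(alert["impact"], datatype=XSD.float)))

g = Graph()
g.bind("iro", IRO)
# Two syntactically different alerts (e.g. Snort vs. syslog) end up as the same
# kind of ontology individual once their fields are normalized to this dict form.
map_alert_to_ontology(g, {"id": 1, "type": "portscan", "target_ip": "10.0.0.5", "impact": 3.0})
print(g.serialize(format="turtle"))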

The main ontology language used in the Semantic Web to formally describe information definitions is OWL (Ontology Web Language) [22]. OWL is a knowledge definition language, which structures the information into classes and properties (nominal or relations between objects), with hierarchies, and range and domain restrictions. However, the ability of OWL to define behavior related to the defined information is limited. So it is necessary to use additional rule languages, such as SWRL. SWRL is the most widely used rule language in the Semantic Web, and was defined to overcome the limitation of OWL in defining logical restrictions. In 2009 the W3C announced the new version of OWL, OWL 2, which introduces some enhancements in the knowledge representation aspect but still lacks the First Order Logic (FOL) needed for the definition of security metrics.

3.1.1. SWRL

SWRL is a rule definition language that extends the set of OWL axioms with FOL [4]. SWRL includes a new type of axiom, called rule or Horn clause, of the form if...then... A rule axiom consists of an antecedent (body) and a consequent (head), each of which consists of the conjunction of several atoms. Informally, a rule may be read as meaning that if the antecedent holds, then the consequent must also hold. Besides, SWRL allows including built-ins in the rules. SWRL built-ins are additional functions which make it possible to deal with XML Schema data types: mathematical operations, comparisons, logical negations, built-ins for strings, built-ins for date, time and duration, URI operations and built-ins for lists of elements.

The use of ontologies and rule languages in the area of AIRSs solves the problem of semantic coherence. Moreover, due to their great expressiveness and flexibility, these technologies enable the AIRS to meet other requirements: adaptability, proactivity and response cost sensitivity.


Fig. 1. Architecture of the ontologies-based AIRS developed.


3.2. Ontologies-based AIRS architecture

A proposed architecture [23] of the ontology-based AIRS is shown in Fig. 1. The objective of this architecture is to choose the optimum response from a set of available responses in the organization. The AIRS receives a set of inputs, including intrusion reports, context information, previous response success indicators, etc. Then, the Reasoner infers the best reaction given those inputs, using the Policies that specify the security metrics and the Intrusion Response Ontology. These policies can specify different metrics that will be chosen depending on different parameters, such as context and intrusion type. Finally, the Response Executor carries out the inferred response. Once the response has been executed, the Intrusion Response Evaluation system will calculate the success of the inferred response by using a quantitative approach based on statistics and probability methods and algorithms.

The ontologies-based AIRS has three main components:

- Reasoner: the main component of the AIRS, in charge of inferring the optimum response for a given intrusion. The reasoner, according to a set of policies and the ontology instances representing the information about domain concepts, executes the inference process for determining the optimum response. Nowadays, there are several semantic reasoners that can be used as the core of the AIRS, such as Bossam (http://www.bossam.wordpress.com), Pellet (http://www.clarkparsia.com/pellet), KAON2 (http://www.kaon2.semanticweb.org) or RacerPro (http://www.racer-systems.com/index.phtml).
- Intrusion Response Ontology: this ontology formally defines all the information needed in the intrusion response process carried out by an AIRS. The ontology defines the most important concepts within our specific domain, such as intrusion, responses, network context, IDSs, response success, and system components, as well as the relationships among them. This is represented in Fig. 2, and it consists of ten classes that are equivalent to each of the independent entities of this specific domain. The arrows represent the relationships among classes. The methodology used to define and build the ontology is the one proposed in [24], known as 'Methontology'.
- SWRL rules: they specify the behavior of the Reasoner module. Starting from these defined rules and the previous knowledge included in the ontology, the Reasoner infers the most suitable response to a specific intrusion. This paper suggests using SWRL as the rule definition language, since SWRL is integrated with OWL, increasing its expressiveness and enabling the definition of complex behavior for an AIRS. These rules are defined by the system administrator and specify the response metrics, as detailed in the following section.

4. Metrics for Automated Intrusion Response Systems

The main purpose of an AIRS is to infer the most suitable response. The inference process of the ontologies-based AIRS is divided into three phases (Fig. 3):

- Collecting information about network context, system context, and intrusions, and then mapping this information to the equivalent concepts in the ontology. The Alert Receiver and Context Receiver are responsible for receiving the intrusion alerts coming from different IDSs with different formats and syntaxes, as well as the network and system context information collected by the Network Context and System Context modules, and for mapping the concepts included in these alerts to the equivalent concepts defined in the ontology. This middleware layer translates from the IDS-specific format (such as syslog or the alert full output format of the Snort IDS) to the OWL language by means of the OWL and Jena APIs.



Fig. 2. Intrusion Response Ontology.

[Fig. 3 depicts the decision flow of the inference process: the AIRS maps the context information and intrusion alert, reuses the last optimum response when a successful result exists for a similar intrusion, otherwise infers the recommended responses, applies the damage reduction constraint (discarding responses whose complexity exceeds the maximum AIRS complexity or whose impact exceeds Intru.Impact * IDSconfidence), and finally selects the minimum cost metric or the highest severity metric according to the level of importance (low/medium/high) of the compromised resource.]

Fig. 3. Application of the response metrics to the response inference process.


- Inferring a set of recommended responses according to the previous information. First of all, the system checks whether it has received any intrusion alerts before, or whether this is the first time that the AIRS executes the inference process. This step is required due to the syntax of OWL and SWRL. If this is the first intrusion alert received by the AIRS, there are no previous results, and the system will infer the set of recommended responses by applying the SWRL policies. Otherwise, the system checks whether there are results for the same or a similar intrusion (attacks are never identical). The system considers that two intrusions are similar (almost identical) if the intrusion type, the resource compromised by the intrusion (type and IP) and the changes in the context at intrusion time are the same; a small sketch of this test is given after this list. When the AIRS finds a previous result for a similar intrusion and the executed response was successful, the system will select the same reaction. Otherwise, the AIRS will infer the set of recommended responses by applying the SWRL policies, taking into account the intrusion and context information.
- Inferring the optimum response, according to the importance of the compromised resource. This phase is explained in detail below.
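The following fragment is only a sketch of that similarity test, using plain Python data in place of the ontology individuals; the field names are illustrative and do not claim to be the prototype's actual property names.

from dataclasses import dataclass

@dataclass(frozen=True)
class Intrusion:
    intrusion_type: str          # e.g. "portscan", "udp_flood"
    target_type: str             # type of the compromised resource
    target_ip: str               # IP address of the compromised resource
    context_changes: frozenset   # observed context changes at intrusion time

def is_similar(a: Intrusion, b: Intrusion) -> bool:
    """Two intrusions are 'almost identical' when type, target and context changes match."""
    return (a.intrusion_type == b.intrusion_type
            and a.target_type == b.target_type
            and a.target_ip == b.target_ip
            and a.context_changes == b.context_changes)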


This paper focuses on the third step. To choose the responses, three response metrics are proposed, which are defined in SWRL and tested in an existing inference engine, proving that they can be used as the reasoning rules of an AIRS. The definitions of these metrics have been proposed taking into account the related work described in Section 2.

4.1. Proposed response metrics

After analyzing the parameters used in the metrics mentioned in the state of the art, the following parameters have been identified as the most relevant for choosing the optimum response: the intrusion impact, the severity and the cost of the response, the IDS confidence, the level of importance of the affected resources and the response success. Therefore, these parameters have been included in the definition of the response metrics. Other parameters, like attacker type or IDS manufacturer, are not relevant for the proposed metrics and do not influence the chosen response. According to the relevance that the compromised resource has for the organization, the AIRS assigns more or less weight to each one of those parameters. For example, if the affected resource is a user workstation, response cost may take priority over its success rate; however, if the attacked resource is the main database server, an AIRS may give more importance to response severity and response effectiveness than to the high cost of executing the response.

This paper therefore proposes three different response metrics. Each metric assigns a weight to the parameters in a different way. Depending on the level of importance of the resource, the system applies one metric or another; this is shown in the decision diagram of the response inference process (Fig. 3). The following subsections explain the proposed metrics.
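Before going into each metric, the selection step itself can be summed up in a few lines. This is only a sketch under the assumption that the level of importance is expressed as a plain string; it is not the prototype's code, and the damage reduction metric of Section 4.1.1 is always applied first in every case.

def select_metric(component_importance: str) -> str:
    """Pick the response metric according to the importance of the compromised resource."""
    if component_importance == "low":
        return "minimum_cost"        # Section 4.1.2: cheapest response wins
    if component_importance == "high":
        return "highest_severity"    # Section 4.1.3: most severe and most efficient response wins
    return "damage_reduction_only"   # otherwise, only the always-applied filter of Section 4.1.1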

4.1.1. Damage reduction metric

The purpose of this metric is to strike a balance between the cost of the damage caused by an "unattended" attack (the intrusion impact) and the cost of deploying the response (the response impact). This cost refers to the negative effect that executing the response causes on the resources of the organization, e.g. loss of availability of several resources.

The AIRS uses this metric regardless of the importance of the component; it must always apply this metric. The metric satisfies an equation of the form:

Impact_{intrus} \cdot Confidence_{IDS} \geq Impact_{respon}    (1)

This metric is equivalent to the metric defined by Stakhanova et al. [11]. The application of the metric infers the responses, among those included in the set of possible responses, whose impact is lower than or equal to the product of the intrusion impact and the IDS confidence. The AIRS discards the responses whose impact is greater than this product.

This metric depends on three parameters, which, in turn, are subject to measurement: the intrusion impact, the IDS confidence and the response impact.

The AIRS does not calculate or measure these parameters at inference time. They correspond to properties of the classes of the defined ontology, and their values are set before the AIRS applies the metric; these parameters are inputs to the response system and must be defined previously:

- Intrusion impact (Impact_intrus):
  - Property intrusionImpact of the ontology class FormattedIntrusion.
  - It should be set by the alert receiver which generated the formatted intrusion report that the AIRS receives.
  - It depends on the importance of the affected component, the severity of the detected intrusion and the exposure factor, according to the impact metric equations defined in [16,25].
- IDS confidence (Confidence_IDS):
  - Property IDSconfidence of the class IntrusionDetectionSystem.
  - It is calculated by the AIRS and is based on the total number of real intrusions and the number of false positives/negatives generated.
  - This parameter measures the accuracy or error associated with the IDS that generates the intrusion report. AAIRS [15] defines a confidence metric to measure the IDS confidence.
- Response impact (Impact_respon):
  - Property responseImpact of the ontology class Response.
  - A possible equation to calculate this parameter is equivalent to the impact metric equation. Thus, it depends on the number of components affected by the deployment of the response and the level of degradation of every resource.
  - This parameter represents the cost that the execution of the specific response incurs.
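A minimal sketch of how this filter could look in code, assuming candidate responses are plain objects whose attributes mirror the ontology properties above (the example values are invented):

from dataclasses import dataclass

@dataclass
class CandidateResponse:
    name: str
    response_impact: float   # mirrors the responseImpact property

def damage_reduction_filter(intrusion_impact: float,
                            ids_confidence: float,
                            candidates: list) -> list:
    """Keep responses whose impact does not exceed intrusionImpact * IDSconfidence (Eq. (1))."""
    threshold = intrusion_impact * ids_confidence
    return [r for r in candidates if r.response_impact <= threshold]

# Example: with intrusionImpact = 6 and IDSconfidence = 0.8, only responses with
# impact <= 4.8 are kept as recommended responses.
kept = damage_reduction_filter(6.0, 0.8, [CandidateResponse("block_source_ip", 2.0),
                                          CandidateResponse("shutdown_web_server", 7.5)])
print([r.name for r in kept])   # ['block_source_ip']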

4.1.2. Minimum cost metric

The AIRS applies the minimum cost metric if the affected component is not very relevant in the organization. In other cases, the response system should not use this metric.

The purpose of this metric is to minimize the total response cost. When the AIRS applies this metric, it will trigger the execution of the lowest cost response. This metric does not depend on the response success. The proposed minimum cost metric satisfies an equation of the form:


Cost_{T\,response} = Impact_{respon} + Cost_{d\,response}    (2)

The objective is to minimize the previous equation. The total response cost includes the response impact and the response deployment cost. The first, Impact_respon, represents the cost that executing the response entails for the organization, in terms of the damage that the response action causes to the resources of the organization. The second, Cost_d response, represents the cost that deploying the response entails for the organization, in terms of the required resources (number of routers needed, number of backups, etc.).

On the other hand, the lower cost responses are usually the lower complexity responses; thus, if several responses have the same cost, the AIRS will select the lowest complexity response.

As in the damage reduction metric, the AIRS is not in charge of measuring the metric parameters needed: the response impact and the response deployment cost. Both parameters are set by the system administrator. Cost_{T response} is equivalent to the property responseCost of the ontology class Response.

Nevertheless, the application of this metric does not exclude the application of the damage reduction metric. The response system must always apply the proposed damage reduction metric.
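As a sketch, and reusing the shape of the previous example, the selection could look as follows; it assumes the damage reduction filter has already been applied, the tie-break on complexity follows the remark above, and all attribute names are illustrative.

from dataclasses import dataclass

@dataclass
class CandidateResponse:
    name: str
    response_impact: float    # Impact_respon
    deployment_cost: float    # Cost_d response
    complexity: int

    @property
    def total_cost(self) -> float:
        # Eq. (2): total response cost = response impact + deployment cost.
        return self.response_impact + self.deployment_cost

def minimum_cost_metric(candidates: list) -> "CandidateResponse":
    """Return the lowest total cost response; equal costs are resolved by lower complexity."""
    return min(candidates, key=lambda r: (r.total_cost, r.complexity))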

4.1.3. Highest severity and highest efficiency metric

If the compromised resource is very relevant or critical to the organization's operation, the response system uses the highest severity and highest efficiency metric, whose purpose is to maximize the response severity and success. The result of this metric is the highest severity response.

This metric depends on the results of previous executions of the specific response against a similar intrusion (response efficiency), the severity associated with the intrusion and the severity of the response itself.

As in the previous metric, the application of this metric does not exclude the application of the damage reduction metric. First, the AIRS applies the damage reduction metric and then the system applies the highest severity metric to the results of the previous inference.

The proposed metric satisfies the following equations:

RE \cdot SeverityAbs_{response} \geq Severity_{intrusion}    (3)

\max \{ RE \cdot SeverityAbs_{response} \}    (4)

The purpose is to satisfy the first condition and to maximize the second equation. The metric depends on three parameters:

- Intrusion severity (Severity_intrusion): the IDS sets the intrusion type. According to this parameter and some predefined equivalences between intrusion type and severity, the AIRS obtains the intrusion severity. It is represented by the property intrusionSeverity of the ontology class FormattedIntrusion.
- Response absolute severity (SeverityAbs_response): the system administrator previously sets the response absolute severity. This parameter matches the responseAbsSeverity property of the ontology class Response.
- Response efficiency (RE): the AIRS calculates this parameter, which measures the success of a response against a particular intrusion. It corresponds to the property responseEfficiency. The following equation is proposed to obtain the value of this parameter:

RE = \begin{cases} 1, & j = 1 \\ \frac{1}{j-1} \sum_{i=1}^{j-1} RPE_i, & j \geq 2 \end{cases}    (5)

where j is the number of times that the evaluated response has been inferred as the optimum response against a particular intrusion and RPE_i (Response Partial Efficiency) is the response partial efficiency associated with the i-th execution of the particular response.

On the other hand, to calculate the response partial efficiency after each execution of the response, statistics and probability methods are used to analyze and compare all the data captured from context information (network and system context).
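In code, the RE value of Eq. (5) is simply a default of 1 for a response that has never been executed against this intrusion and a running average of the partial efficiencies afterwards; this is an illustrative sketch with invented example values.

def response_efficiency(partial_efficiencies: list) -> float:
    """RE = 1 if j = 1 (no previous executions), otherwise the average of RPE_1 .. RPE_{j-1}."""
    if not partial_efficiencies:
        return 1.0
    return sum(partial_efficiencies) / len(partial_efficiencies)

# Example: after three executions with partial efficiencies 0.9, 0.6 and 0.75,
# the next inference uses RE = 0.75.
print(response_efficiency([0.9, 0.6, 0.75]))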

4.2. Usage of SWRL for metrics definition

Section 3 described the advantages of using Semantic Web technologies, such as ontologies, behavior specification languages and reasoning mechanisms, as the main core of the AIRS. The semantic reasoner is the main component of the response system, and it infers the optimum intrusion response. To do that, the reasoner executes a set of SWRL rules defined by the system administrator that model the chosen metrics. This subsection details the specification of the three previously proposed metrics by means of SWRL rules.
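For readers who want to experiment with this kind of rule outside the prototype, the following fragment is a purely hypothetical illustration of the same load-rule-reason cycle using the owlready2 Python library and Pellet; the ontology file name, IRI, property names and the toy rule are assumptions, and the authors' prototype drives Pellet differently.

# Hypothetical illustration only; requires owlready2, a local OWL file and a Java
# runtime for Pellet. All names below are placeholders, not the paper's artifacts.
from owlready2 import get_ontology, Imp, sync_reasoner_pellet

onto = get_ontology("file://intrusion_response.owl").load()

with onto:
    rule = Imp()
    # A toy SWRL rule: flag low-impact responses as recommended.
    rule.set_as_rule("Response(?r), responseImpact(?r, ?i), lessThanOrEqual(?i, 3.0) "
                     "-> recommendedResponse(?r, true)")

# Running Pellet turns the SWRL consequences into ontology assertions.
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)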


4.2.1. Damage reduction metric

The damage reduction metric can be specified with SWRL in this way:

1 realintrusionImpact(?intrusion, ?intimpact) ^
2 responseImpact(?respon1, ?resimpact1) ^
3 swrlb:lessThanOrEqual(?resimpact1, ?intimpact)

This metric does not depend on the relevance of the different components, and the realintrusionImpact parameter is the result of multiplying the Impact_intrus value by the Confidence_IDS value.

4.2.2. Minimum cost metric

The minimum cost metric has the following SWRL expression:

1 targetOfIntrusionID(?intrusion, ?targetID) ^
2 componentID(?component, ?componentid) ^
3 swrlb:equal(?targetID, ?componentid) ^
4 componentLevelOfImportance(?component, ?cloi) ^
5 swrlb:equal(?cloi, "low") ^
6 responseCost(?respon1, ?respcost1) ^
7 responseCost(?respon2, ?respcost2) ^
8 swrlb:lessThanOrEqual(?respcost1, ?respcost2)
9 -> optimumResponse(?intrusion, ?respon1)

To apply this metric, the relevance of the attacked component must be low, i.e. the value of the componentLevelOfImportance property of the ontology class SystemComponent must be "low". In the first five lines, that condition is checked. Lines 6–9 specify the metric purpose: to minimize the inferred response cost, as explained in Section 4.1.2. The AIRS infers the lowest cost response as the optimum response, taking the set of possible responses against the detected intrusion as input.

4.2.3. Highest severity and highest efficiency metric

The highest severity and highest efficiency metric is specified with SWRL with the following expression:

1 targetOfIntrusionID(?intrusion, ?targetID) ^
2 componentID(?component, ?componentid) ^
3 swrlb:equal(?targetID, ?componentid) ^
4 componentLevelOfImportance(?component, ?cloi) ^
5 swrlb:equal(?cloi, "high") ^
6 intrusionSeverity(?intrusion, ?intseverity) ^
7 responseAbsSeverity(?respon1, ?respabsseverity1) ^
8 responseAbsSeverity(?respon2, ?respabsseverity2) ^
9 swrlb:greaterThanOrEqual(?respabsseverity1, ?intseverity) ^
10 swrlb:greaterThanOrEqual(?respabsseverity2, ?intseverity) ^
11 hasResponseReport(?respon1, ?presreport1) ^
12 reportIntrusionType(?presreport1, ?rit1) ^
13 orientedToIntrusionType(?respon1, ?oit1) ^
14 swrlb:equal(?rit1, ?oit1) ^
15 hasResponseReport(?respon2, ?presreport2) ^
16 reportIntrusionType(?presreport2, ?rit2) ^
17 orientedToIntrusionType(?respon2, ?oit2) ^
18 swrlb:equal(?rit2, ?oit2) ^
19 responseRelSeverity(?presreport1, ?rrsev1) ^
20 responseRelSeverity(?presreport2, ?rrsev2) ^
21 swrlb:greaterThan(?rrsev1, ?rrsev2)
22 -> optimumResponse(?intrusion, ?respon1)

To apply this metric, the attacked component must be relevant for the organization, i.e. the value of the componentLevelOfImportance property must be "high". In the first five lines, that condition is checked. The next lines, from 6 to 10, specify the condition imposed by the metric: response severity must be higher than intrusion severity, as explained in Section 4.1.3. Lines 11–22 are intended to maximize the metric equation of the highest severity and highest efficiency metric.


5. Experiments and results

To evaluate the proposed ontology-based AIRS and the feasibility of using the SWRL language to specify security metrics, we have carried out different sets of experiments. The experiments measure the performance of the AIRS, in terms of inference time and success of the inferred response, in different scenarios.

The ad hoc network used in the experiments is shown in Fig. 4. The scenario emulates the network topology of a business institution, with different subnets and types of equipment. As shown in the figure, a large part of the scenario has been created and implemented using a virtualization tool called VNX (Virtual Network over X) [26]. The network consists of several subnets interconnected by a router: a DMZ (Demilitarized Zone) subnet with the external servers, such as a web server; a subnet with the internal servers most important to the organization, which includes the database and key servers; and two subnets of company employees. There are also three IDSs, whose purpose is to detect intrusions and send the corresponding report to the AIRS using a syslog server listening on UDP port 514, and the proposed ontologies-based AIRS.

For testing the AIRS, we have used three different intrusions: a port scanning attack using Nmap; a UDP flood attack (Denial of Service, DoS) by means of the UDPFlood tool (a UDP packet sender which sends out UDP packets to the specified IP and port at a controllable rate); and a web application attack (SQL injection and privilege escalation).

5.1. Experiment 1: System performance and overhead according to the number of concurrent intrusions

This experiment has three goals:

- Measuring the reaction time of the response inference process, from the instant that the IDS detects an intrusion and sends the intrusion report to the AIRS, until the time that the ontology-based AIRS triggers the optimum response. This time is measured by varying the number of concurrent alerts of the same or different type.
- Checking how multiple concurrent reports about the same incident are identified by the AIRS as the same attack. In that situation, the AIRS runs the response inference process once.
- Measuring the number of times the AIRS runs the response inference process when it receives several reports about different intrusions at the same time.

Fig. 4. Ontologies-based AIRS validation scenario.


Fig. 5 shows the time in milliseconds when the number of concurrent alerts is varied in two different ways: concurrent alerts of the same attack, and concurrent alerts about different intrusions. The graphs show the time spent in two phases, the ontology loading phase and the inference and reasoning phase. The total time is the sum of both. The time for phase 1 (ontology load time) does not depend on the number of reports received by the AIRS in any situation (same intrusion or different intrusions), as shown in Fig. 5. The time for phase 2 (execution time of the inference process) depends on the number of alerts received, but only if the intrusions detected are different. That is, if the intrusion is the same, the IDS sends the AIRS as many intrusion reports as the number of concurrent attacks detected; then, if all the reports received are about the same intrusion, the AIRS executes the inference process only once, and discards the other reports.

Moreover, if the IDS detects different attacks, it sends as many intrusion reports to the AIRS as the number of concurrent attacks detected, but in that situation the AIRS executes the inference process as many times as the number of reports received. That is why, in the second graph, the time is a linear function of the number of reports received. These executions are not run in parallel, so the time is very high if more than three different intrusions are detected at the same time. The response inference time when the intrusions are different is the average time.

The number of intrusions detected by the IDS and the number of executions of the inference process, in both scenarios, are shown in Table 2. It can be seen that the number of times the AIRS runs the response inference process depends directly on the number of concurrent alerts. If the intrusion reports received are about the same intrusion, the system executes the process only once.

5.2. Experiment 2: System success rate according to the compromised resource

Another relevant parameter to be measured is the success rate of the system, i.e. the success of the execution of the response inference process. The success of this process depends on the metric used by the AIRS: the damage reduction metric, the minimum cost metric or the highest severity metric. The system uses one metric or another according to the resource compromised by the intrusion, as explained before. For that reason, the goal of this experiment is to measure the system success rate according to the target of the intrusion. For this experiment the three attacks listed before are used, modifying the target of the intrusion according to its relevance to the organization; first, the database server, which is of the highest importance, is attacked, then the DMZ server, and finally a user host, which is of the lowest importance.

Fig. 6 shows the success rate in percent when the AIRS executes the inference process for each of the previous scenarios for the first time. It can be seen that the success rate is always more than 55% and the highest value is reached when the compromised resource is the database server. The success rate depends on the selected metric, and the metric used in this situation is the highest severity metric, i.e. the AIRS infers the response with the greatest severity. This is the most effective response, except for port scanning. In that case, the system discards the most severe response because, through the application of the damage reduction metric, the intrusion impact is lower than the response impact.

Fig. 7 shows the success rate for different numbers of executions of the inference response process. The metrics used for resources whose relevance is high or medium depend on the response severity parameter, whose value changes according to the response efficiency (RE), as can be seen in the equation of the highest severity metric. The value of RE is increased or reduced after every execution of the inference process. For this reason, the global success rate in graphs 1 and 2 varies with the number of executions. After the fifth run, the value of RE becomes stable, and from then on the system infers the same response if the response toolkit is not modified. Moreover, the metric used for low relevance components does not depend on RE; this explains why the success rate in the third graph always has the same value.

Fig. 5. Time to deploy response in AIRS with varying number of concurrent alerts.


Table 2. Variation of the number of intrusion reports received and analyzed for different numbers of concurrent alerts.

Num. of concurrent alerts | Reports created by IDS (same intrusion) | Inference executions (same intrusion) | Reports created by IDS (different intrusions) | Inference executions (different intrusions)
1  | 1  | 1 | 1  | 1
2  | 2  | 1 | 2  | 2
3  | 3  | 1 | 3  | 3
4  | 4  | 1 | 4  | 4
5  | 5  | 1 | 5  | 5
6  | 6  | 1 | 6  | 6
7  | 7  | 1 | 7  | 7
8  | 8  | 1 | 8  | 8
9  | 9  | 1 | 9  | 9
10 | 10 | 1 | 10 | 10

Fig. 6. Success rate of the AIRS when varying the intrusion type and the compromised resource.

Fig. 7. Success rate of the AIRS with varying number of executions of the inference process.


6. Conclusions and future work

The paper shows the advantages of response systems able to dynamically adapt the responses to the different types of attacks and context parameters. According to different scenarios related to the relevance of the compromised resources, this paper proposes three response metrics used by the AIRS to infer the optimum response against a detected intrusion. The SWRL language is proposed for the formal definition of these metrics and the automated inference process, aligning this information and behavior modeling with those used in the Semantic Web, such as ontologies and formal reasoning mechanisms.

These rules or policies have been deployed and tested in an ontology-based AIRS prototype based on one of the Semantic Web tools that can be used: the SWRL Pellet reasoner. The results of these tests show the feasibility of using the SWRL language to specify the response metrics, inferring the expected responses. The main drawback is the reaction time of the response inference process. It is expected that the evolution and maturity of the Semantic Web technologies will drastically reduce the performance limitations of the current tools, most of which are based directly on research results and need to be optimized within the scope of broader commercialization.

As future work, it would be interesting to implement a module which maps IDMEF alerts into the ontology in order to increase the chances of integrating the ontologies-based AIRS with existing systems. Some research work has been done related to the translation from real IDMEF alerts to an alert ontology [27]. Also, proactivity is something we are working on and consider as future work. Our envisaged approach will be based on identifying multi-step attacks by using prediction algorithms based on Hidden Markov Models, and being able to infer responses that will manage the predicted next attack.

Acknowledgments

This work has been carried out entirely under the SEGUR@ project (http://www.cenitsegura.es), subsidized by the Centre for the Development of Industrial Technology (CDTI), Spanish Ministry of Industry and Commerce, under the CENIT framework, reference number CENIT-2007 2004.

References

[1] Symantec. Symantec internet security threat report. Trends for 2010, vol. XVI; 2011.
[2] Ali Aydin M, Halim Zaim A, Gökhan Ceylan K. A hybrid intrusion detection system design for computer network security. Comput Elect Eng 2009;35(3):517–26.
[3] Stakhanova N, Basu S, Wong J. A taxonomy of intrusion response systems. Int J Inform Comput Secur 2007;1(1/2):169–84.
[4] Horrocks I, Patel-Schneider PF, Boley H, Tabet S, Grosof B, Dean M. SWRL: a semantic web rule language combining OWL and RuleML. <http://www.w3.org/Submission/SWRL/> [accessed 28.05.12].
[5] Shameli-Sendi A, Ezzati-Jivan N, Jabbarifar M, Dagenais M. Intrusion response systems: survey and taxonomy. Int J Comput Sci Network Secur (IJCSNS) 2012;12(1):1–14.
[6] Carver CA. Adaptive agent-based intrusion response. PhD thesis, Texas A&M University; 2001.
[7] Foo B, Wu Y-S, Mao Y-C, Bagchi S, Spafford E. ADEPTS: adaptive intrusion response using attack graphs in an E-commerce environment. In: International conference on dependable systems and networks (DSN'05); 2005. p. 508–17.
[8] Wu Y-S, Foo B, Mao Y-C, Bagchi S, Spafford E. Automated adaptive intrusion containment in systems of interacting services. Comput Networks 2007;51(5):1334–60.
[9] White GB, Fisch EA, Pooch UW. Cooperating security managers: a peer-based intrusion detection system. IEEE Network 1996:20–3.
[10] Porras PA, Neumann PG. EMERALD: event monitoring enabling responses to anomalous live disturbances. In: National information systems security conference (NISSC), Baltimore, MD; 1997.
[11] Stakhanova N, Basu S, Wong J. A cost-sensitive model for preemptive intrusion response systems. In: Proceedings of the 21st international conference on advanced networking and applications (AINA '07). IEEE Computer Society, Washington, DC, USA; 2007. p. 428–35.
[12] Papadaki M, Furnell SM. Achieving automated intrusion response: a prototype implementation. Inform Manage Comput Secur 2006;14(3):235–51.
[13] Mu C, Li Y. An intrusion response decision-making model based on hierarchical task network planning. Expert Syst Appl 2010;37(3):2465–72.
[14] Toth T, Kruegel C. Evaluating the impact of automated intrusion response mechanisms. In: Proceedings of the 18th annual computer security applications conference; 2002.
[15] Carver CA, Hill JMD, Pooch UW. Limiting uncertainty in intrusion response. In: Proceedings of the 2001 IEEE workshop on information assurance and security, United States Military Academy; 2001.
[16] Zhang Z, Lin X, Ho P-H. Measuring intrusion impacts for rational response: a state-based approach. In: Second international conference on communications and networking in China; 2007. p. 317–21.
[17] Strasburg C, Stakhanova N, Basu S, Wong JS. A framework for cost sensitive assessment of intrusion response selection. In: Proceedings of IEEE computer software and applications conference; 2009.
[18] Staab S, Studer R. Handbook on ontologies. 2nd ed. Germany: Springer; 2009.
[19] López de Vergara JE, Villagrá VA, Asensio JI, Berrocal J. Ontology-based network management: study cases and lessons learned. J Network Syst Manage 2009;17(3):234–54.
[20] Debar H, Curry D, Feinstein B. RFC 4765: the intrusion detection message exchange format (IDMEF); 2007.
[21] Cyganiak R, Bizer C, Garbers J, Maresch O, Becker C. The D2RQ mapping language, v0.8; 2012. <http://d2rq.org/d2rq-language> [accessed 28.05.12].
[22] Smith K, Welty C, McGuinness DL. OWL web ontology language guide. W3C recommendation; 10 February 2004.
[23] Mateos V, Villagrá VA, Romero F. Ontologies-based automated intrusion response system. In: Proceedings of the 3rd international conference on computational intelligence in security for information systems (CISIS '10); November 11–12, 2010.
[24] Gómez-Pérez A, Fernández-López M, Corcho O. Ontological engineering. Springer Verlag; 2004.
[25] Izquierdo VM, López F, et al. Methodology for information systems risk analysis and management (MAGERIT version 2); 2006.
[26] Galán F, Fernández D, López de Vergara JE, Casellas R. Using a model-driven architecture for technology-independent scenario configuration in networking testbeds. IEEE Commun Mag 2010:132–41.
[27] López de Vergara JE, Vázquez E, Martin A, Dubus S, Lepareux MN. Use of ontologies for the definition of alerts and policies in a network security platform. J Networks 2009;4(8):720–33.

Verónica Mateos has been a Ph.D. student at the Department of Telematics Systems Engineering at the Technical University of Madrid (UPM) since 2008. She received the M.S. degree in Telecommunications Engineering from the Technical University of Madrid (UPM) in 2008. Her research interests are in the areas of network security, ontologies and the Semantic Web. She has also participated in several national and international research projects.

Víctor A. Villagrá has been an associate professor in telematics engineering at the Technical University of Madrid (UPM) since 1992. He has been involved in several international research projects related to Network Management, Advanced Services Design and Network Security, as well as different national projects. He is the author or co-author of more than 60 scientific papers and the author of a textbook about security in telecommunication networks.

Francisco Romero is a computer engineer (Universidad Politécnica de Madrid). A researcher at Telefónica Digital in the security area, he has always been involved in traffic monitoring projects, whether QoS assurance, IPv6 traffic analysis or, as currently, the design of anomaly detection tools for Telefónica networks, focusing on the fields of critical infrastructure protection, anti-DDoS, fraud in e-banking, collaborative security and others.

Julio Berrocal is a Full Professor at the Technical University of Madrid, Spain (UPM). He received a Telecommunication Engineer degree in 1983 and a Ph.D. in Telecommunication in 1986, both from UPM. He has participated in several R&D projects of the European Union Programmes and Spanish R&D Programmes. His current research interests are in the management of telecommunication networks and services, multimedia networking and network security.