

    Int. J. Cloud Computing, Vol. X, No. Y, xxxx 1

    Copyright 200x Inderscience Enterprises Ltd.

Intrusion detection system in cloud computing environment

    Sudhir N. Dhage*

    Department of Computer Engineering,

    Sardar Patel Institute of Technology,

    Mumbai, 400 058, India

    E-mail: [email protected]

    *Corresponding author

    B.B. Meshram

Department of Computer Engineering, VJTI (Autonomous Institute),

    Mumbai, 400 019, India

    E-mail: [email protected]

Abstract: In recent years, with the growing popularity of cloud computing, security in the cloud has become an important issue. As prevention is better than cure, detecting and blocking an attack is better than responding after a system has been compromised. This paper proposes an architecture capable of detecting intrusions in a distributed cloud computing environment and safeguarding it from possible security breaches. It deploys a separate instance of IDS for each user and uses a separate controller to manage the instances. The IDS in this architecture can use signature-based as well as learning-based methods.

Keywords: cloud computing; intrusion detection; intrusion prevention.

Reference to this paper should be made as follows: Dhage, S.N. and Meshram, B.B. (xxxx) 'Intrusion detection system in cloud computing environment', Int. J. Cloud Computing, Vol. X, No. Y, pp.000–000.

Biographical notes: Sudhir N. Dhage received his BE in Computer Engineering from Government College of Engineering, Amravati, ME in Computer Engineering from Government Aided Walchand College of Engineering, Sangli, and PhD in Computer Engineering from VJTI (Autonomous Institute), Mumbai, India. He has participated in more than 21 refresher courses to meet the needs of current technology. He is currently an Associate Professor in the Department of Computer Engineering, Sardar Patel Institute of Technology, Andheri, Mumbai, University of Mumbai, India. His research interests include networks, network security, parallel and distributed computing, multimedia storage systems, cloud computing and cloud security. He is an Associate Life Member of the Computer Society of India (CSI).

B.B. Meshram is Professor and Head of the Computer Engineering Department at VJTI (Autonomous Institute), Matunga, Mumbai, Maharashtra, India. He holds Bachelor's, Master's and Doctoral degrees in Computer Engineering. He has participated in more than 16 refresher courses to meet the needs of current technology. He has chaired more than ten AICTE STTP programmes. He has


contributed more than 50 research papers to national and international journals. He is a Life Member of the Computer Society of India and the Institute of Engineers.

His current research interests include multimedia systems, distributed systems, databases, data warehousing, data mining, and network security.

    1 Introduction

    Cloud computing is one of the emerging technologies in the world. It is an internet-based

    computing technology, where shared resources such as software, platform, storage and

    information are provided to customers on demand. Cloud computing is a technology by

    which dynamically scalable and virtualised resources are provided to the users over the

    internet. Cloud computing customers do not own the physical infrastructure, thereby

    avoiding capital expenditure. They rent resources from a third-party provider, consume

    them as a service and pay only for resources that they use. Cloud computing is becoming

    increasingly associated with small and medium enterprises (SMEs), as they cannot afford

    the large capital expenditure which is required for traditional IT.

    Figure 1 A basic cloud network (see online version for colours)

    1.1 Cloud architecture

    The architecture of cloud (About.com, http://netsecurity.about.com) involves multiple

    cloud components communicating with each other over the application programming

    interfaces (APIs), usually web services. The two most significant components of cloud

    computing architecture are known as the front end and the back end. The front end is the

part seen by the client, i.e., the customer. This includes the client's network or computer,

    and the applications used to access the cloud via a user interface such as a web browser.

The back end of the cloud computing architecture is the cloud itself, which comprises various computers, servers and data storage devices.


    1.2 Risks in the cloud

    In order to create awareness among the users of cloud computing regarding the serious

    threats and vulnerabilities involved in cloud computing environments, a study on various

    risks is imperative. In the sections below, we discuss the different risks.

    1.2.1 Security risks

The state of preventing a system from attacks on its vulnerabilities is considered the system's security. Security risks involved with the governmental use of cloud computing have

various risk factors. Seven important identified risk factors in a cloud computing model

    are: access, availability, network load, integrity, data security, data location and data

    segregation.

    1.2.1.1 Access

A private organisation allows only authenticated users to access its data. The access privilege must be provided only to the concerned customers and auditors in

    order to minimise such risks. When there is an access from an internal to external source,

    the possibility of risk is more in case of sensitive data. Segregation of the data is very

    important in cloud computing as the data is distributed over a network of physical

    devices. Data corruption arises if appropriate segregation is not maintained. Currently,

    there are no federal policies addressing how government information is accessed.

    1.2.1.2 Availability

    Availability plays a major role in cloud computing since the needs of the customers

    should be attended on time. A research from the University of California had tracked the

    availability and outages of four major cloud vendors. It was found that overload on the

    system caused programming errors resulting in system crashes and failures. Due to the

lack of backup recovery, Apple MobileMe, Google Gmail, Citrix and Amazon S3 reported periods of unavailability ranging from two to 14 hours in a span of just 60 days.

    This resulted in a loss of confidence among the customers and the vendors. Natural

disasters can also present significant risks. A lightning strike at one of Amazon.com's

    facilities caused the service to go offline for approximately four hours. This component

    of the cloud was difficult to replace immediately and resulted in delays.

    1.2.1.3 Network load

    Cloud network load can also prove to be detrimental to performance of the cloud

    computing system. If the capacity of the cloud is greater than 80%, then the computers

    can become unresponsive due to high volumes. The computers and the servers crash due

    to high volume motion of data between the disk and the computer memory. The

    percentage of capacity threshold also poses a risk to the cloud users. When the threshold

exceeds 80%, the vendors protect their services and pass the degradation on to customers. It has been indicated that in certain cases the impact of system outages on users is still not assessed.


    Flexibility and scalability should be considered pivotal when designing and

implementing a cloud infrastructure. Money and time also play an important role in the

    design of the infrastructure. Customers will always have expectations on the durability

and the efficiency of the system. Going forward, customers will also demand interoperability, the ability to switch providers, and migration options. Another risk factor

    of cloud computing is the implementation of the APIs.

    1.2.1.4 Integrity

    Data integrity affects the accuracy of information maintained in the system. In a cloud

computing model, data validity, quality and security affect the system's operations and desired outcomes. Programme efficiency and performance are also addressed by integrity. An apt example would be that of a mobile phone service provider who

stored all the customers' data, including messages, contact lists, etc., in a Microsoft subsidiary. The provider lost the data and the cloud was unavailable. The customers had

    to wait until they got the necessary information from the cloud and the data was restored.

    1.2.1.5 Data security

    Another key criterion in a cloud is the data security. Data has to be appropriately secured

    from the outside world. This is necessary to ensure that data is protected and is less prone

    to corruption. With cloud computing becoming an upcoming trend, a number of

    vulnerabilities could arise when the data is being indiscriminately shared among the

    varied systems in cloud computing. Trust is an important factor which is missing in the

    present models as the service providers use diversified mechanisms which do not have

proper security measures. The following subsections describe the risk factors in cloud

    environments.

    1.2.1.6 Data location

Data location is another aspect in cloud computing where service providers are not concentrated in a single location but are distributed throughout the globe. It creates

    unawareness among the customers about the exact location of the cloud. This could

hinder investigations within the cloud and makes it difficult to audit the activity of the cloud,

    where the data is not stored in a particular data centre but in a distributed format. The

    users may not be familiar with the underlying environments of the varied components in

    the cloud.

    1.2.1.7 Data segregation

    Data segregation is not easily facilitated in all cloud environments as all the data cannot

    be segregated according to the user needs. Some customers do not encrypt the data as

    there are chances for the encryption itself to destroy the data. In short, cloud computing is

not an environment which works as a simple toolkit. Compromised servers are shut down whenever data needs to be recovered. The available data is not always delivered correctly to the customer when needed. When recovering the data there could be instances of

    replication of data in multiple sites. The restoration of data must be quick and complete to

    avoid further risks.


    We examine how cloud computing is assessed in a biomedical laboratory which

    experiences risks due to hackers (Kandukuri et al., 2009). In a biomedical laboratory,

    data is always exposed to threats both internal and external. Less separation is provided

    by the cloud in case of a separate server in a laboratory. The risks include the hacking of

    the hypervisor, where a shared CPU can be easily attacked. The data can be manipulated,

    deleted or destroyed as a result of the attack. Such attacks on biomedical data can have

serious implications to the end users. Thus, the database management system (DBMS) and

    web servers face vulnerability if the infrastructure of the cloud is not properly designed.

    There are certain non-technical risks which arise due to outsourcing of information.

    Encrypting the data from the technical aspect is important to ensure that the data is not

    hacked or attacked. Strong encryption is needed for sensitive data and this would mean

    increased costs. Table 1 provides a summary of the security mechanisms provided by

    major cloud service providers.

    Table 1 Security mechanisms of service providers

Password recovery: 90% use common services; 10% use sophisticated techniques.

Encryption mechanism: 40% use SSL encryption; 20% use an encryption mechanism; 40% utilise advanced methods like HTTP.

Data location: 70% of data centres are located in more than one country.

Availability history: 40% indicate data loss; 60% indicate that data availability is good.

Proprietary/open: 10% have an open mechanism.

Monitoring services: 70% provide extra monitoring services; 10% use automatic techniques; 20% are not open about the issue.

    1.2.2 Privacy risks

    Several complex privacy and confidentiality issues are associated with cloud computing.

    In this section, we dwell on some of these different privacy risks involved in cloud

    computing environments. There are no laws that block a user from disclosing the

    information to the cloud providers. This disclosure of information sometimes leads to

    serious consequences. Some business users may not be interested in sharing their

    information, but such information is sometimes placed in the cloud and this may lead to

    adverse impacts on their business. For example, recently when Facebook changed its

    terms of service, the customers were not informed about it. This made it possible to

    broadcast the information of the Facebook customers to others if the privacy options were

    not set accordingly. This amplifies the importance of reading and understanding the terms

of service and the privacy policy of the cloud providers before placing any information in the cloud. If it is not possible to understand the policy or it does not satisfy the needs of a

    user, the user can and must always opt for a different cloud provider.


    Several organisations have analysed the issues of privacy and confidentiality in the

    cloud computing environment. These analyses have been published by a privacy

    commissioner (Pfleeger, 2006), an industry association (Mazzariello et al., 2010) and a

    commercial publisher (Wang et al., 2009).

    Domestic clouds and trans-border clouds are two distinct cloud structures. Certain

    privacy issues are specific to each cloud structure. In a domestic cloud structure, the

    complete cloud is physically located within the same territory. This gives rise to fewer

    privacy issues such as whether the data is collected, used and stored in an appropriate

manner and whether the data is disclosed to authorised recipients only.

    Another privacy issue in the domestic cloud structure is related to the rights possessed

    by the data owners to access their data. The circumstances under which the data owner

    can access and correct the data should be defined clearly. The above privacy issues can

    also be extended to all other cloud computing environments in general.

    Trans-border cloud structures have their cloud transferred across the borders. This

    gives rise to more privacy issues. The best example for a trans-border cloud operator is

Google Docs. People from different parts of the world store data in Google Docs. When data is transferred between different organisations located in different countries,

    serious privacy issues could occur. The privacy principles regulating trans-border

    dataflow defined by the different countries should be given importance by the cloud

providers. For example, Australia's National Privacy Principle 9 deals with trans-border

    data flows and is different from privacy regulations of other nations (Nurmi et al., 2009).

    Another example is where a health care provider uses a trans-border cloud computing

product to store and/or process patient data; in that case, the provider would have to ensure that the transfer is

    permitted under the relevant privacy law (Data Security).

    1.2.3 Consumer risks

    The use of cloud computing services can cause risks to consumers. Before using a cloud

    computing product, the consumers should familiarise themselves with the product,

    confirm whether the product satisfies their needs and understand the risks involved in

using the product. However, it is not possible for the consumers to understand all

    the risks involved in using a cloud computing product. The supply of consumer cloud

    computing products is governed by contracts drafted by the providers with no input from

    the consumers. Sometimes the provider makes changes to the terms on which the product

    is provided and the consumers remain unaware about it. These sudden changes without

    informing the consumers can lead to major consumer risks. In order to avoid both the

privacy and consumer risks, the consumers need to familiarise themselves with the cloud computing

    product they will be using. For example, when using Google Docs, one must read at least

    the following terms:

    universal terms of service

    additional terms

    programme policies

    privacy policy

    copyright notices



    By examining the above documents, several interesting features become apparent. First,

to use any of Google's services, a consumer has to agree to be bound by a range of terms

    unilaterally decided by Google and those terms may be unilaterally changed by Google

    without specific notification (Denning, 1987). Google also states that they will treat a

consumer's use of their services as an acceptance of the terms included in Google's

contract. Further, it can be noted that, when Google disables access to a consumer's

    account, the consumer may be prevented from accessing content from their account. This

    is serious in relation to Google Docs services.

    2 Security issues in cloud computing

This section discusses technical security issues arising from the usage of cloud services, and especially from the underlying technologies used to build these cross-domain, internet-connected collaborations. At first, major technologies used in the context of cloud computing and security are outlined. Then a set of security-related issues that apply to different cloud computing scenarios are discussed. Each issue is briefly described and

    complemented with a short sketch on countermeasure approaches that are both sound and

    applicable in real-world scenarios.

    2.1 XML signature

A well-known type of attack on protocols using XML signature for authentication or

    integrity protection is XML signature element wrapping. At first, a SOAP message is sent

    by a legitimate client. The SOAP body contains a request for the file me.jpg and was

signed by the sender. The signature is enclosed in the SOAP header and refers to the

    signed message fragment using an XPointer to the Id attribute with the value body. If an

    attacker eavesdrops such a message, he can perform the following attack. The original

    body is moved to a newly inserted wrapping element (giving the attack its name) inside

the SOAP header, and a new body is created. This body contains the operation the attacker wants to perform with the original sender's authorisation, here the request for the

    file cv.doc. The resulting message still contains a valid signature of a legitimate user,

    thus the service executes the modified request.
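To make the mechanics concrete, the following minimal Python sketch (ours, not the paper's) rearranges a parsed SOAP envelope the way the attack describes; the wrapper element and the function name are illustrative.

import xml.etree.ElementTree as ET

SOAP_NS = 'http://schemas.xmlsoap.org/soap/envelope/'

def wrap_attack(envelope, new_body_xml):
    # Locate the header and the signed body (direct children of the envelope).
    header = envelope.find('{%s}Header' % SOAP_NS)
    body = envelope.find('{%s}Body' % SOAP_NS)
    # Move the signed original body under a new wrapper element in the header;
    # the signature's XPointer reference to Id="body" still resolves to it.
    envelope.remove(body)
    wrapper = ET.SubElement(header, 'Wrapper')
    wrapper.append(body)
    # Install the attacker's body, e.g., a request for cv.doc instead of me.jpg.
    envelope.append(ET.fromstring(new_body_xml))
    return envelope

The usual countermeasure is to verify that the signed element really is the SOAP body reached by its absolute position in the tree, not merely any element carrying the referenced Id.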

    2.2 Browser security

    Modern web browsers with their AJAX techniques (JavaScript, XMLHttpRequest,

    Plugins) are ideally suited for I/O. But what about security? A partial answer is given in,

    where different browser security policies (with the notable exception of TLS) are

    compared for the most important browser releases. If we additionally take into account

    TLS, which is used for host authentication and data encryption, these shortcomings

    become even more obvious. Web browsers cannot directly make use of XML signature or

    XML encryption: data can only be encrypted through TLS, and signatures are only used

within the TLS handshake. For all other cryptographic datasets within WS-security, the browser only serves as a passive data store. Some simple workarounds have been

    proposed to use, e.g., TLS encryption instead of XML encryption, but major security

    problems with this approach have been described in the literature and working attacks

were implemented as proofs of concept.


    2.2.1 The legacy same origin policy

With the inclusion of scripting languages (typically JavaScript) into web pages, it became important to define access rights for these scripts. A natural choice is to allow read/write

    operations on content from the same origin, and to disallow any access to content from a

    different origin. This is exactly what the legacy same origin policy does, where origin is

    defined as the same application, which can be defined in a web context by the tuple

    (domain name, protocol, port).
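A minimal sketch of this check, assuming the conventional default ports; the code is illustrative rather than any browser's actual implementation:

from urllib.parse import urlsplit

DEFAULT_PORTS = {'http': 80, 'https': 443}

def origin(url):
    # The legacy same origin policy identifies an origin by the tuple
    # (protocol, domain name, port).
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port or DEFAULT_PORTS.get(parts.scheme))

def same_origin(url_a, url_b):
    return origin(url_a) == origin(url_b)

# http on the implicit default port 80 matches; changing the protocol does not.
assert same_origin('http://example.com/a', 'http://example.com:80/b')
assert not same_origin('http://example.com/', 'https://example.com/')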

    2.2.2 Attacks on browser-based cloud authentication

    The realisation of these security issues within browser-based protocols with cloud

    computing can best be explained using federated identity management (FIM) protocols:

    Since the browser itself is unable to generate cryptographically valid XML tokens (e.g.,

    SAML tokens) to authenticate against the cloud, this is done with the help of a trusted

third party. The prototype for this class of protocols is Microsoft's passport, which has been broken by Slemko. If no direct login is possible at a server because the browser does not have the necessary credentials, an HTTP redirect is sent to the passport login server,

    where the user can enter his credentials (e.g., username/password). The passport server

    then translates this authentication into a Kerberos token, which is sent to the requesting

    server through another HTTP redirect. The main security problem with passport is that

    these Kerberos tokens are not bound to the browser, and that they are only protected by

    the SOP. If an attacker can access these tokens, he can access all services of the victim.

    2.3 Cloud integrity and binding issues

    A major responsibility of a cloud computing system consists in maintaining and

    coordinating instances of virtual machines (IaaS) or explicit service implementation

modules (PaaS). On request of any user, the cloud system is responsible for determining

    and eventually instantiating a free-to-use instance of the requested service

implementation type. Then, the address for accessing that new instance is to be communicated back to the requesting user.

    Generally, this task requires some metadata on the service implementation modules,

    at least for identification purposes. For the specific PaaS case of web services provided

    via the cloud, this metadata may also cover all web service description documents

    related to the specific service implementation. For instance, the web service description

    document itself (the WSDL file) should not only be present within the service

    implementation instance, but also be provided by the cloud system in order to deliver it to

    its users on demand.

    2.3.1 Cloud malware injection attack

    A first considerable attack attempt aims at injecting a malicious service implementation

or virtual machine into the cloud system. Such kind of cloud malware could serve any particular purpose the adversary is interested in, ranging from eavesdropping via subtle data modifications to full functionality changes or blockings. This attack requires the

    adversary to create its own malicious service implementation module (SaaS or PaaS) or

    virtual machine instance (IaaS), and add it to the cloud system. Then, the adversary has to

    trick the cloud system so that it treats the new service implementation instance as one of


    the valid instances for the particular service attacked by the adversary. If this succeeds,

    the cloud system automatically redirects valid user requests to the malicious service

implementation, and the adversary's code is executed.

    2.3.2 Metadata spoofing attack

As described in, the metadata spoofing attack aims at maliciously reengineering a web service's metadata descriptions. For instance, an adversary may modify a service's

    WSDL so that a call to a deleteUser operation syntactically looks like a call to another

    operation, e.g., setAdminRights. Thus, once a user is given such a modified WSDL

    document, each of his deleteUser operation invocations will result in SOAP messages

    that at the server side look like and thus are interpreted as invocations of the

    setAdminRights operation. In the end, an adversary could manage to create a bunch of

user logins that are thought to be deleted by the application's semantics, but in reality are

    still valid, and additionally are provided with administrator level access rights. For static

    web service invocations, this attack obviously is not so promising for the adversary, as

    the task of deriving service invocation code from the WSDL description usually is done

    just once, at the time of client code generation. Thus, the attack here can only be

    successful if the adversary manages to interfere at the one single moment when the

service client's developer leeches for the service's WSDL file. Additionally, the risk of the attack being discovered presumably is rather high, especially in the presence of sound

    testing methods.
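The paper does not prescribe a countermeasure here; one hedged possibility is to pin a digest of the audited WSDL and compare it before generating client code, as in this sketch (the pinned value is a placeholder):

import hashlib

# Digest recorded when the WSDL was first audited (placeholder value).
PINNED_WSDL_SHA256 = '0000000000000000000000000000000000000000000000000000000000000000'

def wsdl_is_authentic(wsdl_bytes):
    # Refuse client code generation if the retrieved WSDL no longer matches
    # the audited copy, e.g., after a deleteUser/setAdminRights rewrite.
    return hashlib.sha256(wsdl_bytes).hexdigest() == PINNED_WSDL_SHA256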

    2.3.3 Flooding attacks

    A major aspect of cloud computing consists in outsourcing basic operational tasks to a

    cloud system provider. Among these basic tasks, one of the most important ones is server

hardware maintenance. Thus, instead of operating their own internal data centre, the

    paradigm of cloud computing enables companies (users) to rent server hardware on

    demand (IaaS). This approach provides valuable economic benefits when it comes to

    dynamics in server load, as for instance day-and-night cycles can be attenuated by having

    the data traffic of different time-zones operated by the same servers. Thus, instead of

    buying sufficient server hardware for the high workload times, cloud computing enables

    a dynamic adaptation of hardware requirements to the actual workload occurring. Though

    the feature of providing more computational power on demand is appreciated in the case

    of valid users, it poses severe troubles in the presence of an attacker. The corresponding

    threat is that of flooding attacks, which basically consist in an attacker sending a huge

    amount of non-sense requests to a certain service. As each of these requests has to be

    processed by the service implementation in order to determine its invalidity, this causes a

    certain amount of workload per attack request, which in the case of a flood of

    requests usually would cause a denial of service to the server hardware.
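As a sketch of one common mitigation, not taken from the paper, a per-source token bucket can bound the workload an attacker can impose; RATE and BURST are illustrative values:

import time
from collections import defaultdict

RATE = 10.0   # sustained requests per second allowed per source
BURST = 20.0  # short-term burst allowance

_buckets = defaultdict(lambda: {'tokens': BURST, 'stamp': time.monotonic()})

def admit(source_ip):
    # Refill the source's bucket for the elapsed time, then spend one token
    # per request; an empty bucket means the request is dropped before it
    # reaches the service implementation.
    bucket = _buckets[source_ip]
    now = time.monotonic()
    bucket['tokens'] = min(BURST, bucket['tokens'] + (now - bucket['stamp']) * RATE)
    bucket['stamp'] = now
    if bucket['tokens'] >= 1.0:
        bucket['tokens'] -= 1.0
        return True
    return False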


    3 Service level agreement

    3.1 The service level agreement

Clouds have different architectures based on the services they provide. The data is stored in centralised locations called data centres, which have a large data storage capacity. The data as well as the processing resides somewhere on servers. So, the clients have to trust the provider on availability as well as data security. The service level agreement (SLA) is the only legal agreement between the service provider and the client. The only means by which the provider can gain the trust of clients is through the SLA, so it has to be standardised.

    A SLA is a document which defines the relationship between two parties: the

    provider and the recipient. This is clearly an extremely important item of documentation

    for both parties. If used properly it should:

identify and define the customer's needs

    provide a framework for understanding

    simplify complex issues

    reduce areas of conflict

    encourage dialog in the event of disputes

    eliminate unrealistic expectations.

    3.2 SLA has to discuss how the following security risks are handled

    1 privileged user access

    2 regulatory compliance

    3 data location

    4 data segregation

    5 recovery

    6 investigative support

    7 long-term viability.

    3.3 Typical SLA contents

1 Definition of services: This is the most critical section of the agreement as it

    describes the services and the manner in which those services are to be delivered.

2 Performance management: Monitoring and measuring service level performance.

    Essentially, every service must be capable of being measured and the results

    analysed and reported.

3 Problem management: Its purpose is to minimise the adverse impact of incidents and

    problems.


4 Customer duties and responsibilities: The customer must arrange for access, facilities and resources for the supplier's employees who need to work on-site.

5 Warranties and remedies: This section of the SLA typically covers the following key topics: service quality, indemnities, third party claims, remedies for breaches, exclusions, and force majeure.

6 Security: Security is a particularly critical feature of any SLA. The customer must

    provide controlled physical and logical access to its premises and information.

Equally, the supplier must respect and comply with the client's security policies and

    procedures.

    7 Disaster recovery and business continuity.

8 Termination: This section of the SLA agreement typically covers the following key

    topics:

    termination at end of initial term

    termination for convenience

    termination for cause

    payments on termination.

    3.4 Present SLAs

    The SLA is incorporated into the master service agreement and applicable to all services

    delivered directly to customers of cloud service provider. The SLA is not applicable to

    unrelated third parties or third parties lacking privity of contract with that particular cloud

    service provider. The uptime guarantees and the resulting SLA credits are applied in

monthly terms unless specified otherwise. All SLA guarantees and related information are listed below:

SLA credit claim: To properly claim an SLA credit due, a customer user must open a

    Sales ticket by sending an e-mail to sales within seven days of the purported outage.

SLA claim fault: False or repetitive claims are also a violation of the terms of service

    and may be subject to service suspension. Customers participating in malicious or

    aggressive internet activities thereby causing attacks or counterattacks do not qualify

    for SLA claims and shall be in violation of the acceptable use policy.

Public network: All public network services include redundant carrier grade internet

    backbone connections, advanced intrusion detection systems, denial of service

    mitigation, traffic analysis, and detailed bandwidth graphs.

Private network: All private network services include access to the secure VPN

    connection, unlimited bandwidth between servers, unlimited uploads/downloads to

    servers, access to contracted services, traffic analysis, and detailed bandwidth

    graphs.

Redundant infrastructure: All computer equipment and related services are served

    by redundant UPS power units with backup onsite diesel generators.


Hardware upgrades: Hardware upgrades must be scheduled and confirmed in

    advance through the online ticketing system.

    4 Intrusion detection systems

An intrusion detection system (IDS) monitors network traffic for suspicious

    activity and alerts the system or network administrator. In some cases the IDS may also

    respond to anomalous or malicious traffic by taking action such as blocking the user or

    source IP address from accessing the network.

    IDS come in a variety of flavours and approach the goal of detecting suspicious

    traffic in different ways. There are network-based (network intrusion detection systems,

    NIDS) and host-based (host intrusion detection systems, HIDS) intrusion detection

    systems. There are IDS that detect based on looking for specific signatures of known

threats, similar to the way antivirus software typically detects and protects against malware, and there are IDS that detect based on comparing traffic patterns against a baseline and looking for anomalies. There are IDS that simply monitor and alert and there

    are IDS that perform an action or actions in response to a detected threat. We will cover

    each of these briefly.

    4.1 Network intrusion detection systems

    Network intrusion detection systems are placed at a strategic point or points within the

    network to monitor traffic to and from all devices on the network. Ideally, you would

scan all inbound and outbound traffic; however, doing so might create a bottleneck that

    would impair the overall speed of the network.

    4.2 Host intrusion detection systems

Host intrusion detection systems are run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected.

    4.3 Signature-based

    A signature-based IDS will monitor packets on the network and compare them against a

    database of signatures or attributes from known malicious threats. This is similar to the

    way most antivirus software detects malware. The issue is that there will be a lag between

    a new threat being discovered in the wild and the signature for detecting that threat being

    applied to your IDS. During that lag time your IDS would be unable to detect the new

    threat.
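A minimal sketch of this matching, with placeholder signatures rather than a real rule set:

# Signatures map a byte pattern to a threat name; real rule sets (e.g., for
# a production NIDS) are far richer, these two entries are only placeholders.
SIGNATURES = {
    b'/etc/passwd': 'path traversal attempt',
    b'<script>': 'possible cross-site scripting',
}

def match_signatures(payload):
    # Return the names of all known threats whose pattern occurs in the payload.
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(match_signatures(b'GET /../../etc/passwd HTTP/1.0'))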

    4.4 Anomaly-based

    An IDS which is anomaly-based will monitor network traffic and compare it against an

established baseline. The baseline will identify what is normal for that network: what

    sort of bandwidth is generally used, what protocols are used, what ports and devices


generally connect to each other, and alert the administrator or user when traffic is detected which is anomalous, or significantly different, from the baseline. Intrusion

    prevention is the process in which first intrusion detection is performed and then attempts

    are made to stop the detected possible intrusions. Intrusion detection and prevention

    systems (IDPS) are a combination of intrusion detection and intrusion prevention

systems. The primary objectives of IDPS are to identify possible intrusions, log information about them, attempt to stop them, and report them to security

    administrators.
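Returning to the anomaly-based approach above, a minimal sketch of such a baseline comparison, using a simple k-standard-deviations rule on packet rate; the baseline samples and threshold are illustrative:

from statistics import mean, stdev

# Baseline samples of packets per second learned during normal operation.
baseline_pps = [120, 130, 115, 125, 128, 122]

def is_anomalous(observed_pps, k=3.0):
    # Flag traffic whose rate deviates from the baseline mean by more than
    # k standard deviations.
    mu, sigma = mean(baseline_pps), stdev(baseline_pps)
    return abs(observed_pps - mu) > k * sigma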

    5 IDS in the cloud

    In classical enterprise systems, an IDS is normally deployed on dedicated hardware at the

    border of the networking infrastructure that is to be defended, in order to protect it

    from external attacks. In a cloud computing environment, where computing and

communication resources are shared among several users on an on-demand, pay-per-use basis, such a strategy will not be successful and effective. Therefore, a proper defence

    strategy in the cloud needs to be properly distributed so that it can detect and prevent the

    attacks that originate within the cloud itself and also from the users using the cloud

    technology from different geographic locations through the internet.

For both cloud users and cloud providers, deploying IDS sensors in the cloud

    infrastructure is highly motivated. The cloud users need IDS to detect attacks on their

    services. Additionally, the cloud user needs to know if the used services or hosts are used

    to attack other victims. It could be useful to separate the IDS for each user from the actual

    component being monitored. If the IDS is used to monitor a VM host on the host itself, it

    cannot be guaranteed that the IDS (e.g., a HIDS) works properly when the host is

    compromised, as the attacker could have modified it not to send any reports. The cloud

    user needs a separate set of rules and thresholds to configure their private IDS. Each user

    may have different requirements and rule sets for the concrete IDS running to secure its

infrastructure; for example, a user running only Linux VMs with an ssh service can simply drop the rules used to detect web-based attacks and Windows-based attacks in the related NIDS.

    The cloud providers need to detect attacks on their cloud infrastructure, i.e., attacks on

    the infrastructure itself as well as attacks on the services of the users. These attacks can

    be performed by an external attacker or by the user itself who may or may not be

    compromised. Furthermore, the providers need to know if their infrastructure is being

    used by attackers to penetrate other victims, e.g., a DDoS attack conducted from a cloud

provider may affect its reputation and can be easily disrupted by the provider itself. To

    secure the IDS itself and to optimise the efficiency, the IDS needs to be separated from

    the target being monitored. Additionally, the cloud provider can use VMM functions to

monitor virtual machines. This considerably increases the efficiency of detection.

    6 Methods to enforce security

A security policy defines what 'secure' means for a system or a set of systems. Security policies can be informal or highly mathematical in nature. We studied two security models: the

    Bell-LaPadula model and the Biba integrity model.


    6.1 The Bell-LaPadula model

    The Bell-LaPadula Model corresponds to military-style classifications. It has influenced

    the development of many other models and indeed much of the development of computer

    security technologies.

    The simplest type of confidentiality classification is a set of security clearances

    arranged in a linear (total) ordering. These clearances represent sensitivity levels. The

    higher the security clearance, the more sensitive the information (and the greater the need

    to keep it confidential). A subject has a security clearance; an object has a security

    classification. When we refer to both subject clearances and object classifications, we use

    the term classification. The goal of the Bell-LaPadula security model is to prevent read

access to objects at a security classification higher than the subject's clearance.

    The Bell-LaPadula security model combines mandatory and discretionary access

controls. In what follows, 'S has discretionary read (write) access to O' means that the

    access control matrix entry for S and O corresponding to the discretionary access control

    component contains a read (write) right. In other words, were the mandatory controls not

    present, S would be able to read (write) O.

Let L(S) = ls be the security clearance of subject S, and let L(O) = lo be the security classification of object O. For all security classifications li, i = 0, ..., k − 1, li < li+1.

Simple security condition: S can read O if and only if lo ≤ ls and S has discretionary read access to O.

*-property: S can write O if and only if ls ≤ lo and S has discretionary write access to O.


6.2 The Biba integrity model

The Biba model is the dual of the Bell-LaPadula model: it protects integrity rather than confidentiality. Let i(s) and i(o) denote the integrity levels of subject s and object o:

1 s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)

2 s ∈ S can read o ∈ O if and only if i(s) ≤ i(o).
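A small sketch of both sets of checks, with illustrative levels and a toy discretionary access matrix:

# Security levels are integers with li < li+1; the discretionary matrix
# holds the read/write rights. All names and values here are illustrative.
clearance = {'S': 2}                   # L(S), subject clearances
classification = {'O': 1}              # L(O), object classifications
integrity = {'s': 1, 'o': 2}           # i(.), integrity levels
dac = {('S', 'O'): {'read', 'write'}}  # discretionary access matrix

def blp_can_read(s, o):
    # Simple security condition: no read up.
    return classification[o] <= clearance[s] and 'read' in dac.get((s, o), set())

def blp_can_write(s, o):
    # *-property: no write down.
    return clearance[s] <= classification[o] and 'write' in dac.get((s, o), set())

def biba_can_write(s, o):
    # Biba: a subject may not write objects of higher integrity.
    return integrity[o] <= integrity[s]

def biba_can_read(s, o):
    # Biba: a subject may not read objects of lower integrity.
    return integrity[s] <= integrity[o]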


    6.3 Access control list

    An access control list is prepared for every object, and the list shows all subjects who

    should have access to the object and what that access is. As shown in Figure 2, subjects

    A, B and C will have access to object abc.txt. The operating system will maintain just one

    access list for abc.txt, showing access rights for A, B and C.

    Figure 2 Access control list
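A minimal sketch of such a per-object list; the specific rights for A, B and C are assumptions, since the figure only shows that they have access:

# One access control list per object, naming the subjects and their rights.
acl = {
    'abc.txt': {'A': {'read', 'write'}, 'B': {'read'}, 'C': {'read'}},
}

def allowed(subject, obj, right):
    # Access is granted only if the object's list names the subject with that right.
    return right in acl.get(obj, {}).get(subject, set())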

    7 Architecture of IDS

    IDS tries to find out if certain conditions cause intrusions based on some models

    of intrusion. A model classifies those conditions as good or bad states. Three

types of models are proposed. In the anomaly model (Bishop, 2002; About.com, http://netsecurity.about.com), states or actions that are statistically anomalous are considered bad. In the misuse model (Bishop, 2002), states or actions are compared with states that are considered intrusions, and if they match then those states are considered bad. The specification model (Bishop, 2002) classifies states that violate the specifications as bad. Here, the models can be adaptive, which may use neural networks to learn the different kinds of intrusions, or they may be static, based on some fixed pattern to determine intrusions. In this architecture, we use a static model which is a combination of the anomaly and misuse models.

IDS in a cloud computing environment can be both network- and host-based. Network-based IDS analyses traffic flowing through a network segment by capturing packets in real time and checking them against certain patterns, while host-based IDS resides on a single host, analyses traffic to and from that host, and also monitors


activities that only the administrator is allowed to do on that host. The IDS which is shown in

    Figure 3 is both network-based and host-based.

    Figure 3 Basic architecture of IDS

    7.1 Architecture

    In the architecture shown in Figure 3, IDS is deployed in cloud computing environment.

    Here, we have one single IDS controller for a cloud service provider. It can be seen that

when there is only one IDS in the entire network, the load on it increases as the number of hosts

    increases. Apart from that it is difficult to keep track of different kinds of attacks or

intrusions which are acting on each of the hosts present in the network. In order to

    overcome this limitation, we propose an architecture in which mini IDS instances are

    deployed between each user of cloud and the cloud service provider. As a result, the load

on each IDS instance will be less than that on a single IDS and hence that small IDS instance will be able to do its work in a better way. For example, the number of packets dropped will be lower due to the smaller load on each single IDS instance.


    Each of these IDS instances will be provided by the IDS controller. Whenever any

user wants to access any service which is provided by the cloud service provider, it is the duty of the IDS controller to provide an IDS instance to that user. Each of the user's activities will be monitored by that instance and when the user ends the session, a log of that session will be sent by the IDS instance to the IDS controller and stored in the cloud logs. The next time the user starts a session, the IDS controller will query the knowledge base. The knowledge base is stored in the cloud and contains information about the pattern of the user's activities

    based on the information stored in cloud logs.
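A sketch of this controller behaviour; the class and method names are ours, and the knowledge base is assumed to expose a query(user) lookup:

class IDSInstance:
    def __init__(self, user, profile):
        # The profile is the user's past activity pattern from the knowledge base.
        self.user, self.profile, self.events = user, profile, []

    def collect_log(self):
        return {'user': self.user, 'events': self.events}

class IDSController:
    def __init__(self, knowledge_base, cloud_logs):
        self.kb, self.cloud_logs, self.instances = knowledge_base, cloud_logs, {}

    def start_session(self, user):
        # One IDS instance per user, configured from the user's known pattern.
        self.instances[user] = IDSInstance(user, self.kb.query(user))
        return self.instances[user]

    def end_session(self, user):
        # The session log flows back into the cloud logs for the knowledge base.
        self.cloud_logs.append(self.instances.pop(user).collect_log())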

    This knowledge base can use neural networks to learn new pattern or can be static.

    Each time when an IDS instance is provided to a particular user, information about its

previous activities will be queried by the IDS controller from the knowledge base. This pattern of activities can be used to detect any intrusions from the working pattern of the user. It is

also possible to apply different rules for different users so that all the features of the intrusion detection system are not required to be deployed when only a few are required by a particular user. The above IDS instance needs to be deployed on each of the three layers, namely system, platform and application.

In the above discussed architecture, there is a one-to-one relationship between every

    user and IDS instance assigned to him, i.e., every user will be assigned only one IDS

instance by the IDS controller. But there is a many-to-many relationship between IDS instances and node controllers in the cloud, i.e., one IDS instance can be connected to many node controllers and one node controller can connect to many IDS instances. Thus, if any user uses more than one service in the cloud, then he will be connected to many node

    controllers through only one IDS instance. The advantage of this kind of cardinality is

    that all patterns of a particular user will be monitored by one single IDS and hence it will

    be easier to detect intrusions. Similarly, even if multiple users connect to the same node

controller, their activities will be monitored by different instances of IDS and hence one user's activities will not affect another user.

    Figure 4 Cardinality in the system

In the architecture shown in Figure 5, we define a few more terms, namely agents, directors

    and notifiers (Bishop, 2002).

    Agents are present inside IDS instance and they provide information from data

    sources like log files, another process or network, etc., to directors which are present in

    IDS controller. The director itself reduces the incoming log entries to eliminate

    unnecessary and redundant records and acts as an analyser. It then uses an analysis

engine, querying the knowledge base to determine if an attack is possible. If an attack is expected to occur, the notifier takes some action in order to respond to that attack. So the main task of analysing the different input patterns of a user based on his activities takes place in the IDS controller.
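A sketch of this pipeline; the knowledge-base methods is_redundant and matches_attack are assumed interfaces, and the notifier here merely prints:

class Agent:
    # Runs inside an IDS instance and forwards raw records from a data
    # source (log file, process, network) to the director.
    def __init__(self, source):
        self.source = source

    def collect(self):
        return self.source.read_records()

class Notifier:
    # Reacts once the director expects an attack; here it only prints.
    def respond(self, record):
        print('possible intrusion:', record)

class Director:
    # Lives in the IDS controller: filters redundant records, then consults
    # the knowledge base to decide whether an attack is possible.
    def __init__(self, knowledge_base, notifier):
        self.kb, self.notifier = knowledge_base, notifier

    def analyse(self, records):
        for record in records:
            if self.kb.is_redundant(record):
                continue
            if self.kb.matches_attack(record):
                self.notifier.respond(record)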


    Figure 5 Internal architecture of IDS

    7.2 Defining subjects and objects for our architecture

    Any security model defines subjects and objects. In the cloud computing environment,

we define our subjects to be the user of the cloud, the attacker trying to gain control, the

    cloud controller, cluster controllers, node controllers, and processes running on various

    node controllers as well as our system itself. The objects are files, programmes,

resources, etc., available with the cloud, machine images, the user database and cloud

    components (CLC, CC, NC, etc.), the knowledge base and our system. Now after

    defining objects and subjects for our architecture, we can use the same security models

    that were discussed above in Section 6 for enforcing confidentiality and integrity in our

    system.

    8 Distributed intrusion detection in clouds using mobile agents

    8.1 IDS model

The aim is to build a robust distributed hybrid model for intrusion detection in the cloud which covers the flaws of the traditional models while using their useful features.

The proposed hybrid model design is inspired by two models, namely peer-to-peer IDS based on mobile agents and distributed intrusion detection using mobile agents

    (DIDMA). DIDMA is enhanced by adding new components (such as data mining, heart

beat, etc.) and applied to each subnet of the network, while the peer-to-peer model is used to

    connect all subnets together.

    8.2 Components of the IDS in a subnet

The fundamental design of the proposed hybrid model in each subnet of virtual machines consists of four main components, namely IDS control centre (IDSCC), agency,

    application specific static agent detector and specialised investigative mobile agent.


    Static agents (SA) should generate an alert whenever they detect suspicious activities,

then save the information about those activities in a log file and send the alert IDs to the IDSCC. Then, the IDSCC will send an investigative task-specific mobile agent to every agency that sent

    similar alerts.

The MA will visit and investigate all those VMs, collect information, correlate it and finally send or carry back the result to the IDSCC. Consequently, the alerting console in the IDSCC will analyse the incoming information and match it against the intrusion patterns in the IDSCC database.

    8.2.1 Main components

1 IDS agency: Mobile agents need an environment to become alive, which is called an agency. An agency is responsible for hosting and executing agents in parallel and provides them with an environment so that they can access services, communicate with

    each other, and migrate to other agencies. An agency also controls the execution of

    agents and protects the underlying VMs from unauthorised access by malicious

    agents. In addition, since virtualisation creates a level of isolation, the physical

    machine resources can be protected by executing agents on VE.

2 Application specific static agent detectors (SAD): SADs act like VM monitors, generating ID events whenever traces of an attack are detected, and these events are sent in the form of structured messages to the IDSCC (Data Security). The SAD is capable of

    monitoring the VM for different classes of attacks. The SAD is responsible for

parsing the log files, checking for intrusion-related data patterns in log files,

    separating data related to the attack from the rest of the data, and formatting the data

    as required by the investigative MA.

3 Specialised investigative mobile agents (IMA): IMAs are responsible for collecting evidence of an attack from all the attacked VMs for further analysis and auditing.

    Then, they have to correlate and aggregate that data to detect distributed attacks.

    Each IMA is only responsible for detecting certain types of intrusions.

4 IDSCC: An IDSCC is the central point of IDS component administration in each subnet. It includes all the components that a normal VM does and also the following

    components:

a Databases: There should be a database of all intrusion patterns which can be used by the alerting console to raise the alarm if patterns match the detected suspicious activities. All event IDs reported by SADs are stored in another database. In addition, the IDSCC should keep an updated status of the VMs. A VM in our system can have one of three statuses: normal, compromised, or migrated.

b Alerting console: This component compares the spotted suspicious activity with the intrusion database and raises the alarm if they match.

c Agent generator: Generates task-specific agents for detecting intrusions (SAD and IMA), even new ones, by using knowledge that is generated by the data mining inference engine or obtained from previous experience.

d Mobile agent dispatcher: Dispatches investigative mobile agents to the VMs based on the ID of the event or suspicious activity received from their SADs. In addition, it determines the list of compromised agencies (LCA) for IMAs.


e Data mining inference engine: Uses machine learning to deduce knowledge for detecting new intrusions from the system databases, which contain detected intrusions, system logs, and incoming information from SADs.

f Trust level manager: Defines the trust level for all IDS agencies in the subnet; furthermore, it keeps the trust level of the other IDSCCs in the same neighbourhood of networks. There are three trust levels:

    1 normal

    2 suspicious

    3 critical.

    Trust level changes based on SA and MA investigation results.
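A sketch of a trust level manager over these three levels; the transition rule is our assumption, since the paper only states that investigation results drive the changes:

NORMAL, SUSPICIOUS, CRITICAL = 'normal', 'suspicious', 'critical'

class TrustLevelManager:
    def __init__(self):
        self.levels = {}  # agency id -> current trust level

    def report(self, agency, alert_raised, intrusion_confirmed):
        # Escalate on a confirmed intrusion, mark suspicious on a raw alert,
        # and relax back to normal otherwise (assumed policy).
        if intrusion_confirmed:
            self.levels[agency] = CRITICAL
        elif alert_raised:
            self.levels[agency] = SUSPICIOUS
        else:
            self.levels[agency] = NORMAL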

    8.3 Neighbourhood watching scenario for detecting intrusion in IDSCC

In this approach, all neighbours have the task of watching out for each other. As a result, whenever suspicious behaviour is spotted by a neighbour, all interested parties, such as the police, will be informed. In order to apply that strategy to our IDS application, the first

    step is to build a virtual neighbourhood where all IDSCCs are peers in the same

    neighbourhood. When any new IDSCC enters into the system, it has to be assigned a

    virtual neighbourhood. The configuration of this neighbourhood system is not fixed and

    can be dynamic. The initial configuration encompasses a graph of nodes and their

location in the network defines the neighbourhood. In order to achieve efficient performance, the number of neighbours in each neighbourhood should not exceed a

predefined upper bound. In this neighbourhood watch approach, all IDSCCs are considered to be equal. Every IDSCC will perform intrusion detection for the other IDSCCs in

    its neighbourhood. In a neighbourhood, each control centre stores data about its

neighbours, mainly the description of the neighbours' normal behaviour and information

    such as checksums of critical operating system files.
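A sketch of bounded neighbourhood assignment; the maximum size and the greedy placement are assumptions, since the paper leaves the configuration dynamic:

MAX_NEIGHBOURS = 5  # the predefined upper bound; the value is an assumption

neighbourhoods = []  # each neighbourhood is a set of IDSCC identifiers

def assign(idscc_id):
    # Greedy placement: join the first neighbourhood with room, otherwise
    # start a new one. Real placement would follow network location.
    for group in neighbourhoods:
        if len(group) < MAX_NEIGHBOURS:
            group.add(idscc_id)
            return group
    group = {idscc_id}
    neighbourhoods.append(group)
    return group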

    References

About.com, 'Introduction to Intrusion Detection Systems (IDS)', available at http://netsecurity.about.com/cs/hackertools/a/aa030504.htm.

Bishop, M. (2002) Computer Security: Art and Science, ISBN 0-201-44099-7, Addison Wesley, Canada.

    Data Security, available at http://www.exforsys.com/tutorials/cloudcomputing/cloud-computing-security.html.

Denning, D.E. (1987) 'An intrusion-detection model', IEEE Transactions on Software Engineering, February, Vol. SE-13, No. 2, pp.222–232.

Kandukuri, B.R., Paturi, R. and Rakshit, V.A. (2009) 'Cloud security issues', SCC, 2009 IEEE International Conference on Services Computing, pp.517–520.

Mazzariello, C., Bifulco, R. and Canonico, R. (2010) 'Integrating a network IDS into an open source cloud computing environment', Sixth International Conference on Information Assurance and Security.

Nurmi, D., Wolski, R., Grzegorczyk, C., Obertelli, G., Soman, S., Youseff, L. and Zagorodnov, D. (2009) 'The eucalyptus open-source cloud-computing system', CCGrid, 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, pp.124–131.



    Pfleeger, C.P. (2002) Security in Computing, Prentice Hall, Boston.

Wang, C., Wang, Q., Ren, K. and Lou, W. (2009) 'Ensuring data storage security in cloud computing', 17th International Workshop on Quality of Service, IEEE, pp.1–9.
