AGENT APPROACHES TO SECURITY, TRUST AND PRIVACY IN PERVASIVE COMPUTING

  • AGENT APPROACHES TO SECURITY, TRUST AND PRIVACY IN PERVASIVE COMPUTING

  • THE VISION: Pervasive computing is a natural extension of the present human computing lifestyle. Using computing technologies will be as natural as using other non-computing technologies (e.g., pen, paper, and cups). Computing services will be available anytime and anywhere.

  • PERVASIVE COMPUTING: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." -- Mark Weiser

    Think: writing, central heating, electric lighting. Not: taking your laptop to the beach, or immersing yourself in a virtual reality.

  • TODAY: LIFE IS GOOD.

  • TOMORROW: WE GOT PROBLEMS!

  • YESTERDAY: GADGET RULES

  • TODAY: COMMUNICATION RULES

  • TOMORROW: SERVICES WILL RULE. Thank God! Pervasive computing is here.

  • THE BRAVE NEW WORLD: Devices are increasingly more powerful, smaller, and cheaper. People interact daily with hundreds of computing devices (many of them mobile): cars, desktops/laptops, cell phones, PDAs, MP3 players, transportation passes.

    Computing is becoming pervasive

  • SECURING DATA & SERVICES: Security is critical because in many pervasive applications we interact with agents that are not in our home or office environment. Much of the work on security for distributed systems is not directly applicable to pervasive environments. We need to build analogs of the trust and reputation relationships found in human societies, and we need to worry about privacy!

  • SECURITY CHALLENGES, Example 1: ABC Industries Inc. has offices in Baltimore, New York, and LA. What if someone from the New York office visits the LA office? How are their rights to access resources in the LA office decided? A company-wide directory? It needs dynamic updating by a sysadmin, violates the minimality principle of security, and is not scalable.

  • SECURITY CHALLENGES, Example 2: ABC Industries Inc. (Baltimore) and XYZ Inc. (Seattle). How does the ABC system decide what rights to give a consultant from XYZ Inc.? How does it decide what rights to give a manager from XYZ Inc.?

  • SECURITY CHALLENGES, Example 2 (cont.): The company directory cannot be used, and cross-organizational roles may be meaningless. Issues specific to pervasive environments: central access control is not scalable, and for foreign users or visitors it is not possible to store their individual access rights.

    The role of policies.

  • AN EARLY POLICY FOR AGENTS: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. -- Handbook of Robotics, 56th Edition, 2058 A.D.

  • ON POLICIES, RULES AND LAWS: The interesting thing about Asimov's laws was that robots did not always strictly follow them. This is a point of departure from more traditional hard-coded rules like DB access control and OS file permissions. For autonomous agents, we need policies that describe norms of behavior that they should follow to be good citizens. So it's natural to worry about issues like: when an agent is governed by multiple policies, how does it resolve conflicts among them? How can we define penalties when agents don't fulfill their obligations? How can we relate notions of trust and reputation to policies?

  • THE ROLE OF ONTOLOGIES: We will require shared ontologies to support this framework: a common ontology to represent basic concepts (agents, actions, permissions, obligations, prohibitions, delegations, credentials, etc.); appropriate shared ontologies to describe classes, properties and roles of people and agents, e.g., "any device owned by Tim Finin" or "any request from a faculty member at ETZ"; and ontologies to encode policy rules (see the sketch below).
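
    For concreteness, here is a minimal, hypothetical sketch of how such policy concepts (permissions, prohibitions, roles, and conflict resolution among multiple policies) could be represented. The class and field names are illustrative only and are not taken from any particular policy language.

```python
# Hypothetical policy vocabulary: modalities, roles, actions, and a simple
# priority-based conflict-resolution rule.  Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Rule:
    modality: str          # "permission", "prohibition", or "obligation"
    role: str              # class of agents the rule applies to, e.g. "visitor"
    action: str            # e.g. "print"
    priority: int = 0      # used to resolve conflicts between policies

@dataclass
class Policy:
    name: str
    rules: list = field(default_factory=list)

def is_permitted(policies, role, action):
    """Collect every matching rule, then let the highest-priority rule win."""
    matching = [r for p in policies for r in p.rules
                if r.role == role and r.action == action]
    if not matching:
        return False                      # default: deny
    winner = max(matching, key=lambda r: r.priority)
    return winner.modality == "permission"

# A department policy permits visitors to print; a stricter site policy forbids it.
site = Policy("site", [Rule("prohibition", "visitor", "print", priority=2)])
dept = Policy("dept", [Rule("permission", "visitor", "print", priority=1)])
print(is_permitted([site, dept], "visitor", "print"))   # False: the site policy wins
```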

  • AD-HOC NETWORKING TECHNOLOGIES: Ad-hoc networking technologies (e.g., Bluetooth). Main characteristics: short range, spontaneous connectivity, and free, at least for now.

    Mobile devices are aware of their neighborhood: they can discover others in their vicinity, interact with peers in their neighborhood, and interoperate and cooperate as needed and as desired. They are both information consumers and providers.

    Ad-hoc mobile technology challenges the traditional client/server information access model

  • PERVASIVE ENVIRONMENT PARADIGM: The pervasive computing environment has the following characteristics:

    Ad-hoc mobile connectivity; spontaneous interaction.

    Peers: service/information consumers and providers; autonomous, adaptive, and proactive.

    A data-intensive, deeply networked environment: everyone can exchange information; a data-centric model; some sources generate streams of data, e.g., sensors.

  • MOTIVATION: CONFERENCE SCENARIO: Smart-room infrastructure and personal devices can assist an ongoing meeting: data exchange, schedulers, etc.

  • IMPERFECT WORLD: In a perfect world, everything would be available and done automatically.

    In the real world: limited resources (battery, memory, computation, connection, bandwidth); we must live with less-than-perfect results; dumb devices that must explicitly be told what, when, and how; foreign entities and unknown peers.

    So, we really want smart, autonomous, dynamic, adaptive, and proactive methods to handle data and services.

  • SECURING AD-HOC NETWORKS: MANETs underlie much of pervasive computing. They bring to the fore interesting problems related to open, dynamic, distributed systems. Each node is an independent, autonomous router that has to interact with other nodes, some never seen before. How do you detect the bad guys?

  • NETWORK LEVEL: GOOD NEIGHBOR: In an ad hoc network, node A sends a packet destined for E through B. B and C make a snoop entry (A, E, Ck, B, D, E); B and C then check against the snoop entry to detect a misroute. (Diagram nodes: A, B, C, D, E; a sketch follows below.)
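
    A minimal, watchdog-style sketch of this snooping idea, under assumed names (SnoopEntry, Watchdog, and the callback methods); it is not GloMoSim or the authors' implementation, only an illustration of recording snoop entries and flagging neighbors that fail to forward.

```python
# Watchdog-style sketch: nodes that overhear a packet record a snoop entry and
# later check that the neighbour they handed it to forwarded it onward.
# All names here are illustrative, not taken from the original system.
from dataclasses import dataclass

@dataclass(frozen=True)
class SnoopEntry:
    src: str        # original source, e.g. "A"
    dst: str        # final destination, e.g. "E"
    next_hop: str   # node expected to forward the packet, e.g. "B"

class Watchdog:
    def __init__(self):
        self.pending = set()      # snoop entries waiting to be confirmed
        self.misroutes = {}       # next_hop -> count of suspected misroutes/drops

    def saw_transmission(self, src, dst, next_hop):
        """Called when this node overhears a packet being handed to next_hop."""
        self.pending.add(SnoopEntry(src, dst, next_hop))

    def saw_forwarding(self, forwarder, dst):
        """Called when this node overhears the packet being forwarded onward."""
        confirmed = {e for e in self.pending
                     if e.next_hop == forwarder and e.dst == dst}
        self.pending -= confirmed

    def flag_stale(self):
        """Entries never confirmed count as suspected misroutes or drops."""
        for e in self.pending:
            self.misroutes[e.next_hop] = self.misroutes.get(e.next_hop, 0) + 1
        self.pending.clear()

# C overhears A hand a packet for E to B, but never hears B forward it.
w = Watchdog()
w.saw_transmission("A", "E", next_hop="B")
w.flag_stale()
print(w.misroutes)    # {'B': 1}: B is now suspected
```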

  • GOOD NEIGHBOR: no broadcast; hidden terminal; exposed terminal; DSR vs. AODV; GloMoSim.

  • INTRUSION DETECTION: Behaviors: selfish, malicious. Detection vs. reactions: shunning bad nodes, cluster voting, incentives (game-theoretic), colluding nodes, forgiveness. (A voting sketch follows below.)
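
    The following toy sketch illustrates cluster voting with shunning and forgiveness under assumed thresholds and decay rates; it is not the scheme evaluated in the simulations below, just one plausible reading of those ideas.

```python
# Illustrative cluster voting with shunning and forgiveness; the quorum,
# accusation weight, and decay rate are assumptions for the example.
def cluster_decision(votes, quorum=0.5):
    """votes: dict member -> True if the suspect is judged bad.  Shun on majority."""
    if not votes:
        return False
    return sum(votes.values()) / len(votes) > quorum

class Reputation:
    def __init__(self, decay=0.9):
        self.score = {}            # node -> accumulated suspicion
        self.decay = decay

    def accuse(self, node, weight=1.0):
        self.score[node] = self.score.get(node, 0.0) + weight

    def forgive_step(self):
        """Periodic decay: old accusations fade, so a node can be rehabilitated."""
        self.score = {n: s * self.decay for n, s in self.score.items()}

votes = {"B": True, "C": True, "D": False}
print(cluster_decision(votes))         # True: a majority votes to shun the suspect

rep = Reputation()
rep.accuse("B")
rep.forgive_step()
print(round(rep.score["B"], 2))        # 0.9: the accusation decays over time
```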

  • SIMULATION IN GLOMOSIM: Passive intrusion detection: individual determination, no result forwarding. Active intrusion detection: cluster scheme, voting, result flooding.

  • GLOMOSIM SETUP: 16 communicating nodes; 4 nodes as sources for 2 CBR streams; 2-node-pair CBR streams; mobility 0-20 meters/sec; pause time 0-15 s; no bad nodes.

  • SIMULATION RESULTS

  • PRELIMINARY RESULTS: Passive: false alarm rate > 50%, throughput decrease < 3% additional. Active: false alarm rate < 30%, throughput decrease ~25% additional.

  • CHALLENGES: IS THAT ALL? (1) Spatio-temporal variation of data and data sources.

    All devices in the neighborhood are potential information providers. Nothing is fixed: no global catalog, no global routing table, no centralized control.

    However, each entity can interact with its neighbors: by advertising/registering its services, and by collecting/registering the services of others (see the sketch below).
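
    A toy sketch of this neighborhood-only advertisement model, with an assumed time-to-live on advertisements; the class and method names are hypothetical.

```python
# Each device keeps only a local cache of services it has heard advertised:
# no global catalog, no central registry.  TTL and names are assumptions.
import time

class LocalServiceCache:
    def __init__(self, ttl=30.0):
        self.ttl = ttl                 # advertisements expire: nothing is fixed
        self.services = {}             # (peer, service) -> time last heard

    def heard_advertisement(self, peer, service):
        self.services[(peer, service)] = time.time()

    def lookup(self, service):
        """Return peers currently believed to offer `service`."""
        now = time.time()
        return [p for (p, s), t in self.services.items()
                if s == service and now - t < self.ttl]

cache = LocalServiceCache()
cache.heard_advertisement("printer-42", "printing")
print(cache.lookup("printing"))        # ['printer-42']
```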

  • CHALLENGES: IS THAT ALL? (2) A query may be explicit or implicit, but is often known up front.

    Users sometimes ask explicitly, e.g., "tell me the nearest restaurant that has vegetarian menu items".

    The system can guess likely queries based on declarative information or past behavior, e.g., the user always wants to know the price of IBM stock.

  • CHALLENGES: IS THAT ALL? (3) Since information sources are not known a priori, schema translations cannot be done beforehand.

    Devices are resource-limited, so we hope for common, domain-specific ontologies.

    Different modes: a device could interact only with providers whose schemas it understands; it could interact with anyone and cache the information in hope of a future translation; or it could always try to translate itself. There is prior work in schema translation and ongoing work in ontology mapping.

  • CHALLENGES: IS THAT ALL? (4) Cooperation among information sources cannot be guaranteed.

    A device may have reliable information but make it inaccessible.

    A device may provide information that is unreliable.

    Once a device shares information, it needs the ability to control future propagation of, and changes to, that information.

  • CHALLENGES: IS THAT ALL? (5) We need to avoid humans in the loop: devices must dynamically "predict" data importance and utility based on the current context.

    The key insight: declarative (or inferred) descriptions help, covering information needs, information capabilities, constraints, resources, data, and answer fidelity.

    Expressive Profiles can capture such descriptions

  • 4. OUR DATA MANAGEMENT ARCHITECTURE: MoGATU

    The design and implementation consist of data, metadata, profiles, entities, and communication interfaces; the entities are information providers, information consumers, and information managers (see the interface sketch below).
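
    A minimal sketch of the provider/consumer/manager split, with assumed interface names; MoGATU's actual classes and method signatures are not given in the slides.

```python
# Assumed interfaces for the three kinds of entities named above.
from abc import ABC, abstractmethod

class InformationProvider(ABC):
    @abstractmethod
    def answer(self, query):
        """Return data for a query, or None if this provider cannot help."""

class InformationConsumer(ABC):
    @abstractmethod
    def receive(self, query, answer):
        """Accept the answer obtained for a previously posed query."""

class InformationManager:
    """Mediates between local consumers, local providers, and (not shown) remote peers."""
    def __init__(self, providers):
        self.providers = providers

    def route(self, query, consumer):
        # Try local providers first; a real manager would also query the vicinity.
        for p in self.providers:
            result = p.answer(query)
            if result is not None:
                consumer.receive(query, result)
                return result
        return None
```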

  • MOGATU METADATA: Metadata representation is used to provide information about information providers and consumers, data objects, and queries and answers; to describe relationships; to describe restrictions; and to reason over the information.

    Semantic language: DAML+OIL / DAML-S (an illustrative triple encoding is sketched below).

    http://mogatu.umbc.edu/ont/
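
    As an illustration only, metadata of this kind can be written down as triples. The sketch below uses the rdflib library rather than the original DAML+OIL tooling, and the class and property names under the MoGATU namespace are assumed for the example, not taken from the actual ontologies.

```python
# Illustrative metadata-as-triples; MOG.InformationProvider and MOG.provides
# are assumed terms under the ontology base URL given in the slides.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

MOG = Namespace("http://mogatu.umbc.edu/ont/")   # ontology base URL from the slides
g = Graph()
device = URIRef("urn:example:device:pda-17")     # hypothetical device identifier
g.add((device, RDF.type, MOG.InformationProvider))
g.add((device, MOG.provides, Literal("restaurant-listings")))

for subj, pred, obj in g:                        # print the stored metadata triples
    print(subj, pred, obj)
```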

  • MOGATU PROFILE: A profile covers user preferences, schedule, and requirements; device constraints, providers, and consumers; and data ownership, restrictions, requirements, and process model.

    Profiles are based on BDI models: beliefs are facts about the user or the environment/context; desires and intentions are higher-level expressions of beliefs and goals.

    Devices reason over the BDI profiles to generate domains of interest and utility functions, and change those domains and utility functions based on context (a utility-function sketch follows below).
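
    A rough sketch of turning a BDI-style profile into a context-dependent utility function, with an assumed profile format and weighting scheme; MoGATU's actual profiles are DAML-based and richer than this.

```python
# Assumed profile layout: desires map topics of interest to weights, and the
# current context (beliefs) shifts the weighting, e.g. meetings favour schedules.
def utility(data_item, profile, context):
    """Score a data item by how well it matches the user's current interests."""
    score = 0.0
    for interest, weight in profile["desires"].items():
        if interest in data_item["topics"]:
            score += weight
    if context.get("in_meeting") and "schedule" in data_item["topics"]:
        score *= 2.0        # context boost: schedules matter more during a meeting
    return score

profile = {"desires": {"stock-quotes": 0.8, "schedule": 0.5}}
item = {"topics": ["schedule"]}
print(utility(item, profile, {"in_meeting": True}))    # 1.0
```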

  • MOGATU INFORMATION MANAGER (8): Problems: not all sources and data are correct/accurate/reliable; there is no common sense (a person can evaluate a web site based on how it looks, a computer cannot); there is no centralized party that could verify a peer's reliability or the reliability of its data; a device may be reliable, malicious, ignorant, or uncooperative.

    Distributed belief: we need to depend on other peers; evaluate the integrity of peers and data based on distributed peer belief; detect which peers and what data are accurate; detect malicious peers. Incentive model: if A is malicious, it will be excluded from the network.

  • MOGATU INFORMATION MANAGER (9): Distributed belief model: a device sends a query to multiple peers; it asks its vicinity for the reputation of untrusted peers that responded to the query; it trusts a device only if it was trusted before or if enough trusted peers trust it; it uses answers from trusted (or recommended-as-trusted) peers to determine the final answer; it then updates the reputation/trust level of all devices that responded: the trust level increases for devices that responded in agreement with the final answer, and decreases for devices that responded in a conflicting way.

    Each device builds a ring of trust (a simplified sketch of this loop follows below).
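
    A simplified sketch of this distributed-belief loop; the thresholds, update step, and majority rule below are assumptions for illustration rather than the exact MoGATU functions (which are enumerated a few slides later).

```python
# Query peers, accept answers only from trusted or vouched-for peers, pick the
# majority answer, then reward agreement and punish conflict.
class TrustManager:
    def __init__(self, trust_threshold=0.5, step=0.1):
        self.trust = {}                        # peer -> trust level in [0, 1]
        self.threshold = trust_threshold
        self.step = step

    def is_trusted(self, peer):
        return self.trust.get(peer, 0.0) >= self.threshold

    def recommend(self, peer, recommenders):
        """Trust an unknown peer if enough already-trusted peers vouch for it."""
        votes = [v for r, v in recommenders.items() if self.is_trusted(r)]
        return bool(votes) and sum(votes) / len(votes) >= self.threshold

    def resolve(self, answers, recommendations):
        """answers: peer -> value; recommendations: peer -> {recommender: vote}."""
        usable = {p: a for p, a in answers.items()
                  if self.is_trusted(p) or self.recommend(p, recommendations.get(p, {}))}
        if not usable:
            return None
        # Majority answer among trusted/recommended peers becomes the belief.
        final = max(set(usable.values()), key=list(usable.values()).count)
        for peer, ans in answers.items():      # reward agreement, punish conflict
            delta = self.step if ans == final else -self.step
            self.trust[peer] = min(1.0, max(0.0, self.trust.get(peer, 0.0) + delta))
        return final

tm = TrustManager()
tm.trust = {"B": 0.6, "D": 0.9}                # ring of trust built so far
answers = {"B": "home", "C": "work", "D": "home"}
print(tm.resolve(answers, {"C": {"D": 0.0, "B": 0.4}}))   # 'home'
```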

  • A: D, where is Bob? A: C, where is Bob? A: B, where is Bob?

  • C: A, Bob is at work. D: A, Bob is home. B: A, Bob is home.

  • A: B says Bob is at home, C says Bob is at work, D says Bob is at home. A: I have enough trust in D. What about B and C?

  • A: Do you trust C? C: I always do. D: I don't. B: I am not sure. E: I don't. F: I do. A: I don't care what C says. I don't know enough about B, but I trust D, E, and F. Together they don't trust C, so neither will I.

  • A: Do you trust B? C: I never do. D: I am not sure. B: I do. E: I do. F: I am not sure. A: I don't care what B says. I don't trust C, but I trust D, E, and F. Together they trust B a little, so I will too.

  • A: I trust B and D, and both say Bob is home. A: Increase trust in D. A: Decrease trust in C. A: Increase trust in B. A: Bob is home!

  • MOGATU INFORMATION MANAGER (10): Distributed belief model

    Initial trust function: positive, negative, undecided

    Trust learning function: Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-, Exp

    Trust weighting function: multiplication, cosine

    Accuracy merging function: max, min, average (hedged sketches of these function families follow below)
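
    Hedged sketches of these four function families; the slides do not give the exact formulas, so the bodies below are plausible stand-ins under assumed constants.

```python
# Plausible stand-ins for the initial, learning, weighting, and merging
# functions named above; the constants and formulas are assumptions.
import math

INITIAL = {"positive": 0.9, "undecided": 0.5, "negative": 0.1}

def learn_blindly_positive(trust, agreed):           # "Blindly +": trust only rises
    return min(1.0, trust + 0.1) if agreed else trust

def learn_fast_up_slow_down(trust, agreed):          # one reading of "F+/S-"
    return min(1.0, trust + 0.2) if agreed else max(0.0, trust - 0.05)

def weight_multiplication(trust, accuracy):          # weight an answer's accuracy
    return trust * accuracy

def weight_cosine(trust, accuracy):                  # alternative cosine weighting
    return math.cos(math.pi / 2 * (1.0 - trust)) * accuracy

def merge_average(weighted_accuracies):              # MAX, MIN, or AVG merging
    return sum(weighted_accuracies) / len(weighted_accuracies)

t = INITIAL["undecided"]
t = learn_fast_up_slow_down(t, agreed=True)
print(weight_multiplication(t, 0.8), merge_average([0.5, 0.7, 0.9]))
```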

  • EXPERIMENTS: The primary goal of distributed belief is to improve query-processing accuracy by using trusted sources and trusted data.

    Problems: not all sources and data are correct/accurate/reliable; there is no centralized party that could verify a peer's reliability or the reliability of its data; we need to depend on other peers; there is no common sense (a person can evaluate a web site based on how it looks, a computer cannot).

    Solution: evaluate the integrity of peers and data based on distributed peer belief; detect which peers and what data are accurate; detect malicious peers. Incentive model: if A is malicious, it will be excluded from the network.

  • EXPERIMENTS: Devices: reliable (share reliable data only), malicious (try to share unreliable data as reliable), ignorant (have unreliable data but believe it is reliable), uncooperative (have reliable data but will not share it).

    Model: a device sends a query to multiple peers; asks its vicinity for the reputation of untrusted peers that responded; trusts a device only if it was trusted before or if enough trusted peers trust it; uses answers from trusted (or recommended-as-trusted) peers to determine the final answer; and updates the reputation/trust level of all devices that responded, increasing trust for devices that agreed with the final answer and decreasing it for devices that responded in a conflicting way.

  • EXPERIMENTAL ENVIRONMENT: How: MoGATU and GloMoSim

    Spatio-temporal environment:

    150 m x 150 m field; 50 nodes; random way-point mobility; AODV; cache holding 50% of global knowledge; trust-based LRU; 50 minutes per simulation run; 800 question tuples; each device has 100 random unique questions; each device has 100 random unique answers not matching its questions; each device initially trusts 3-5 other devices.

  • EXPERIMENTAL ENVIRONMENT (2): Level of dishonesty: 0-100%. A dishonest device never provides an honest answer; an honest device makes a best effort. Initial trust function: positive, negative, undecided. Trust learning function: Blindly +, Blindly -, F+/S-, S+/F-, F+/F-, S+/S-, Exp. Trust weighting function: multiplication, cosine. Accuracy merging function: max, min, avg. Trust and distrust convergence: how soon are dishonest devices detected.

  • RESULTS: Answer Accuracy vs. Trust Learning Functions

    Answer Accuracy vs. Accuracy Merging Functions

    Distrust Convergence vs. Dishonesty Level

  • ANSWER ACCURACY VS. TRUST LEARNING FUNCTIONS: The effects of trust learning functions with an initially optimistic trust, for environments with varying levels of dishonesty. Results are shown for the ++, --, s, f, f+, f-, and exp learning functions.

  • ANSWER ACCURACY VS. TRUST LEARNING FUNCTIONS (2): The effects of trust learning functions with an initially pessimistic trust, for environments with varying levels of dishonesty. Results are shown for the ++, --, s, f, f+, f-, and exp learning functions.

  • ANSWER ACCURACY VS. ACCURACY MERGING FUNCTIONS: The effects of accuracy merging functions for environments with varying levels of dishonesty. Results are shown for (a) MIN using the only-one (OO) final-answer approach, (b) MIN using the highest-one (HO) final-answer approach, (c) MAX + OO, (d) MAX + HO, (e) AVG + OO, and (f) AVG + HO.

  • DISTRUST CONVERGENCE VS. DISHONESTY LEVEL: Average distrust convergence period, in seconds, for environments with varying levels of dishonesty. Results are shown for the ++, --, s, and f trust learning functions with an initially optimistic trust strategy, and for the same functions using an undecided initial trust strategy in results (e-h), respectively.
