

Page 1: Academic Course: 03 Autonomic Multi-Agent Systems

Autonomic Multi-Agent Systems
Awareness and Self-Awareness in Autonomic Systems

Academic Course, September 2012

Jeremy Pitt

Imperial College London

Page 2: Academic Course: 03 Autonomic Multi-Agent Systems

Lecture Objectives and Outcomes

I Objectives

I objectives ...

I Outcomes – lecture attendees will:

I Understand the fundamental properties of autonomic multi-agent systems, different approaches, and how to categorise/evaluate them

I Understand how agents, embedded in an environment, can use awareness of that environment for self-organisation (a form of self-aware autonomics)

Page 3: Academic Course: 03 Autonomic Multi-Agent Systems

Lecture Structure

I Multi-Agent Systems

I Autonomic Systems

I Representative Approaches

I In detail: Dynamic norm-governed systems

I Voting (awareness in autonomic systems)

Page 4: Academic Course: 03 Autonomic Multi-Agent Systems

Multi-Agent Systems

Two key considerations

I Level of abstraction

I Type of communication

Page 5: Academic Course: 03 Autonomic Multi-Agent Systems

Abstraction (1)

Multi-Agent abstraction

I Distributed systems
  I physical distribution of data and methods
  I tightly coupled
  I location transparency

I Multi-agent systems
  I logical distribution of responsibility and control
  I loosely coupled
  I location significant

I Organization of collective intelligence for:
  I functionality and knowledge encapsulation (ontology)
  I cooperation, delegation and negotiation
  I planning and decision making w.r.t. own goals

Only self-modify the logical layer/virtual machine

Page 6: Academic Course: 03 Autonomic Multi-Agent Systems

Abstraction (2)

Intelligent Agent abstraction

I Highest level of abstraction
  I ownership: delegated responsibility for task
  I internal states: a large number exhibit some form of ‘intelligent’ behaviour
  I asynchrony: communication and coordination with other agents/users

I Lowest level of abstraction
  I embedded software process
  I encapsulates some notion of state
  I communicates by message passing

The nature of the message passing is absolutely critical

Page 7: Academic Course: 03 Autonomic Multi-Agent Systems

How To Do Things . . .

(. . . if you absolutely have to . . .)

I . . . with physical objects
  I Change the state of the physical world
  I Given the ‘ideal’ physics (physical capability)

I . . . with software objects
  I Change the state of the object
  I Given ‘appropriate’ programming language semantics and the semantics of the call

I . . . with words
  I Change the state of the conventional world
  I Given ‘validity’ of the action (institutionalised power)

Page 8: Academic Course: 03 Autonomic Multi-Agent Systems

Institutionalised Power

(Not to be confused with real or reactive power)

I Searle: ‘counts as’ in Speech Act Theory

I Jones and Sergot (1996): formal characterisation of institutionalised power

I A standard feature of any norm-governed system whereby designated agents, when acting in specified roles, are empowered by the system to create or modify facts of special significance conventionally agreed within the context of an institution

I This matters, especially in socio-technical systems, electronic institutions, and self-organising systems

Page 9: Academic Course: 03 Autonomic Multi-Agent Systems

Agent-Oriented Software Engineering

Many methodologies, languages and platforms

I GAIA, etc.

I PRS, Jack, Jam, Jason, etc.

I Jade, etc.

FIPA

I FIPA ACL: could not be more wrong

Page 10: Academic Course: 03 Autonomic Multi-Agent Systems

Autonomic Systems: An Overview

Systems are expected to show some kind of ‘agency’ / autonomic behaviour

I Sensor networks – monitor the environment, respond to events, situations, . . .

I Data centres – monitor and respond to changing/expected workloads

I Power systems – maintain frequency

Systems need to exhibit some sort of ‘autonomic’ properties

I Improve some facet of behaviour

I . . . which must be the expected facet

I . . . and which must change in an expected way

Page 11: Academic Course: 03 Autonomic Multi-Agent Systems

Adaptation

Complexity of built systems

I Change dynamics (due to Bertrand Meyer)
  I Linear – cost proportional to the size of the change being made
  I Non-linear – cost proportional to the size of the system being changed

I Complexity and inter-dependence make all changes non-linear

I Systems become too expensive to change in any significant respect

Incompatible with human management

I Change too fast, too frequent, too far away, too tangled, . . .

I Responses need to be guaranteed . . .

I . . . But may need to represent human artefacts – contracts, laws, conventional rules, etc.

Page 12: Academic Course: 03 Autonomic Multi-Agent Systems

Autonomic Systems

Originally an IBM notion focusing on total cost of ownership (TCO)

I Inspiration from the biological autonomic nervous system:things keep breathing, (almost) no matter what

I Since broadened to include systems with sensorised, feedback-driven management

I Use technology to manage technology

Significant Approaches

I IBM Autonomic Toolkit

I Morphogen-gradient methods use biological inspiration to manage dynamic (especially mobile) services

I Dynamical systems models treat adaptation as movement within an adaptive space

I Multi-agent systems using qualitative knowledge representation and reasoning for multi-criteria optimisation

Page 13: Academic Course: 03 Autonomic Multi-Agent Systems

Autonomic Control Loop

Stages in Autonomic Control

I Collect data from sensors or instrumented managed elements

I Analyse using some model of the system being managed

I Decide on the actions to be taken to accomplish the control goal

I Act on the decision, possibly with learning, feedback, etc.
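A minimal sketch, in the Prolog used for the executable specification later in the lecture, of how these four stages might be wired into a loop; collect/1, analyse/3, decide/2, act/1 and the stub clauses are hypothetical placeholders, not anything prescribed by these slides:

% Hypothetical stubs standing in for real sensing, modelling, planning and actuation.
collect([load = 0.7]).                            % collect: read instrumented managed elements
analyse(Model, Obs, [Obs|Model]).                 % analyse: fold observations into the system model
decide([[load = L]|_], reduce_load) :- L > 0.5.   % decide: choose an action for the control goal
decide(_, no_action).
act(Action) :- write(acting(Action)), nl.         % act: enact the decision (learning, feedback, ...)

% N passes of the collect-analyse-decide-act loop.
autonomic_loop(0, _) :- !.
autonomic_loop(N, Model0) :-
    collect(Obs),
    analyse(Model0, Obs, Model1),
    decide(Model1, Action),
    act(Action),
    N1 is N - 1,
    autonomic_loop(N1, Model1).

% ?- autonomic_loop(3, []).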

Page 14: Academic Course: 03 Autonomic Multi-Agent Systems

Example: Data Centre Manager

Power-saving

I Assign agents to monitor and manage components

I Global utility function for different configurations

I Combine aspects of the system into a single utility which is then maximised

I Single (Global) Variable / Diverse (Local) Objectives
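A toy Prolog sketch of the "single global utility, maximised over configurations" idea; config/1, utility/2 and the numbers are invented for illustration, not taken from any real data-centre manager:

:- use_module(library(lists)).   % for max_member/2

% Hypothetical configurations and a global utility already collapsed to one number
% (e.g. a weighted combination of throughput, latency and power draw).
config(low_power).      utility(low_power,   0.61).
config(balanced).       utility(balanced,    0.83).
config(performance).    utility(performance, 0.54).

% best_config(-Best): the configuration with the highest global utility.
best_config(Best) :-
    findall(U-C, (config(C), utility(C, U)), Pairs),
    max_member(_-Best, Pairs).

% ?- best_config(X).   gives X = balanced under the toy figures above.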

Page 15: Academic Course: 03 Autonomic Multi-Agent Systems

(Some More) Self-* properties

“Things the machine can (should) do (for/to) itself”

I Self-configuring – select configuration options, policies and interaction with other components automatically (or at least make suggestions)

I Self-managing – perform ‘housekeeping’ operations automatically

I Self-optimising – reflect on own behaviour and adjust configuration accordingly

I Self-healing – respond to component failures or attacks to minimise damage

I Self-organisation – manage the orchestration between components adaptively

Page 16: Academic Course: 03 Autonomic Multi-Agent Systems

Representative Approaches

I Max-flow Networks

I Unity

I OMACS (Organisational Model for Adaptive Computational Systems)

I Adaptive Decision-Making Frameworks

I Dynamic Argument Systems

I Organic Computing

I Law-Governed Interaction

I Dynamic Norm-Governed Systems

Page 17: Academic Course: 03 Autonomic Multi-Agent Systems

Analytic Framework

I What is adapted – identifying the specific changeable components.

I Why is adaptation performed – the motivating cause(s), in response to exogenous or endogenous stimuli, which prompt adaptation.

I How is adaptation performed – detailing the specific mechanisms or processes used to bring about the adaptation.

I Evaluation of adaptation – how the adapted system is evaluated as an improvement (or not) on the previous configuration.

Page 18: Academic Course: 03 Autonomic Multi-Agent Systems

Dynamic Norm-governed Systems

Dynamic Norm-Governed Multi-Agent Systems

I Social Constraints
  I Physical power, institutionalised power, and permission
  I Obligations, and other complex normative relations
  I Sanctions and penalties
  I Roles and actions (communication language)

I Communication Protocols
  I Protocol stack: object-/meta-/meta-meta-/etc. level protocols
  I Transition protocols to instigate and implement change

I Specification Space
  I Identify changeable components of a specification (Degrees of Freedom: DoF)
  I Define a ‘space’ of specification instances, and a notion of distance
  I Define rules about moving between instances

Page 19: Academic Course: 03 Autonomic Multi-Agent Systems

Social Constraints

I Three types of ‘can’
  I Physical capability
  I Institutional power
    I The performance by a designated agent, occupying a specific role, of a certain action, which has conventional significance, in the context of an institution
    I A special kind of ‘certain action’ is the speech act
  I Permission (& obligation)
    I Can have (physical or institutional) power with/without permission
    I Sometimes power implies permission

I Sanctions and enforcement policies

I Right, duty, entitlement, and other more complex relations

I Social constraints can be adapted for intentional, run-time modification of the institution

Page 20: Academic Course: 03 Autonomic Multi-Agent Systems

Communication Protocols

[Figure: protocol stack – a level 0 object protocol (e.g. resource-sharing), with voting protocols at levels 1 to k−1 for rule modification; arrows indicate object-protocol initialisation and rule modification]

I Any protocol for norm-governed systems can be in level 0.

I Any protocol for decision-making over rule modification can be in level n, n > 0.

I Attention is also paid to the transition protocols: the procedures with which a meta-protocol is initiated.

Page 21: Academic Course: 03 Autonomic Multi-Agent Systems

Specification Space

I We define the Degrees of Freedom (DoF) of a protocol.

I A protocol specification with n DoF creates an n-dimensional specification space, where each dimension corresponds to a DoF.

I A specification point represents a complete protocol specification — a specification instance — and is denoted by an n-tuple, where each element of the tuple expresses the value of a DoF (a toy example follows the figure below).

[Figure: specification spaces with dimensions DoF1 and DoF2]
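As a toy illustration of specification points, assume a 2-DoF voting-protocol space where DoF1 is the winning criterion and DoF2 the minimum ballot duration; the instances and the simple "count the differing DoFs" distance below are invented for illustration:

% Hypothetical specification instances: spec(WinningCriterion, MinBallotDuration).
spec_instance(spec(majority,   10)).
spec_instance(spec(two_thirds, 10)).
spec_instance(spec(majority,   20)).

% One possible notion of distance: the number of DoFs on which two instances differ.
distance(spec(W1, D1), spec(W2, D2), Dist) :-
    ( W1 == W2 -> X = 0 ; X = 1 ),
    ( D1 == D2 -> Y = 0 ; Y = 1 ),
    Dist is X + Y.

% ?- distance(spec(majority, 10), spec(two_thirds, 20), D).   gives D = 2.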

Page 22: Academic Course: 03 Autonomic Multi-Agent Systems

Awareness and Self-healing

Congruence of appropriation/provision rules and state

I Interleave rules of social-{order | exchange | choice}
  I Brute facts (resource level in past and current state)
  I Individual beliefs (resource level in future states)
  I Common beliefs formed by gossiping (opinion formation)
  I Expressed preferences mapped to collective choice
  I Collective choice represented as institutional fact

[Figure: opinion-formation architecture – opinions, confidence, institutional rules and institutional facts, winner determination, method library, expressed preferences, event calculus narrative]

Page 23: Academic Course: 03 Autonomic Multi-Agent Systems

Voting Protocol: Informal Description

I Informal specification of a decision-making procedure according to Robert’s Rules of Order (Newly Revised)
  I a committee meets and the chair opens a session
  I a committee member requests and is granted the floor
  I that member proposes a motion
  I another member seconds the motion
  I the members debate the motion
  I the chair calls for those in favour to cast their vote
  I the chair calls for those against to cast their vote
  I the motion is carried or not, according to the standing rules of the committee

Page 24: Academic Course: 03 Autonomic Multi-Agent Systems

Voting Protocol: Graphical Description

I Various options for graphical representation
  I UML Sequence diagrams
  I State diagrams

[Figure: state diagram of the voting protocol – states s0 pending, s1 proposed, s2 seconded, s3 voting, s4 voted, s5 resolved; transitions labelled with the actions open_session, propose, second, open_ballot, vote, revoke, abstain, close_ballot, declare and close_session]

I Note certain simplifications to the RONR specification
  I No floor request, debate or agenda
  I Voting, changing of votes etc., concurrently

Page 25: Academic Course: 03 Autonomic Multi-Agent Systems

An Event Calculus Specification

I Basic Items: Events and Fluents

I Institutional Powers

I Voting and Counting Votes

I Permission and Obligation

I Sanctions

I Objection

Page 26: Academic Course: 03 Autonomic Multi-Agent Systems

Actions

Action                        Indicating . . .
open_session(Ag, S)           open and close a session
close_session(Ag, S)
propose(Ag, M)                propose and second a motion
second(Ag, M)
open_ballot(Ag, M)            open and close a ballot
close_ballot(Ag, M)
vote(Ag, M, aye)              vote for or against a motion,
vote(Ag, M, nay)              abstain or change vote
abstain(Ag, M)
revoke(Ag, M)
declare(Ag, M, carried)       declare the result of a vote
declare(Ag, M, not_carried)

Page 27: Academic Course: 03 Autonomic Multi-Agent Systems

Fluents

Fluent              Range
sitting(S)          boolean
status(M)           {pending, proposed, seconded, voting(T), voted, resolved}
votes(M)            ℕ × ℕ
voted(Ag, M)        {nil, aye, nay, abs}
resolutions(S)      list of motions
qualifies(Ag, R)    boolean
role_of(Ag, R)      boolean
pow(Ag, Act)        boolean
per(Ag, Act)        boolean
obl(Ag, Act)        boolean
sanction(Ag)        list of integers

Page 28: Academic Course: 03 Autonomic Multi-Agent Systems

Institutional Power

I Recall: an empowered agent performs a designated action in context which creates or changes an institutional fact.

I We want to express the effects of the designated protocol (speech) actions, in particular:
  I vote
  I open_session and open_ballot
  I declare

I For the specification of the effects of these actions, it is important to distinguish between:
  I the act of (‘successfully’) casting a vote, and
  I the act by means of which the casting of the vote is signalled (e.g. sending a message of a particular form via a TCP/IP socket connection).

Page 29: Academic Course: 03 Autonomic Multi-Agent Systems

Institutional Power

I Institutional power to open the ballot on a motion:

pow(C, open_ballot(C, M)) = true holdsAt T ←
    status(M) = seconded holdsAt T ∧
    role_of(C, chair) = true holdsAt T

I Institutional power to cast a vote:

pow(V, vote(V, M, _)) = true holdsAt T ←
    status(M) = voting(_) holdsAt T ∧
    role_of(V, voter) = true holdsAt T ∧
    not role_of(V, chair) = true holdsAt T ∧
    voted(V, M) = nil holdsAt T
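A sketch of how the second rule might look once pre-processed into the holdsAt/2 Prolog form shown later (Page 36); the clause follows that style, but the exact rendering (anonymous variables for the dropped arguments, negation as failure for ‘not’) is an assumption:

holdsAt( pow(V, vote(V, M, _)) = true, T ) :-
    holdsAt( status(M) = voting(_), T ),
    holdsAt( role_of(V, voter) = true, T ),
    \+ holdsAt( role_of(V, chair) = true, T ),
    holdsAt( voted(V, M) = nil, T ).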

Page 30: Academic Course: 03 Autonomic Multi-Agent Systems

Effects of Institutional Power (1)

I Chair performs open ballot(C ,M)

open_ballot(C, M) initiates votes(M) = (0, 0) at T ←
    pow(C, open_ballot(C, M)) = true holdsAt T

open_ballot(C, M) initiates voted(V, M) = nil at T ←
    pow(C, open_ballot(C, M)) = true holdsAt T ∧
    role_of(V, voter) = true holdsAt T

open_ballot(C, M) initiates status(M) = voting(T) at T ←
    pow(C, open_ballot(C, M)) = true holdsAt T

I Now voters have power to cast votes

Page 31: Academic Course: 03 Autonomic Multi-Agent Systems

Effects of Institutional Power (2)

I Casting and counting votes

vote(V, M, aye) initiates votes(M) = (F1, A) at T ←
    pow(V, vote(V, M, aye)) = true holdsAt T ∧
    votes(M) = (F, A) holdsAt T ∧
    F1 = F + 1

vote(V, M, aye) initiates voted(V, M) = aye at T ←
    pow(V, vote(V, M, _)) = true holdsAt T

I Power to revoke a vote is now granted (revocation without a vote was ‘meaningless’)

I Power is also used to advance the status of a motion, perform role assignment, etc.
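A sketch of the vote-counting effect axiom in the same Prolog rendering; the slides give only the declarative rules, so the initiates/3 clausal form below (with the effect conditions in the clause body, and is/2 for the arithmetic) is an assumption about how the pre-processing works:

initiates( vote(V, M, aye), votes(M) = (F1, A), T ) :-
    holdsAt( pow(V, vote(V, M, aye)) = true, T ),
    holdsAt( votes(M) = (F, A), T ),
    F1 is F + 1.                                   % one more vote in favour

initiates( vote(V, M, aye), voted(V, M) = aye, T ) :-
    holdsAt( pow(V, vote(V, M, aye)) = true, T ).  % record that V has now voted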

Page 32: Academic Course: 03 Autonomic Multi-Agent Systems

Permission

I ‘Right’ aspect of enfranchisement
  I Agents have the power to vote
  I Agents have the permission to vote

I In this case (although not always) power implies permission
  I Nobody should stop them from exercising their power

I Therefore the chair’s power to close the ballot is not always accompanied by permission to exercise it

pow(C, close_ballot(C, M)) = true holdsAt T ←
    status(M) = voting holdsAt T ∧
    role_of(C, chair) = true holdsAt T

per(C, close_ballot(C, M)) = true holdsAt T ←
    role_of(C, chair) = true holdsAt T ∧
    status(M) = voting(T′) holdsAt T ∧
    T > T′ + 10

Page 33: Academic Course: 03 Autonomic Multi-Agent Systems

Obligation

I ‘Entitlement’ aspect of enfranchisement
  I ‘Access’ to the ‘voting machine’ is a ‘physical’ issue
  I Correct vote count: as above
  I A ‘fair’ outcome: obligation to declare the result correctly, e.g. a simple majority vote

obl(C, declare(C, M, carried)) = true holdsAt T ←
    role_of(C, chair) = true holdsAt T ∧
    status(M) = voted holdsAt T ∧
    votes(M) = (F, A) holdsAt T ∧
    F > A

Page 34: Academic Course: 03 Autonomic Multi-Agent Systems

Sanction

I The chair always has the power to close a ballot

I It has permission to exercise the power only after some time has elapsed

I If it closes the ballot early, it may be sanctioned

close_ballot(C, M) initiates sanction(C) = [(102, M)|S] at T ←
    role_of(C, chair) = true holdsAt T ∧
    per(C, close_ballot(C, M)) = false holdsAt T ∧
    sanction(C) = S holdsAt T

I The sanction results in penalty only if someone objects

I Feature of RONR: ‘anything goes unless someone objects’

Page 35: Academic Course: 03 Autonomic Multi-Agent Systems

The Event Calculus: Implementation Routes

I EC has been mainly used for narrative assimilation:
  I Given a narrative, check that it is consistent
  I Given a consistent narrative, check what holds when

I There are many EC variants and implementations, for different purposes or requirements
  I Discrete Event Calculus Reasoner, for proving properties & planning
  I Cached Event Calculus, for efficient narrative assimilation
  I etc. . . .

I We will show the use of the Simplified EC for narrative assimilation

I Pre-process directly into Prolog
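The slides do not reproduce the Simplified EC axioms themselves; a minimal sketch of the standard formulation over initially/1, happens/2, initiates/3 and terminates/3, written to match the holdsAt/2 style of the next slide, might look as follows (the broken/3 helper and the treatment of multi-valued fluents follow the usual textbook presentation, not these slides):

:- dynamic initially/1, happens/2, initiates/3, terminates/3.

% F = V holds at T if it held initially, or was initiated by an earlier event,
% and has not been 'broken' in the meantime.
holdsAt( F = V, T ) :-
    initially( F = V ),
    \+ broken( F = V, 0, T ).
holdsAt( F = V, T ) :-
    happens( E, T1 ),
    T1 < T,
    initiates( E, F = V, T1 ),
    \+ broken( F = V, T1, T ).

% F = V is broken between T1 and T2 if an intervening event terminates it,
% or initiates the same fluent with a different value.
broken( F = V, T1, T2 ) :-
    happens( E, T ),
    T1 =< T, T < T2,
    (  terminates( E, F = V, T )
    ;  initiates( E, F = V1, T ), V1 \== V
    ).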

Page 36: Academic Course: 03 Autonomic Multi-Agent Systems

The Event Calculus: Narrative Assimilation


Initial Social State

initially( role_of(cAgent,chair) = true ).
initially( role_of(cAgent,voter) = true ).
initially( role_of(pAgent,voter) = true ).
…

Social Constraints

…
holdsAt( obl(C, declare(C,M,carried)) = true, T ) :-
    holdsAt( role_of(C,chair) = true, T ),
    holdsAt( status(M) = voted, T ),
    holdsAt( votes(M) = (F,A), T ),
    F > A.
…

Narrative

happens( open_session(cAgent, sesh), 1).
happens( propose(pAgent, m1), 2).
happens( second(sAgent, m1), 3).
happens( open_ballot(cAgent, m1), 4).
happens( vote(pAgent, m1, aye), 5).
happens( vote(sAgent, m1, nay), 6).
happens( vote(vAgent, m1, nay), 7).
happens( revoke(sAgent, m1), 8).
happens( vote(sAgent, m1, aye), 9).
happens( close_ballot(cAgent, m1), 10).
happens( declare(cAgent, m1, not_carried), 11).
happens( close_session(cAgent, sesh), 12).

Resulting Social State: roles, powers, permissions, obligations, sanctions
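Assuming the EC core sketched earlier, the social constraints and this narrative are loaded as one Prolog program, ‘what holds when’ becomes a matter of querying holdsAt/2; the queries and expected answers below are illustrative, and the tally depends on effect axioms (e.g. for revoke) that the slides only gesture at:

% Final tally just before the ballot is closed
% (expected F = 2, A = 1 if revoking removes sAgent's earlier nay).
?- holdsAt( votes(m1) = (F, A), 10 ).

% Is the chair obliged to declare the motion carried once the ballot is closed?
?- holdsAt( obl(cAgent, declare(cAgent, m1, carried)) = true, 11 ).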

Page 37: Academic Course: 03 Autonomic Multi-Agent Systems

Example: The Voting Protocol

I EC Specification pre-processed into Prolog program

I Process narratives for consistency and ‘what holds when’:

agent    roles             powers                        permissions        obligations        sanctions
cAgent   chair, voter      close_ballot, close_session   close_ballot
pAgent   voter, proposer
sAgent   voter, proposer   vote                          vote
vAgent   voter

happens(vote(sAgent, m1, aye))

cAgent   chair, voter      close_ballot, close_session   close_ballot       close_ballot
pAgent   voter, proposer
sAgent   voter, proposer
vAgent   voter

happens(close_ballot(cAgent, m1))

cAgent   chair, voter      declare, close_session        declare(carried)   declare(carried)
pAgent   voter, proposer
sAgent   voter, proposer
vAgent   voter

happens(declare(cAgent, m1, not_carried))

cAgent   chair, voter      close_session                 close_session                         102
pAgent   voter, proposer   propose                       propose
sAgent   voter, proposer   propose                       propose
vAgent   voter

Page 38: Academic Course: 03 Autonomic Multi-Agent Systems

Summary

I Given an overview of multi-agent systems and intelligent agents

I Introduced the requirement for some multi-agent systems to show awareness and autonomic properties, e.g. for multi-criteria control in decentralised systems with heterogeneous components

I Given an overview of approaches and an analytic framework

I Presented one approach in detail: dynamic norm-governed systems

I Specified a voting protocol for specification change

I Voting allows (self-)aware agents to adapt system specifications at run-time to meet dynamic environmental conditions (autonomics with conventional rules)

Page 39: Academic Course: 03 Autonomic Multi-Agent Systems

Bibliography I

I Open Systems:
  I Hewitt, C. 1991. Open information systems semantics for distributed artificial intelligence. Artificial Intelligence 47, 79–106.

I Norm-Governed Systems:
  I Meyer, J.-J. and Wieringa, R. 1993. Deontic Logic in Computer Science: Normative System Specification. J. Wiley and Sons.
  I Jones, A. and Sergot, M. 1996. A formal characterisation of institutionalised power. Journal of the IGPL 4(3), 429–445.
  I Makinson, D. 1986. On the formal representation of rights relations. Journal of Philosophical Logic 15, 403–425.
  I Moses, Y. and Tennenholtz, M. 1995. Artificial social systems. Computers and Artificial Intelligence 14(6), 533–562.
  I Sergot, M. 2001. A computational theory of normative positions. ACM Transactions on Computational Logic 2(4), 522–581.
  I Singh, M. 1998. Agent communication languages: rethinking the principles. IEEE Computer 31(12), 40–47.