
How To Implement Social Policies. A Deliberative Agent Architecture

Roberto Pedone Rosaria Conte

IP-CNR, Division "AI, Cognitive and Interaction Modelling"

PSS (Project on Social Simulation) V.LE Marx 15, 00137 Roma.

June 5-8, 2000

The Problem...

• Multiple agents in common environments face problems posed
  – by a finite world (e.g., resource scarcity), and therefore
  – by social interference.
• These are problems of social interdependence
  – Shared problems
  – Requiring a common solution = a multi-agent plan (multiple actions for a unique goal)
• Common solutions need to be adopted by the majority.
• How can interdependent but autonomous agents be brought to apply common solutions?

Autonomous Agents?

• Self-sufficient: by definition, impossible!
• Self-interested: what does it mean?
• Selfish: they have their own criteria to filter external inputs
  – Beliefs: they may not perceive/understand
    • the problem (cognitive biases)
    • the problem as common
    • the solution
    • the complementarity
  – Goals: they accept requests when these are useful for some of their goals
• BUT: there is a difference between interests and goals...

How To Achieve Common Solutions Among Autonomous Agents?

Two approaches:
• Bottom-up: emergent, spontaneous processes among adaptive agents
• Top-down: designed incentives and sanctions modifying the preferences of rational agents
  – Acquisition of solutions
  – Enforcing mechanisms
  – Violation

[Diagram: agents with preferences pa > pb and a common solution S]

BUT...

• Evolutionary processes and Adaptive Agents: socially acceptable outcomes are doubtful!

• Rational agents require unrealistic conditions (specified, severe and certain sanctions).

Bottom Up. Adaptive Agents

• Analogy between biological evolution and social processes
  – Fitter individuals survive
  – Social processes spread and evolve through imitation of fitter agents.
• But
  – How can agents perceive the fitness of a strategy? "How do individuals get information about average fitness, or even observe the fitnesses of other individuals?" (Chattoe, 1998)
  – How to make sure that what propagates is what is socially desirable, without the intervention of some deus ex machina (the programmer)?

Top Down. Rational Agents

• Socially desirable effects are deliberately pursued through incentives and sanctions.
• Incentives and sanctions induce rational agents to act according to the global interest.
• Rational agents take decisions by calculating the subjective expected value of actions according to their utility function.
• A rational decider will comply with the norm if the utility of the incentive (or of avoiding the sanction) is higher than the utility of transgression.

Effects

• Rational deciders will violate a norm ni as soon as one or more of the following conditions applies:
  – Sanctions are not imposed.
  – Sanctions are not specified.
  – The sanction for violating ni is lower than the value of transgression, with equal probability (1/2) of the sanction being applied.
  – The sanction for violating an incompatible norm is higher.
  – The sanction for violating the norm is never or rarely applied.
• A fast decline in norm compliance is likely to follow from any of the above conditions.
• Then, what?

Top-Down and Bottom-Up

• Top-down
  – agents acquire S
  – agents decide to accept S
  – agents have S even if they do not apply it
• Bottom-up
  – agents infer S from others
  – communicate S to one another
  – control each other.
• Difference from previous approaches:
  – S is represented as a specific object in the mind
  – S can travel from one mind to the other.

[Cartoon: agents passing S around. "Do you know that S?"; "She believes S. Should I?"; "These guys believe S. Will they act accordingly?"; "Shut up, otherwise I'll knock you down!"]

But How To Tell S?

• Sanction
  – may be sufficient (but unnecessary) for acceptance
  – but it is insufficient (and unnecessary) for recognition.

[Cartoon: "Shut up, otherwise I'll knock you down!"; "This guy is crazy, better to do as he likes!"]

Believed Obligations

• May be insufficient for acceptance

• But necessary & sufficient for recognition!

[Cartoon: "Shut up!"; "NO!"; "You ought to!"; "I know, but I don't care!"]

This Requires...

• A cognitive deliberative agent:
  – To communicate, infer, control: a meta-level representation (beliefs about representations of S)
  – To decide whether to accept or reject a believed S: a meta-meta-level representation (a decision about a belief about a representation of S).
• When the content of the representation is a norm, we have Deliberative Normative Agents.

[Diagram: levels of representation: S, BEL(S), GOAL(S), BEL(GOAL(S)); from cognitive to deliberative]

Deliberative Normative Agents

• Are able to recognize the existence of norms

• Can decide to adopt a norm for different reasons (different meta-goals)

• Can deliberately follow the norm

• or violate it in specific cases

• Can react to violations of the norm by other agents

Require more realistic conditions!

Deliberative Normative Agents

• Can reason about norms
• Can communicate norms
• Can negotiate about norms

This implies they:

• Have norms as mental objects
• Have different levels of representations and links among them

A Deliberative Agent Architecture With Norms (Castelfranchi et al., 1999)

DESIRE I (Brazier et al., 1999)

[Diagram: the DESIRE agent architecture. Top-level components: Agent Interaction Management, World Interaction Management, Maintenance of World Information, Maintenance of Agent Information, Maintenance of Society Information, Own Process Control. Information links include incoming/outgoing communication, incoming observation results, intended communication, intended actions, initiated actions and observations, observed and communicated world/agent/society information, and believed world/agent/society information.]

DESIRE II: Main Components

• Communication: Agent Interaction Management
• Perception: World Interaction Management
• Maintenance of Agent Information = agent models
• Maintenance of World Information = world models
• Maintenance of Society Information = social world models
• Own Process Control = mental states & processing: goal generation & dynamics, belief acceptance, decision-making.
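To make the decomposition concrete, here is a minimal Python sketch of an agent holding these components. It is my own illustration with assumed field names, not DESIRE code (DESIRE is a compositional specification framework, not a Python library).

```python
# Hedged sketch: the top-level DESIRE components as plain containers.
# Component and field names follow the slide; contents are placeholders.
from dataclasses import dataclass, field

@dataclass
class DesireStyleAgent:
    # Communication: Agent Interaction Management
    incoming_communication: list = field(default_factory=list)
    outgoing_communication: list = field(default_factory=list)
    # Perception: World Interaction Management
    incoming_observations: list = field(default_factory=list)
    intended_actions: list = field(default_factory=list)
    # Maintenance of Agent / World / Society Information
    agent_models: dict = field(default_factory=dict)
    world_model: dict = field(default_factory=dict)
    society_model: dict = field(default_factory=dict)
    # Own Process Control: mental states & processing
    beliefs: list = field(default_factory=list)
    goals: list = field(default_factory=list)
    norms: list = field(default_factory=list)

    def own_process_control(self) -> None:
        """Placeholder for goal generation & dynamics, belief acceptance
        and decision-making over the maintained information."""
        pass
```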


Representation at Different Meta-Levels

• 1. Object level: beliefs like
  has_norm(society1, you_ought_to_drive_on_the_right)
• 2. Meta-level:
  communicated_by(has_norm(self, you_ought_to_drive_on_the_right), positive_assertion, agent_B)
• 3. Meta-meta-level:
  adopted_own_process_goal(you_ought_to_drive_on_the_right)
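One possible way to hold such nested representations as data, assuming a simple illustrative Term structure (the class and the constructors are mine, not DESIRE syntax):

```python
# Hedged sketch: nested (meta-level) representations as simple terms.
# The Term class is illustrative only; the three examples mirror the slide.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    functor: str
    args: tuple = ()

    def __str__(self):
        if not self.args:
            return self.functor
        return f"{self.functor}({', '.join(map(str, self.args))})"

norm = Term("you_ought_to_drive_on_the_right")

# 1. Object level: a belief about a norm of society1
object_level = Term("has_norm", (Term("society1"), norm))

# 2. Meta-level: a belief about how that representation was communicated
meta_level = Term("communicated_by",
                  (Term("has_norm", (Term("self"), norm)),
                   Term("positive_assertion"),
                   Term("agent_B")))

# 3. Meta-meta-level: a decision (own-process goal) about the believed norm
meta_meta_level = Term("adopted_own_process_goal", (norm,))

for level in (object_level, meta_level, meta_meta_level):
    print(level)
```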

DESIRE III

DESIRE IV: Own Process Control

[Diagram: Own Process Control comprises Norm Management, Strategy Management, Goal Management and Plan Management. Labelled information links: belief info to norm/strategy/goal management, norms, normative meta-goals, goal information, selected goals, goal control, plan control, selected actions, monitor information, evaluation info.]

DESIRE V: Own Process Control Components

(information)
• Norm Management (norm beliefs)
• Strategy Management (candidate goals)
• Goal Management (selected goals)
• Plan Management (plan chosen)
(action)

DESIRE VI: Norms and Goals

• Non-adopted norms:
  – useful for coordination (predicting the behaviour of the other agents)
• Adopted norms:
  – impact on goal generation: they are among the possible 'sources of goals' -> normative goals
  – impact on goal selection, by providing criteria about how to select among existing goals, e.g., preference criteria (see the sketch below).
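A minimal sketch, under assumed names, of adopted norms acting both as a source of normative goals and as a preference criterion over existing goals:

```python
# Hedged sketch: adopted norms (a) generate normative goals and
# (b) rank existing goals. Names and structures are illustrative only.
from dataclasses import dataclass

@dataclass
class Norm:
    name: str
    adopted: bool
    goal_it_generates: str          # normative goal, if the norm is adopted
    preference_bonus: float = 1.0   # how much the norm favours compliant goals

def generate_goals(current_goals: list, norms: list) -> list:
    """Adopted norms are one possible 'source of goals': add normative goals."""
    normative_goals = [n.goal_it_generates for n in norms if n.adopted]
    return current_goals + [g for g in normative_goals if g not in current_goals]

def select_goal(goals: list, norms: list) -> str:
    """Adopted norms also provide preference criteria over existing goals."""
    def score(goal: str) -> float:
        return sum(n.preference_bonus for n in norms
                   if n.adopted and n.goal_it_generates == goal)
    return max(goals, key=score)

norms = [Norm("reciprocity", adopted=True, goal_it_generates="deliver_promised_goods"),
         Norm("drive_right", adopted=False, goal_it_generates="drive_on_the_right")]
goals = generate_goals(["maximise_profit"], norms)
print(goals)                       # ['maximise_profit', 'deliver_promised_goods']
print(select_goal(goals, norms))   # 'deliver_promised_goods'
```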

DESIRE VII: Norms and plans

• Norms may generate plans

• Norms may select plans

• Norms may select actions

E.g., the norm "be kind to colleagues" may lead to a preferred plan to reach a goal within an organisation (see the sketch below).
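Continuing in the same hypothetical style, norms can also rank candidate plans; the "be kind to colleagues" example might look like this (plan names and the compliance test are illustrative, not from DESIRE):

```python
# Hedged sketch: a norm ranking candidate plans for the same goal.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    actions: list
    violates: set          # names of norms the plan would violate

def select_plan(plans: list, active_norms: set) -> Plan:
    """Prefer plans that violate as few active norms as possible;
    break ties by plan length (shorter first)."""
    return min(plans, key=lambda p: (len(p.violates & active_norms), len(p.actions)))

plans = [
    Plan("pressure_colleague", ["demand_report_now"], violates={"be_kind_to_colleagues"}),
    Plan("ask_politely", ["request_report", "offer_help"], violates=set()),
]
print(select_plan(plans, {"be_kind_to_colleagues"}).name)   # ask_politely
```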

To sum up

• Adaptive agents: fit = socially acceptable?

• Rational agents are good enough if sanctions are severe, effective, and certain.

Otherwise, collapse in compliance...

With Deliberative Agents

• Acquisition of norms online

• Communication and negotiation (social monitoring and control)

• Flexibility:
  – agents follow the norm whenever possible
  – agents violate the norm sometimes
  – agents always violate the norm, if possible

• But graceful degradation with uncertain sanctions!

Work in Progress

DESIRE is used for simulation-based experiments on the role of deliberative agents in distributed social control (a sketch of one simulation round follows this list):
• A market with heterogeneous, interdependent agents
  – Agents make contracts in the morning (resource exchange under survival pressure)
  – and deliver in the afternoon (norm of reciprocity).
  – Violations can be found out, and then
    • the news spread through the group,
    • the event is denounced,
    • or both.
• Objectives:
  – What are the effects of violation? (Under given environmental conditions, norms may be non-adaptive...)
  – When and why do agents react to violations?
  – What are the effects of reaction?
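One possible reading of this setup, as a minimal sketch of a single simulation round (agent attributes, the detection probability and the gossip/denunciation mechanics are my assumptions, not the actual experimental design):

```python
# Hedged sketch: one round of the morning-contract / afternoon-delivery market.
# Detection, gossip and denunciation parameters are illustrative only.
import random

def run_round(agents, p_detect=0.5, spread_news=True, denounce=True):
    """Pair agents into contracts, let each decide whether to deliver
    (norm of reciprocity), and propagate detected violations."""
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):          # morning: contracts
        for giver in (a, b):                             # afternoon: delivery
            if giver["will_deliver"]:
                continue                                  # norm respected
            if random.random() < p_detect:                # violation found out
                if spread_news:                           # news spread in group
                    for other in agents:
                        other.setdefault("known_violators", set()).add(giver["name"])
                if denounce:                              # event denounced
                    giver["denounced"] = True

agents = [{"name": f"a{i}", "will_deliver": random.random() < 0.7} for i in range(10)]
run_round(agents)
```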

What To Do Next

• DESIRE is complex
  – Computationally: too few agents. Can simpler languages implement meta-level representations?
  – Mentally: too much deliberation. Emotional/affective enforcement?
    NB -> E (moral sense, guilt) -> NG -> NA.
• Emotional shortcut:
  others' evaluations -> SE (shame) -> NA
  implicit norms, implicit n-goals (both routes are sketched below).
• Affective computing! But integrated with meta-level representations.
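Reading the abbreviations as norm belief (NB), emotion (E), normative goal (NG), normative action (NA) and self-evaluative emotion (SE), which is my interpretation of the slide, the two routes could be sketched as follows; thresholds and names are illustrative:

```python
# Hedged sketch: two routes to a normative action (NA).
# Abbreviation expansions and thresholds are assumptions, not the slides'.

def deliberative_route(norm_belief, guilt_anticipated):
    """NB -> E (guilt) -> NG -> NA: the norm belief raises anticipated guilt,
    guilt generates a normative goal, and the goal yields the action."""
    if guilt_anticipated > 0.5:                    # emotion strong enough
        normative_goal = f"comply_with({norm_belief})"
        return f"act_on({normative_goal})"         # NA
    return None

def emotional_shortcut(others_disapproval):
    """others' evaluations -> SE (shame) -> NA: no explicit norm belief or
    normative goal; the (implicit) norm is enforced by shame alone."""
    shame = others_disapproval                     # SE
    return "conform" if shame > 0.5 else None      # NA

print(deliberative_route("reciprocity", guilt_anticipated=0.8))  # act_on(comply_with(reciprocity))
print(emotional_shortcut(others_disapproval=0.9))                # conform
```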