
Computational & Mathematical Organization Theory 5:3 (1999): 203–228
© 1999 Kluwer Academic Publishers. Manufactured in The Netherlands

Social Intelligence Among Autonomous Agents

ROSARIA CONTE
Division of AI, Cognitive and Interaction Modelling—PSS (Project on Social Simulation), IP/Cnr, V.le Marx 15, 00137 Roma
email: [email protected]

Abstract

This paper presents a view of social intelligence as a multiple and inter-agent property. On one hand, some fundamental requisites for a theory of mind in society are presented in the paper. On the other, the role of objective social consequences of social action is argued to multiply agents' mental properties. Starting from the problems posed by social situatedness, the main mental ingredients necessary for solving these problems are identified. After an operational definition of a socially situated agent, a variety of tasks or demands will be shown to impinge on socially situated agents. The specific cognitive requirements needed for individual agents to accomplish these tasks will be identified. However, these cognitive requirements are shown to be insufficient to answer the social demands previously identified. In particular, the effective execution of individual social action seems to produce a number of interesting social consequences which extend to and empower the individual action. The follow-up hypothesis is that further cognitive properties consequently arise at the individual level, and contribute to reproduce and reinforce multiple agents' intelligence.

Keywords: agents, bounded autonomy, social situatedness

"...all the reasons we have to believe that there are minds united to the bodies of the men who speak to us are that they often give us new thoughts, which we did not have before, or that they oblige us to change those we already had."
(Cordemoy, Discours physique de la parole, 1668:252)

1. Introduction

In this paper, specific mental capacities and characteristics that are essential for intelligent agents to act adaptively in a social environment will be examined at a theoretical level. The analysis is based upon the following main theses:

1. Socially situated but autonomous agents evolve specific cognitive capacities (i.e., individual social intelligence) to meet the demands (avoid the costs and take advantage) of their social environment. It is impossible to understand social intelligence without resorting to a general theory of intelligent individual action, as is provided by the sciences of the artificial and especially by the AI discipline.


2. However, in order to understand social action it is necessary to go beyond the individual level of analysis to reach a multi-agent notion of social intelligence. Individual intelligence is insufficient: social intelligence is indeed a multiple agent property, namely a property which applies to a set of intelligent autonomous agents in a common world. Social intelligence implements individual agents' mental properties and capacities thanks to the objective effects of social action, which extend the powers and means of individual agents, and which in turn demand higher-level cognitive complexity. Social intelligence includes (a) the objective effects of social action, (b) the cognitive properties of individual social action, and (c) the relationships between the two.

3. The more autonomous the socially situated agents, the more complex the interplay between social effects and cognitive requirements, and the higher the demand for cognitive complexity.

The whole argument will be developed as follows:

1. Socially situated autonomous agents evolve a capacity to filter external (social) inputs and to adapt to and take advantage of them (social responsiveness, as distinct from reactivity);

2. Social responsiveness requires individual social intelligence, that is, the capacity to reason about others' mental states, namely their beliefs and goals (social reasoning), and allows for social action (action based upon social reasoning and planning) and social commitment. (First level of cognitive emergence.)

3. But social commitment in turn produces objective effects of one's action on the social environment, mainly consisting of others' powers (e.g., entitlement) and one's accountability (social responsibility). (First level of social emergence.)

4. More complex social action, i.e. group and collective action, produces further cognitive properties. At the cognitive level, group exchange is based upon social commitment (to act for a given recipient) and group commitment (to act for and before all members of the group). Analogously, participation in a collective action implies social, group, and collective commitment (to facilitate and watch over one another's task accomplishment for the accomplishment of a common plan). (Second level of cognitive emergence.)

5. This has further objective effects. For example, group exchange extends the agent's objective responsibilities within the group to the consequences of its actions on indirect beneficiaries. Collective action, in its turn, further enlarges agents' responsibilities and gives rise to collective obligations. (Second level of social emergence.)

In the next section, an operational definition of a socially situated agent will be provided, and a variety of tasks or demands will be shown to impinge on socially situated agents. In Section 3, individual social intelligence will be analysed in its specific cognitive requirements, which are needed for individual agents to form a social goal and undertake a social action. In Section 4, multiple agent intelligence will be investigated. In particular, more complex forms of social action, such as group exchange and collective action, will be examined, and their respective cognitive ingredients and objective effects will be identified. In Section 5, the necessity of accounting for cognitive complexity in order to understand social organisational phenomena will be discussed. In the final section, the argument presented throughout the paper will be recapitulated and some conclusions will be drawn.


Figure 1. Agent as intermediary level.

2. Socially Situated Agents

What is a socially situated agent? There are several answers to this question, which presuppose different views and models of the agent. In the present paper, the agent is meant as autonomous, that is, as endowed with individual criteria by means of which it selects external inputs (including social ones), and which are largely responsible for individuals' variability.

In which sense can we say that an autonomous agent is socially situated? Social situatedness implies a complex network of social relationships (see figure 1), in which each agent may find social resources for its goals, and exercise influence, send requests, ask for help, etc.; each agent may turn out to be a resource for someone else's goals, and undergo others' influence by receiving requests and inputs of any sort.

As we shall endeavour to show in this paper, an autonomous intelligent agent selects external inputs to avoid/reduce the costs and even take advantage of its social surroundings and, given such surroundings, acts adaptively and in an intelligent way. This causes the agent to form specific mental objects (for example, social goals) which are the individual mental effects of social situatedness and the cognitive conditions for socially situated action.

2.1. Social Demands and Models of Agents

What are the problems encountered by socially situated autonomous agents? Which demands does a common world pose to intelligent autonomous agents? Which solutions should they develop to answer those problems, and which tasks are they expected to accomplish to meet those demands?

Cooperation, contribution to public goods and social dilemmas. The various domains of interest and application have pointed out different specific problems and demands. In the social scientific fields strongly influenced by decision and game theory, the main problem at stake is: why should self-interested agents cooperate and contribute to public goods (cf. Hardin, 1968; Hardin, 1982; Margolis, 1982; Olson, 1965; Taylor, 1987; Axelrod, 1984)? On the other hand, a more general problem is posed by the simulation studies on social dynamics (for a review and an example, cf. Hegselmann, 1996), namely how given social patterns (spatial configurations, social segregation, fairness and solidarity) emerge from autonomous automata in a common artificial world. The answer most generally provided within these studies is that social phenomena emerge spontaneously from local agents in interaction, with no need for social capacities, abilities and competencies of the agents to evolve. Simulation studies based upon game theory and more generally utility theory propose a fundamentally static view of the agent (for such an argument against game theory, see Axelrod, 1995).

A different answer to the classical question of cooperation-coordination is often encountered in the social dilemmas literature (cf. Schroeder, 1995; Liebrand and Messick, 1996). For example, does social experience enhance the agents' inclination to cooperate (Brewer, 1986), and if so, how? Pro-social dispositions and motivations have been hypothesised by several authors (Edney, 1980; Kramer and Brewer, 1986; Messick, 1974; Kramer and Goldman, 1995) in order to account for experimental findings concerning problem-solving in social dilemmas.

Social control and group compliance. Homans (1951, 1954) explains compliance with the group's obligations in terms of a built-in need for approval and a learned capacity to exchange approval for approval. This theory has been applied to account for social influence and control (Macy and Flache, 1995). Essentially, this theory explains social behaviour in terms of reinforcement laws and built-in social motivations.

Social power and organisations. Social power in organisations is explained in terms of resource dependence and social exchange. In this approach (Cook and Emerson, 1978; Pfeffer, 1980), agents are not mentally shaped, but objectively affected by the network of social relationships they are involved in. In other, more complex approaches to the study of organisations (Carley and Prietula, 1994; Carley and Newell, 1994), agents' cognitive capacities are affected by the organisational tasks. Agents are in fact endowed with mental models of the organisations and of their role within the organisations.

Collective action. Philosophers of mind, social philosophers and Multi-Agent scientists have often raised the issue of how collective action is possible. The answer provided is again two-fold:

1. We-intentions (cf. Searle, 1990; this is the formal correlate of the pro-social motivations hypothesised by social psychologists and social dilemmas scientists).

2. Shared goals plus mutual beliefs (cf. Tuomela, 1992; this is the logic-based correlate of the social beliefs).

To sum up, social pressures have been said to affect agents in one or other of the following ways and degrees:

1. Strategic agents: agents endowed with beliefs about others' moves and strategies (forward-looking explanation). Agents have strategic capacities and beliefs; they are able to predict and take into account the effects of their actions (see Binmore, 1994). This answer implies a minimal social intelligence, namely the capacity to take into account (cf. figure 2) one's belief about the interdependence between one's and another agent's action payoffs. A strategic agent takes into account payoffs' interdependence to maximise its own utility. But this is a rather weak sense of sociality: the agent does not need to take into account the others' goals. All it needs to do is choose the option which maximises its own utility relative to the interdependent payoff matrix, which is given. The strategic agent is indifferent to the move itself. Instead, usually, agents are concerned not only with payoffs' but also with actions' interdependencies.

Figure 2. The strategic agent.

2. Social motivations. Pro-social dispositions and motivations (cf. figure 3) have been hypothesised by several authors (Edney, 1980; Kramer and Brewer, 1986; Messick, 1974; Kramer and Goldman, 1995) in order to account for experimental findings concerning problem-solving in social dilemmas. Here, agents' mental mechanisms are viewed as essentially shaped by the social practice.

Figure 3. The social agent.

3. Social learning, which is equal to social motivations (e.g., need for approval) plus imitation (backward-looking explanation). On the grounds of past experience, agents modify their rules and heuristics when they fail to maximise utility (winners-stay-losers-change). Through social learning, they import either the most frequent or the most successful heuristics and rules applied by other agents in the same domain, or finally those shown by their more or less immediate neighbours (cf. Weibull, 1996). Here, agents have a fundamental capacity for social perception and imitation (cf. figure 4).

Figure 4. The socially learning agent.

None of these pictures gives a satisfactory account of social intelligence. The strategic agent is an application of rationality to social beliefs. It accounts for coordination and exploitation, but not for cooperation (cf. Castelfranchi and Conte, 1998). The social agent provides a pre-established solution to the problem, while the social learning theory views agents as shaped by behavioural laws which strongly reduce the role of their mental capacities. Let us consider now another interesting type of social agent:

4. Task- and socially-bounded intelligence. In (Carley and Prietula, 1994; Carley and Newell, 1994), social action is viewed as essentially influenced by the agent's mental models of the tasks prescribed by the organisations. In this approach, the intelligence of the agent, i.e. its capacity for belief-based and goal-governed action, is extended to the social domain, and social action is governed by social, namely organisational, goals. This view applies the principles and ideas of problem-solving and planning to the organisational context. Therefore, organisational goals and role-goals are hierarchically subordinate to more general goals. In such a model, on one hand, social action is derived from problem-solving and planning: there is no need to hypothesise built-in social goals. On the other, intelligent social action is viewed as both belief- and plan-based: social action is not reduced to imitation and reinforcement laws.

However, some fundamental aspects of social intelligence are still implicit in this view. First, what are the more elementary ingredients, the micro-foundations, of social intelligence? Secondly, the cognitive mechanisms and rules accounting for the generation of social goals and action are not explicit. What are the mechanisms which allow social, and more specifically role-, goals to be generated? Why do agents form role-goals?

3. Individual Social Intelligence

In this section, a model of an autonomous intelligent agent will be presented and shown to be endowed with cognitive ingredients allowing it to answer some fundamental social demands in an adaptive way. The model of the agent that will be discussed here could be formally described as a BDI agent (see Rao and Georgeff, 1991), i.e. as an agent endowed with Beliefs, Desires and Intentions and the capacity to manipulate them. However, further properties of this fundamental agent representation will be identified, which have not yet received an adequate formalisation in the BDI literature.


Figure 5. The autonomous agent.

This model shares some features with the task- and socially-bounded intelligent agent seen above: it is endowed with internal (symbolic) representations and mechanisms of problem-solving and planning. However, the autonomous agent we will speak about now is also provided with mechanisms for generating new goals and selecting external influence (social responsiveness). Let us examine these in some detail.

3.1. Social Responsiveness

Let us start with a model of an autonomous intelligent agent. In figure 5, one [1] fundamental ingredient of such an architecture is displayed, where B stands for beliefs, B-IG stands for built-in goals, and NG for new goals. In our terms, an agent is a self-regulated system, that is, a system endowed with internal criteria (e.g. explicit representations) for acting. A thermostat is a self-regulated system in a cybernetic sense (endowed with a set-point). However, a self-regulated system may be poorly responsive to the environment. A self-regulated but flexible system may change as a function of and in response to the requirements of a changing world. In the models seen above (figures 2–4), agents are allowed to change through learning and evolution, rather than through internal decision.

Three types of agent representation meet the demand for higher flexibility and capability to respond to the environment:

1. Learning systems (learning by reinforcement); essentially, these are stimulus-response systems, either symbolic (e.g. Learning Classifier Systems, cf. Watkins, 1989) or sub-symbolic (e.g. neural nets).

2. Evolving systems (changing by mutation and selection; e.g., Evolutionary Reinforcement Learning). Classifier Systems used for adaptive agents (Holland, 1992) allowed the acquisition of new (social) beliefs, and the emergence of new strategies among agents could be studied (cf. Holland, 1995). For example, the application of evolutionary models takes properties of the agents (their strategies, beliefs, etc.) as mutations to be selected by their advantageous effects.

3. Reasoning systems (changing by knowledge-based reasoning); e.g. AI agents endowed with the capacity to acquire, revise, and reason about their mental states. BDI architectures (cf. Rao and Georgeff, 1992) are allowed to some extent [2] to modify their mental states; i.e., the process from desires/goals to intentions has been analysed. To enhance their flexibility, BDI agents should be provided with what has been called goal dynamics mechanisms (cf. Castelfranchi, 1996), i.e., the capacity to acquire new goals, to change the value of those already existing, and to allow them to be integrated with other types of mental states (e.g., goals and emotions).
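To make the three goal-dynamics capacities concrete, here is a minimal sketch in Python. It is only an illustration of the capacities named above, not a formalisation from the BDI literature; the names (Goal, GoalStore, acquire, revalue, integrate) and the numeric value scale are assumptions introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal with an explicit, revisable value (illustrative only)."""
    content: str
    value: float  # importance attached to the goal; the scale is an assumption

@dataclass
class GoalStore:
    """Minimal sketch of the three goal-dynamics capacities named above."""
    goals: dict = field(default_factory=dict)

    def acquire(self, goal: Goal) -> None:
        # capacity 1: acquire new goals
        self.goals[goal.content] = goal

    def revalue(self, content: str, new_value: float) -> None:
        # capacity 2: change the value of a goal already held
        if content in self.goals:
            self.goals[content].value = new_value

    def integrate(self, content: str, emotional_boost: float) -> None:
        # capacity 3: let another type of mental state (here, a stylised
        # emotion) modulate the value of an existing goal
        if content in self.goals:
            self.goals[content].value += emotional_boost
```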

3.1.1. Reasoning Agents: Advantages. Natural agents are often a combination of all the above architectures. But what are the differences among them and the advantages of one over the others? More specifically, while the advantage of evolving systems over learning systems obviously consists in the former's capacity for innovation, what is the advantage of the last type of architecture over the previous two?

1. Proactive adaptation: proactive adaptation is fundamental in order to prevent the (individual) losses implied by learning through reinforcement. Learning systems acquire behavioural patterns through imitation thanks to reinforcement: they discard the behaviours that are not (or are negatively) reinforced, and retain those that are (positively) reinforced. Evolving systems, in their turn, acquire new behavioural patterns thanks to mutation, (re)combination and selection of existing ones in terms of reinforcing effects. Neither is able to predict and adapt to future events, which may or may not be a consequence of one's or others' actions (proactive adaptation). A typical effect of this capacity is responsibility: agents are accountable for (some of) the consequences of their actions because they are (perceived to be) able to foresee them! A system endowed with knowledge-based reasoning capacities, and especially with explicit representations of actions' consequences, may have a good performance in proactive adaptation.

2. Innovation without recombination and elimination. Learning systems' capacity for innovation is poor. This is not the case in evolving systems, which may show innovation by recombination of existing features, provided a sufficient variety is available, and by elimination (selection) of the unfit features. A knowledge-based reasoning system may be endowed with knowledge-based adaptive capacities. Think of adaptive planning (Alterman, 1988). Indeed, adaptive planning is cheaper than selection, and does not require much variety. It does not imply the elimination of existing strategies and rules or the recombination of existing ones, but it adapts old strategies to novel situations and applications. (In principle, an adaptive planner might operate on a one-plan library.) In social life, agents tailor their behaviours, and their goals, to existing social models.

3. Goal-driven innovation: evolving systems allow for random innovation; knowledge-based reasoning systems are allowed to decide to innovate. Knowledge acquisition and revision, on one hand, and goal-generation, on the other, are processes that are decided upon. On the one hand, bounded rationality and one's liability to errors and failures endanger the system to some extent. On the other hand, the local, individual utility of innovation increases: agents do not acquire new patterns or features which, if fit, will be fixed, but those which are needed to solve existing problems.

4. Socially intelligent action: this is the most interesting advantage in the context of the present discussion. In both learning and evolving systems, action is essentially a response learned or evolved to a verified condition. In social terms, this type of action may account for primitive forms of social behaviour, e.g. imitation, predation, etc. These primitive behaviours imply no social intelligence, no specifically social mental capacity or representation. They take as an input an observable state of the world, including the presence of a predator, and give as an output a change of the observable state of the world (including an increased distance of prey from predator). There is no specific processing of social information: the "other" (predator) plays the role of any other event in the natural world, like rain, or the fall of a tree (cf. Castelfranchi, 1998). A socially intelligent action is an action which either takes among its inputs a mental state of another agent (for example its goals), in order to adapt to them, or is aimed at producing as an output a change in the mental state of any given agent (for example, giving a message). In the present terms,

a socially intelligent action is based upon one's capacity to reason about another's mental states (social reasoning).

This is why, in the present terms, an action which computes payoff interdependencies is not yet necessarily social.

In the following, we will try to show that social reasoning is crucial in social action, since it allows agents' flexibility to be combined with their autonomy.

3.1.2. Autonomous Reasoning Agents. Here, we will provide a notion of autonomous agent with reasoning capacity.

An autonomous reasoning agent is one endowed with a more or less complex architecture allowing it to receive and filter inputs from the external (physical and social) world, acquire, modify and abandon internal representations (beliefs and goals) and the values associated with them, manipulate them in an integrated way, and act adaptively.

Which hints can such a concept of autonomy provide to a model of socially situated agents? Autonomy is an intrinsically relational, if not social, notion. Autonomous agents are neither refractory nor slavish: they are responsive, flexible agents, liable to being modified, exposed but not prone to external influence, laid open but not helpless. They are protected by their internal endowment.

Autonomy should be clearly distinguished from self-sufficiency. An autonomous agent may not be able to achieve its own goals, and may depend on others to achieve (a subset of) its goals. In figure 6, a network of interdependence is shown: any arrow starting from one given agent and pointing to another indicates a dependence relationship in which the former agent depends on the latter to achieve one of its goals. The boxed letters stand for symbolic representations of agents' goals. Multiple dependence links are also displayed: for example, agent A and-depends on both P and E to achieve its goal p, while it or-depends on F and G to achieve its goal s. Bilateral and indirect dependence relationships are displayed as well. Which mechanisms, rules or other devices allow these interdependent agents to achieve their goals by means of others' actions and remain autonomous, although limitedly so?


Figure 6. Interdependent agents.
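The dependence network of figure 6 can be given a compact representation. The sketch below is only an illustration under assumptions introduced here (the names DependenceNetwork and depends_on are not the paper's): for each agent and goal it records a list of alternatives (or-dependence), each alternative being a set of agents who are all required (and-dependence).

```python
# Minimal sketch of the interdependence network of figure 6 (illustrative).
# For each (agent, goal) pair we store a list of alternatives (or-dependence);
# each alternative is the set of agents that are all required (and-dependence).
DependenceNetwork = dict[tuple[str, str], list[set[str]]]

network: DependenceNetwork = {
    # A and-depends on both P and E to achieve its goal p ...
    ("A", "p"): [{"P", "E"}],
    # ... while A or-depends on F and G to achieve its goal s.
    ("A", "s"): [{"F"}, {"G"}],
}

def depends_on(net: DependenceNetwork, agent: str, goal: str, other: str) -> bool:
    """True if `agent` depends on `other` (in at least one alternative) for `goal`."""
    return any(other in alternative for alternative in net.get((agent, goal), []))
```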

3.2. Socially Reasoning Agents

Autonomous agents require, by definition, an internal architecture, since they imply internal representations and the capability of manipulating them. Indeed, they need some "filters" allowing them to select external (e.g., social) inputs. As seen in figure 5, autonomous agents are able to form new goals from the interaction between some already existing goal and some (new) beliefs (cf. Dignum and Conte, 1997). Two follow-up questions arise:

1. What are the consequences of this architecture for autonomous agents?

Autonomous goal-generation. An agent is autonomous if whatever goal q it comes to have, there is at least a goal p of that agent to which q is believed by that agent to be instrumental.

2. What are the consequences of the principle of autonomy formulated above for socially situated agents? A special case of goal-generation is goal-adoption.

(Social corollary of autonomous goal-generation) Autonomous goal-adoption. An agent x is socially autonomous if whatever goal q of another agent y that x comes to have, there is at least a goal p of x's to which q is believed by x to be instrumental.

Autonomous adoption is a social filter, which allows socially situated agents to be autonomous. This social filter, which we have called the Goal-Adoption Rule (GAR), can be informally expressed as follows:


IF an agent has a goal, and believes that another agent has a different goal, and also believes that if the other obtains her goal then he will obtain his own,

THEN that agent will want the other to obtain her goal.

Figure 7. The socially autonomous agent.

Therefore, (social) beliefs provide reasons for (social) goals. In figure 7, the goal-adoption rule (GAR) is displayed as a social specification of the goal-generation rule (GGR): a subset of social beliefs (SB), that is, beliefs about other agents' mental states (OM), whether built-in or learned, interacts with existing goals (B-IG) thanks to the GAR, thereby giving rise to a subset of new goals (NG), namely social goals (SG).
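Read operationally, the GAR is a simple inference from social beliefs and existing goals to new (social) goals. The following Python sketch is only an illustrative rendering under assumptions made here (the encoding of instrumentality beliefs as triples and the function name adopt_goals are not the paper's); it anticipates the sofa example of Section 3.4.

```python
# Illustrative rendering of the Goal-Adoption Rule (GAR).
# A belief ("instrumental", q, p) stands for "if the other's goal q is
# achieved, then my own goal p is achieved"; the encoding is an assumption.
def adopt_goals(own_goals: set, others_goals: set, beliefs: set) -> set:
    """Return the subset of others' goals that the agent autonomously adopts."""
    adopted = set()
    for q in others_goals:              # a goal of another agent
        for p in own_goals:             # one of the agent's own goals
            if ("instrumental", q, p) in beliefs:
                adopted.add(q)          # the agent now wants the other
                break                   # to obtain her goal q
    return adopted

# Example: x adopts y's goal "sofa_raised" because x believes that
# achieving it serves x's own goal "pipe_found".
new_social_goals = adopt_goals(
    own_goals={"pipe_found"},
    others_goals={"sofa_raised"},
    beliefs={("instrumental", "sofa_raised", "pipe_found")},
)
assert new_social_goals == {"sofa_raised"}
```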

3.3. Social Reasoning: Reasoning upon Others’ Minds

So far, we have seen that socially situated agents must be endowed with a social filter, a mechanism for selecting external inputs and autonomously generating new, namely social, goals. What is a social goal?

A social goal is a goal which mentions another agent's mental state, i.e. a belief or a goal or an intention, or finally an emotion of the latter

in either a positive (pro-social) or negative (aggressive) or neutral sense. It may mention a would-be mental state (as when x wants to induce a given mental state in y), or an ongoing one (as when x aims at exploiting an action performed by y). It may be addressed and even communicated to y (as when x decides to give help to or attack y), or not (as when x facilitates y's action to take advantage of it for its own interests). In all these cases, x's goal is either based upon some beliefs and predictions about y's mind (its goals, beliefs, plans, and emotions) or is an attempt to manipulate y's mind (or a subset of it), that is, to modify it by introducing or suppressing given representations (beliefs or goals), by modifying their mental status (e.g., activating or de-activating an existing goal, frustrating or achieving a goal pursued by y, etc.), or by intervening on the interaction among existing representations (e.g., modifying the value of credibility of the agent's beliefs).

When x adopts a goal of y, x will form a new goal, i.e. the goal that y obtains its own, or, more explicitly, that the status of such a goal in y's mind (not (yet) achieved) be modified (turned into achieved).

3.4. Planning upon Others' Minds

How much can the mechanism of social autonomy, and the consequent notion of social goal provided above, account for social action? Not very much, indeed. The ingredients identified are necessary but insufficient. They tell us how agents select [3] inputs (requests and commands), and may account for some basic ingredients of important social phenomena (for example, help-giving or aggression). But what about other typical forms of social action, in which agents ask for help, exchange actions and resources, cooperate, negotiate, influence one another, delegate (sub-)tasks to one another, etc.? In most cases, agents not only form goals about others' minds, but plan through others' actions and therefore their minds. Social agents produce plans in which others' actions are included. Indeed, they produce multi-agent plans whenever they ask for help, cooperate, exchange, delegate, etc. A large subset of social action entails the production and accomplishment of multi-agent plans.

For example, when a given agent resorts to someone else's action, two multi-agent sub-plans arise in its mind:

1. Its original plan includes a sub-plan to be accomplished by another agent (cf. figure 8). Suppose x wants to raise a sofa (this is the classical example discussed by the philosophers of mind; for a comprehensive discussion, see Searle, 1990) in order to check whether the pipe he has lost is below it. As x knows, it is impossible to get this done without someone's help, or it is highly inconvenient to do so (the sofa is too heavy and uncomfortable). This belief generates in x a social [4] goal, namely that another agent, or actually any member of a given set of agents (perhaps those who are willing to help and capable of doing so, cf. Miceli et al., 1995), accomplishes a share of the plan, namely raise the sofa under verified conditions (that they both do so at a given time).

Figure 8. A multi-agent plan in one agent's mind.

2. The original multi-agent plan generates as a sub-plan a further multi-agent plan, namely to get y to accomplish its share of P, the global plan (cf. figure 9). Sp2(x) is not always necessary: x needs knowledge about y's mind (see also Castelfranchi and Falcone, 1998) even only to profit from her unintentional help. For example, x may know that y is inclined to accomplish Sp1(y) on her own (she is lifting one side of the sofa to push a carpet underneath). In this case, x may take advantage of y's action, but to do so x must predict y's plan and check y's action. In this case, x is not modifying, but only reasoning upon, y's plan.

Figure 9. A multi-agent plan with an influencing sub-plan.

In other cases, namely when y is not supposed to accomplish Sp1(y) on its own, x must see that

1. y generates Sp1(y): x may choose an indirect way, for example turn up the edge of the carpet so that y (obsessive about order) will sooner or later raise the sofa to set the carpet properly; or x can give y appropriate reasons (for example, telling y that it should check how dusty the floor is underneath); interestingly, x may even obtain y's help by getting y to have a goal for which x will offer its own help!

2. y adopts it: x may choose a direct way, and decide to ask for a higher-level, probably more suitable, collaboration. This leads x to see that (a) y knows P(x): x may decide to communicate to y what x wants y to do, namely Sp1(y) as a means for P(x). But, as we know, this is insufficient since x knows that y is also an autonomous agent, and knowing what x's goals are is not enough for y to adopt them, unless y, a loving spouse, is always inclined to adopt x's goals provided she knows them (even in such a case, she acts out of her autonomous positive disposition towards x). In all other cases, (b) x must give y reasons for such an adoption (for example, telling y that x will return y's help); in this case x plans upon y's social reasoning.
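The sofa example can be summarised as a small plan structure. The sketch below is only an illustration of the decomposition discussed in this subsection; the class names and fields (SubPlan, MultiAgentPlan, influencing) are assumptions introduced here, not the paper's formalism. The global plan P(x) contains a share for y, Sp1(y), and may contain an influencing sub-plan Sp2(x) whose purpose is to get y to generate and adopt Sp1(y).

```python
from dataclasses import dataclass, field

@dataclass
class SubPlan:
    """A share of a multi-agent plan, assigned to one agent (illustrative)."""
    name: str
    agent: str

@dataclass
class MultiAgentPlan:
    """P(x): x's global plan, including shares to be carried out by others."""
    goal: str
    shares: list = field(default_factory=list)        # e.g. Sp1(y), plus x's own share
    influencing: list = field(default_factory=list)   # e.g. Sp2(x), to get y on board

# x's plan to find the pipe: x and y each raise one side of the sofa,
# and x may need an influencing sub-plan Sp2(x) to get y to adopt Sp1(y),
# for instance by giving y reasons (promising to return the help).
plan = MultiAgentPlan(
    goal="pipe_found",
    shares=[SubPlan("raise_sofa_one_side", "x"), SubPlan("raise_sofa_other_side", "y")],
    influencing=[SubPlan("give_y_reasons_to_adopt_Sp1", "x")],
)
```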

3.5. Social Inter-action

Now, our social agent has a multi-agent plan, including some sub-plans to induce the other agent(s) involved to share it in all or in part. Is this sufficient for social interaction to be carried out? Not yet. Suppose y, a loving wife, adopts x's plan, and possibly even x's further goal to find his pipe. While holding the sofa, x (or y) perceives the missing pipe on the bookcase. He then lets the sofa fall heavily down to the floor, possibly on the feet of his partner, and walks to the bookcase. From his point of view, x's action is perfectly suitable: he has no individual reason to persist in his initial intention, since a fundamental condition for individual commitment, namely that the goal in pursuit is not yet realised (see Cohen and Levesque, 1990), is no longer verified. However, x's decision is not acceptable from the point of view of his social commitment (see Castelfranchi, 1995). In fact, x and y are not only individually committed each to execute a share of P(x), but they are also committed to each other. Each is committed before the other to accomplish a share of P(x) (cf. figure 10). As a consequence, neither can drop his commitment until:

1. Each is aware [5] of the other's intention to drop the commitment;
2. Each is aware of the reason for dropping it; and
3. Each releases the other from the social commitment.

This picture accounts for a cooperative plan. In social exchange, each agent is socially committed to reciprocate, to effectively give the partner what was promised (cf. figure 11).

Figure 10. Social commitment in common activity.

Figure 11. Social commitment in exchange.

But what is social commitment, indeed? This is not the forum for a complete analysis of such an important social notion (however, see again Castelfranchi, 1995). For the present exposition, suffice it to say that a social commitment is a special relation which holds between one agent x, another agent y and a given action a: we will say that x is committed to a for y when:

1. x believes that y benefits from x doing a;
2. x acts so that y expects that x will eventually do a (cf. figure 12).

Figure 12. Mutual benefits of social commitment.

These are minimal conditions for commitment. However, social commitment has social consequences which x can predict:

1. Control of efficiency: y will monitor x's action, by checking whether x is intending to keep to his commitment, and by putting pressure on him to do so;

2. Enforcement of commitment: y will exact commitment completion both because it is a beneficiary and because x has given rise to y's expectation; by predicting this, x is bound to fulfil its own promise;

3. Empowerment of beneficiary: in response to a violation of commitment, y's reaction is perceived as justified, legitimate and entitled by y itself, by x, and by potential witnesses. Such a social perception provides an extension to y's personal powers, a sort of external power. Not only will y's protest be stronger, but it will also be shared by others (including x), thereby reducing x's defence capacity (cf. figure 13).
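The two minimal conditions and the three predictable consequences just listed can be summarised in a small record. The sketch below is an illustrative Python reading of this section, with names assumed here (SocialCommitment, is_established, consequences); it is not Castelfranchi's (1995) formal definition.

```python
from dataclasses import dataclass

@dataclass
class SocialCommitment:
    """x is committed to action a for y (an illustrative reading of the text)."""
    x: str                       # the committed agent
    y: str                       # the beneficiary
    action: str                  # the action a
    x_believes_y_benefits: bool  # condition 1
    y_expects_action: bool       # condition 2: x acted so that y expects a

    def is_established(self) -> bool:
        # minimal conditions for the social commitment to hold
        return self.x_believes_y_benefits and self.y_expects_action

    def consequences(self) -> list:
        # objective consequences that x can predict once the commitment holds
        if not self.is_established():
            return []
        return [
            f"{self.y} monitors {self.x} and presses for completion",            # control of efficiency
            f"{self.y} is entitled to exact completion of '{self.action}'",       # enforcement
            f"{self.y}'s reaction to a violation is seen as legitimate by others", # empowerment
        ]
```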

4. Multiple Agent Intelligence

Social commitment significantly enlarges the capacity of autonomous social agents to engage in effective social interaction. It provides grounds for a lot of interesting phenomena; in particular, it gives rise to the most elementary forms of right and entitlement. Thanks to social commitment, important forms of interaction, such as exchange and reciprocation, are effectively carried out.


Figure 13. Objective social power arising from social commitment.

Autonomous agents are likely to cheat their partners and violate their commitments whenever this is convenient. Thanks to its consequences, social commitment reduces the convenience of cheating. This is possible through a bi-directional link between the mind of the agent and its social environment. In order to make social interaction among autonomous agents possible, specific cognitive mechanisms evolved: the capacity to select inputs, to reason and plan upon others and their minds, to form multi-agent plans, to influence others to share them, and finally to commit themselves to carry out common plans. Thanks to the mechanisms of social commitment, beneficiaries are empowered. Their power to exact the expected benefit is significantly incremented by their partner's promise. This emergent social power is independent of the agent's individual power and competence (although it may be positively affected by it): y's entitlement to react to x's cheating is independent of y's personal characteristics. Analogously, others may react on y's behalf even independently of y's effort to get them to do so. Such a generalised reaction is a deterrent against the breaking of commitments. Social commitment gives rise to a special form of right. It is a social source of agents' powers.

Are these ingredients sufficient to account for a more complex set of social phenomena, i.e. collective actions? If that is not the case, what else is needed for a collective action to take place?

Searle and others have answered this question by introducing a special, built-in type of intention, i.e., we-intentions as a mental primitive evolved through the interactional practice. In Castelfranchi and Conte (1996), such an answer is found unwarranted precisely because it is not derived from individual (social) action. Essentially, this answer converges on the hypothesis, discussed earlier in this paper, that pro-social solutions to social dilemmas are hard-wired into the agents. Whether such a disposition has effectively evolved is a hypothesis to be carefully investigated. However, a collective action does not need to be based upon built-in prosocial dispositions, nor upon primitive we-intentions. Collective action [6] can be derived from individual social actions plus some specific cognitive properties, to which we will turn below.

Collective action occurs when each agent in a given set commits to execute its share of a plan to achieve a common goal (cf. figure 14).


Figure 14. Collective commitment and members’ empowerment.

Commitment in such a case is collective (see again Castelfranchi, 1995). Collective commitment seems to produce interesting social and cognitive consequences which should be further analysed and experimentally shown.

What are the specific cognitive ingredients of collective action, if any? In order to answer this question in an incremental way, we must distinguish between group exchange and collective action.

4.1. Group Exchange

Let us take the example of group or generalised exchange, which could be seen as a forerunner of collective action (see the work by Yamagishi and Cook, 1993). Consider an elementary group exchange, in which x adopts y's goal t, y adopts w's goal s, and w adopts x's goal r. Of course, this may be characterised as a mere sequence of exchange interactions. For the sake of simplicity, [7] let us assume that x, y, and w are willing to obtain help from and give help to one another.

1. x knows that if y obtains t, then x will obtain r: by the GAR, x has the goal to adopt [8] y's goal t;
2. y knows that if w obtains s, then y will obtain t: by the above rule, y adopts w's goal s;
3. w knows that if x obtains r, then w will obtain s: by the same rule, w adopts x's goal r.

Apparently, this seems enough for the exchange to occur and succeed. In figure 15, the flat representation of a group exchange structure is shown. In the actual structure, indeed, an intersection between w's and x's MAPs also occurs. Not unlike social exchange, group exchange may be endangered by unpredicted events: at any step, the social reasoning and planning processes of any member of the group may be impaired or fail, thereby endangering the whole group's achievement. But this does not essentially differ from what happens at the social level. In a factual and primitive group exchange, each agent adopts another agent's goal, and commits to do a given action, in order to be reciprocated (by a third party). No fundamental difference exists from the situation described in social exchange.


Figure 15. Group exchange.
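The three-step cycle listed above can be traced with the adoption rule of Section 3.2. The code below is a hedged illustration only (the encoding of each agent's belief as an "if the other's goal, then my goal" pair is an assumption introduced here): applying the GAR around the ring, x adopts y's goal t, y adopts w's goal s, and w adopts x's goal r.

```python
# Illustrative trace of the group exchange cycle x -> y -> w -> x (figure 15).
# own_goal[a] is a's own goal; belief[a] = (others_goal, my_goal) encodes
# "if the other obtains her goal, then I obtain mine" (encoding assumed here).
own_goal = {"x": "r", "y": "t", "w": "s"}
belief = {"x": ("t", "r"), "y": ("s", "t"), "w": ("r", "s")}

adopted = {}
for agent, (others_goal, my_goal) in belief.items():
    # Goal-Adoption Rule: adopt the other's goal because it serves one's own.
    if my_goal == own_goal[agent]:
        adopted[agent] = others_goal

# x adopts t (y's goal), y adopts s (w's goal), w adopts r (x's goal):
assert adopted == {"x": "t", "y": "s", "w": "r"}
```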

However, the group exchange produces some interesting consequences at the level of the agent, both objective and cognitive.

4.1.1. Cognitive Requirements for Group Action: Group Commitment. When a set of agents X establishes an agreed-upon group exchange (the group exchange is no longer only factual, but also intended), members have some specific mental states, i.e.,

1. Group beliefs: each member believes that even if its helper does not coincide with its direct recipient, it is an indirect beneficiary of x's action. Therefore x, as well as any other member of the group, believes that the outcome x obtains from adopting y's goal is actually achieved through a recursive application of social reasoning and planning; in particular, x believes that
(a) there is another agent (w) whose goal
(b) y adopts in order for x to adopt y's goal,
(c) and that w adopts x's goal in order for y to adopt w's goal.
Therefore, in order to obtain reciprocity, and for the goal-generation rule,

2. Group goals: each member adopts another's goal and wants this other to adopt the third agent's goal, etc., in order to obtain reciprocation.

3. Group in-ward [9] commitment. In committing to adopt y's (its direct beneficiary's) goal, x commits also to do so for w's (i.e. its benefactor's) interests. Therefore, w is not only a witness of x's commitment, a mere supporter of y: w is entitled to expect and exact that both its own and y's expectations be fulfilled by x. [10] Group commitment implies that each member is committed to a given action:
(a) for (and before) a given recipient, and
(b) for (and before) all other members: any member of the group is entitled to expect x to fulfil its commitment to achieve y's goal and its commitment to allow the group exchange.


4.1.2. Objective Individual Properties Emerging from Group Action. We will mention two major effects of group action:

1. Group in-ward [11] responsibility: each has a responsibility for the execution of the adopted goal before the adopted agent (social responsibility) and for the exchange to succeed (group responsibility). Members cannot be concerned only with their own interest: if y receives help from x and does not provide help to w, it does w a wrong, but is also responsible for x's damage. Agents who enter a group agreement may be unaware of this implication; still, this is an emergent objective effect of their decision.

2. Group empowerment: this is complementary to responsibility. Precisely because each is responsible for the consequences of its action on all the group members, each member is protected against its fellows' cheating and wrongdoing by the existence (and structural or procedural characteristics) of the group.

4.2. Collective Action

If a group action allows agents to achieve their separate goals, a collective action allows agents to achieve a common goal. This produces further properties at the individual level, both cognitive and objective.

4.2.1. Cognitive Requirements for Collective Action. What is a common goal? Elsewhere (Conte et al., 1991), this was defined as a world state that a set of agents X want to achieve and with regard to which they depend on one another. In order to achieve this goal, X's members need to accomplish a common multi-agent plan (MAP-X), and certainly they commit to it for and before one another. Prima facie, this resembles very closely the lower part of figure 16: all agents commit to a common plan. However, the example discussed below shows that there is a significant difference. While in group exchange the common plan is a means for obtaining reciprocity (in figure 16, see the smaller portions of each sub-plan in each MAP which exceed the common plan), in collective action the opposite is true: reciprocity, if any, is a means to execute a common plan.

Figure 16. Group exchange with group commitment.


Let us consider the example of a car convoy (again, a classical example in the AI literature on teamwork; cf. Cohen and Levesque, 1991). Suppose x, y, and w want to get to their destination together, each driving its own car, but only one of them knows how to get there. They decide that one will take the lead and the others will follow behind.

Apparently, things are quite simple: x commits to take the lead and check that y is behind, y commits to keep in line and check that w is behind, while w commits to keep in line and follow y. Each agent is committed to do something for someone. For example, x must constantly check what y is doing, pull over as soon as it loses sight of y, help y predict or perceive and adapt to x's own moves, and finally adapt its own moves to y's. As we know from group exchange, each one is also committed to do this in a way that does not prevent the others from fulfilling their own commitments. For example, x should not put pressure on y to proceed at a higher speed, but should also consider that y ought to adapt to w's velocity. Analogously, if x arrives at a crossroads while the traffic lights are turning yellow, x should calculate whether not only y but the whole convoy has enough time to cross before the light turns red. In turn, if y has lost sight of w, should y only stop and wait for w, or should it also communicate to x that w got lost? Both, evidently.

Furthermore, as soon as the friends move on, the plan execution reveals new technical difficulties, which turn into further social and cognitive difficulties as well. While in the group exchange each agent was committed to the group in order to obtain reciprocity, in a truly collective action each agent wants the whole plan to be executed, and is committed to act so that it gets done.

Therefore, at least one member has

1. a collective belief, i.e. believes that
(a) a goal p is shared by a set of agents X,
(b) a common plan P for that goal is available,
(c) each member of X depends on all others for P to be executed and p to be achieved (or, which is the same, X are complementary in P for p).

While group exchange arises at the intersection among separate MAPs ("x is reciprocated...", "y is reciprocated...", etc.), a collective action executes one single MAP, possibly (but not necessarily) [12] shared by all members of X: "X execute P for p".

2. a collective goal, i.e.
(a) each member adopts a given sub-task in order to obtain the common goal, and
(b) wants each other member to adopt and execute their sub-tasks in order to obtain the common goal.

While in group exchange each agent adopts another's goal in order to receive adoption from someone else, in collective action adoption is instrumental to a common goal.

3. commitment for the collective (collective commitment), i.e. x commits to
(a) its own sub-task,
(b) the whole plan, and therefore to facilitating any other's task execution (e.g., y's task to follow x), and
(c) any other's commitment (e.g., y's checking of what w is doing).


Thanks to group commitment, each member is bound to adopt another's goal also for the sake of the group. Thanks to collective commitment, each member is bound to execute a sub-task and to facilitate others' task accomplishment.
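The ingredients of collective commitment just described can be summarised in the car-convoy setting. The sketch below is only an illustration under names assumed here (ConvoyMember, collective_commitments); it is not a formalisation from the paper: each member is committed to its own sub-task, to the whole plan, and to facilitating and watching over every other member's task.

```python
from dataclasses import dataclass, field

@dataclass
class ConvoyMember:
    """A participant in the car-convoy collective action (illustrative)."""
    name: str
    sub_task: str                       # the member's own share of the common plan P
    commitments: list = field(default_factory=list)

def collective_commitments(members: list, common_goal: str) -> None:
    """Fill in each member's collective commitments as described in the text."""
    for m in members:
        m.commitments = [f"execute {m.sub_task}",
                         f"support the whole plan for {common_goal}"]
        for other in members:
            if other is not m:
                # commitment to facilitate and watch over the others' tasks
                m.commitments.append(f"facilitate and watch over {other.name}'s task")

convoy = [ConvoyMember("x", "lead and check that y follows"),
          ConvoyMember("y", "follow x and check that w follows"),
          ConvoyMember("w", "follow y")]
collective_commitments(convoy, common_goal="arriving at the destination together")
```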

4.2.2. Objective Properties Emerging from Collective Action. Agents inherit some features of the whole collective:

1. Collective in-ward responsibility. [13] Members are accountable for
(a) the effects of their task-execution on others' task accomplishment, and
(b) its effects on the whole plan execution.
As in group exchange, agents are responsible for the effects of their action on the common plan. However, in collective action, agents are also accountable for
(c) the others' accomplishment of their tasks: any agent in X can be held responsible (by the other members or by a delegating entity) for any sub-task of P. In our car convoy example, if the last driver, for fear of losing sight of the leader, passes the one in the middle and forgets about her, both he and the leader will be held responsible by the unlucky fellow who is left behind. Moreover, suppose y believes that w is behind her while in fact she misperceives him: x will later be held responsible by w, who will probably argue that x never checked whether he was there!

2. Collective obligations (and their cognitive correlates, see below). Given the preceding condition, all participants are equally expected to check and ensure that each sub-task is carried through, either by exercising control over the executors, by facilitating and cooperating with their activity, by substituting for them when required, or finally by delegating someone else to do so. This is not the case in group exchange, where agents do not commit to a common goal.

Two general consequences may be drawn from this initial analysis:

1. Participation in a collective action may enlarge agents' responsibility and increase the number of obligations impinging on them.

2. On the other hand, it strengthens the agents' mutual powers within the collective, and reduces the collective's fragility.

5. Why Bother with Cognitive Complexity?

The analysis carried out in the last two sections shows a rather complex interrelationship between the effects of (social) action and the agents' social and cognitive properties. Given the complexity of the analysis, one may be led to wonder what the use of such an investigation is. Do we need to model, let alone implement, such complexity in order to account for social, collective, and organisational phenomena? Existing work (e.g., Epstein and Axtell, 1996) seems to obtain interesting collective effects from very simple reactive systems. More generally, the notion of swarm intelligence enjoys great popularity among social scientists. Furthermore, when facing the cognitive complexity of higher-level systems, one is tempted to conclude that if swarm intelligence is sufficient to explain organisational phenomena, perhaps a higher-level intelligence is either superfluous or useful only to understand a subset of these phenomena, namely those which evolved in human societies, and, even more specifically, to understand what allows human societies to perform better, to work more efficiently.

I think this conclusion is unwarranted. Undoubtedly, cognitive architectures are not necessary for societies and organisations to exist. Animal societies provide countless examples of mindless organisation. Indeed, even higher-level agents are sometimes involved in merely objective social structures. Therefore, if the purpose is to find out the minimal ingredients for a given population to exhibit some level of observable organisation, one needs no model of a mind, and even less of a cognitive mind.

This paper, however, proceeds from a different purpose and a different perspective. It aims to show the emergence of social and organisational events among autonomous intelligent agents, that is, agents with internal criteria for action. In such a case, emergence (and perhaps social and cultural evolution) proceeds in a bi-directional way:

1. Agents are socially situated, meaning plunged into a set of fundamental objective relationships. This elementary level of organisation (first level social emergence) does not (only) limit the agent's capacity to survive and adapt to its environment, but actually modifies and even increases this capacity, provided the agents are autonomous and flexible enough to respond to social demands.

2. Flexibility and (social) responsiveness lead autonomous agents to evolve given cognitive capacities (first level cognitive emergence) which allow social action to take place (that is, the agent's capacity to filter external social demands and maintain its autonomy while at the same time taking advantage of social resources or avoiding/reducing the costs of social interference).

3. From social action, agents derive further individual properties and capacities which extend their individual powers in the social context and could not be exercised in a non-social context (second level social and collective emergence), and which exercise a pressure on their cognitive capacities (second level cognitive emergence).

This process needs to be taken into account whenever one wants to explain social complexity in systems involving autonomous agents. It is not to be explained in terms of the efficiency of the level of social organisation it allows agents to reach: ant societies are much more efficient than primate societies, with the occasional exception of the human species. The rationale of the process is rather the combination of autonomy with flexibility. Agents are flexible and individually autonomous because they have evolved the capacity to filter external pressures and answer adaptively. At the same time, their actions have effects on the social environment, not always predicted or wanted by the agents, which exercise further pressures on them and on their mental capacities. Social organisations emerge from aggregates of agents with their individual properties, but they also modify agents, providing them with further interactional properties, that is, individual emergent effects of multi-agent contexts which allow agents to produce more complex social and collective phenomena. In this sense, a multiple-agent intelligence can be said to emerge and operate in societies of autonomous, socially responsive agents.
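
By way of illustration only, the bi-directional process outlined above can be rendered as a minimal simulation loop. The following Python sketch is an assumption of this presentation, not a model drawn from the literature: the agent names, the dependence rule, and the way obligations and entitlements are recorded are all illustrative. It merely shows how objective dependence relations (first-level social emergence), a goal-adoption filter (first-level cognitive emergence), and the relational by-products of executed social action (second-level emergence) can feed into one another.

# A minimal sketch, under illustrative assumptions, of the bi-directional
# emergence loop: socially situated agents filter each other's demands through
# their own goals; executed social action leaves behind obligations and
# entitlements that re-enter the agents' social world.

import itertools


class Agent:
    def __init__(self, name, goals, skills):
        self.name = name
        self.goals = set(goals)      # internal criteria for action
        self.skills = set(skills)    # what the agent can bring about
        self.obligations = []        # second-level emergent properties
        self.entitlements = []

    def needs(self):
        """Goals the agent cannot achieve alone: objective dependence
        (first-level social emergence)."""
        return self.goals - self.skills

    def adopts(self, other, goal):
        """Goal-adoption filter (first-level cognitive emergence): adopt the
        other's goal only if the other can, in turn, serve one of our needs."""
        return goal in self.skills and bool(other.skills & self.needs())


def step(agents):
    """One round: executed social action produces objective consequences
    (obligations, entitlements) that neither agent deliberated about."""
    for x, y in itertools.permutations(agents, 2):
        for goal in y.needs():
            if x.adopts(y, goal):
                x.obligations.append((goal, y.name))
                y.entitlements.append((goal, x.name))


if __name__ == "__main__":
    x = Agent("x", goals={"tobacco"}, skills={"ride"})
    y = Agent("y", goals={"ride"}, skills={"tobacco"})
    step([x, y])
    print(x.obligations, x.entitlements)   # [('ride', 'y')] [('tobacco', 'y')]

A further round of such a loop could then feed the newly acquired obligations and entitlements back into the agents' goal sets, which corresponds to the second-level cognitive emergence discussed above.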

6. Conclusions

In this paper, a notion of social intelligence as a property of autonomous but socially situated agents has been proposed. This property results from the interplay of individual social intelligence and some objective effects of social action. The main ingredients of individual social intelligence have been pointed out:

1. Cognitive mechanisms for filtering external inputs (the goal-generation mechanism).
2. The social corollary of this mechanism (the goal-adoption rule) and the consequent formation of individual social goals and plans.
3. The mechanism of social commitment.

These cognitive properties and capacities are necessary to account for the formation of social intentions. However, they are insufficient to explain effective social interaction at both the micro (inter-individual) and macro (collective) levels. For social interaction to be carried through, further mechanisms are needed, such as social and collective commitment. These mechanisms have effects at the social level which go far beyond the intentions, and even the awareness, of individual agents. At the same time, these consequences give rise to additional powers of the agents involved in social and collective action (namely empowerment, entitlement, collective obligations and responsibility, etc.). These powers, which exceed and extend agents' capacities, at the same time allow them to answer adaptively to social demands. What is more, socially situated agents' empowerment represents a form of social intelligence, in the sense that it allows different forms of social and collective action to be carried through rather effectively. However, the theoretical analysis carried out in this paper should be strengthened by empirical evidence, particularly by the findings of simulation studies.
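
To make the three ingredients listed above more concrete, the following schematic Python sketch gives one possible, deliberately crude, reading of them; the request format, the adoption rule, and the way commitments are stored are assumptions of this sketch rather than part of the theory.

# A schematic sketch of (1) a goal-generation filter over external inputs,
# (2) a goal-adoption rule, and (3) social commitment, under simplifying
# assumptions; names and rule contents are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Request:
    sender: str
    goal: str                 # the goal the sender asks the receiver to adopt


@dataclass
class SocialAgent:
    name: str
    own_goals: set = field(default_factory=set)
    adopted_goals: dict = field(default_factory=dict)   # goal -> beneficiary
    commitments: list = field(default_factory=list)     # (goal, creditor)

    def filter_input(self, request):
        """(1) Goal-generation filter: external inputs never become goals
        directly; they must pass an internal criterion."""
        return self.adoption_rule(request)

    def adoption_rule(self, request):
        """(2) Goal-adoption rule (a crude instrumental version): adopt the
        sender's goal only if doing so serves one of the agent's own goals."""
        return f"favour_from:{request.sender}" in self.own_goals

    def commit(self, request):
        """(3) Social commitment: once the adopted goal is communicated to
        its beneficiary, the agent is bound to it."""
        self.adopted_goals[request.goal] = request.sender
        self.commitments.append((request.goal, request.sender))

    def process(self, request):
        if self.filter_input(request):
            self.commit(request)
            return True
        return False


if __name__ == "__main__":
    x = SocialAgent("x", own_goals={"favour_from:y"})
    print(x.process(Request(sender="y", goal="hold_the_sofa")))   # True
    print(x.commitments)                              # [('hold_the_sofa', 'y')]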

The follow-up question is whether and to what extent these additional powers allow specific cognitive properties to emerge at the individual level. It is reasonable to assume that the cognitive counterparts of obligations and rights, responsibility and empowerment, evolve thanks to the effects that social action produces on the social environment. This question is insufficiently addressed so far (but see Conte et al., 1998), and should be investigated extensively.

A theory of social intelligence should account for the complex links between the external inputs to agents' minds and autonomous intelligent social action: in particular, on one hand, between intelligent social action and its effects on the social surroundings, and, on the other, between these social effects and the individual powers of the agents, at both the cognitive and the objective level. To make these links explicit is not only a task for a theory of socially situated action, but for any theory of social intelligence as a multi-level, individual and extra- or super-individual, phenomenon.


Notes

1. It actually implies two sequential filters, for beliefs and goals respectively (Castelfranchi, 1996), but at the same time an integrated processing of mental representations. For brevity, only some aspects of the goal filter will be examined here.

2. Nevertheless, BDI agents are not yet allowed to modify their goals, unless the programmer takes the trouble to modify them (off-line change). Agents' goals are still taken as given in the AI domain, much as preferences are in the rationality community. However, some steps in this direction are being taken (e.g., the dynamics of goal values, cf. Cavedon and Sonenberg, 1998).

3. However, even at this level, things are more complex. When agents select requests for adoption, they need to reconstruct the real request, or the real goal to be adopted. Two follow-up questions arise here:

— How far must they go into the hierarchy of goals of the requester in order to provide useful help? This question, pointing to the classical issue of plan recognition, has been addressed within several AI subfields, especially in the area of human-computer interaction.

— How far can they go into such a hierarchy? This question, comparatively overlooked, points to problems of social conventions and social competence that are more familiar to social scientists.

4. Searle treats this as an example of collective intention. We (see also Conte and Castelfranchi, 1995) prefer to consider it as an example of a multi-agent plan to achieve an individual goal, which generates a social goal. Unlike collective ones, social goals are individual. However, the execution of multi-agent plans gives rise to collective goals.

5. This is the condition usually accepted in the formal literature on commitment (see Cohen and Levesque, 1990; Jennings, 1993, etc.). However, as was shown in (Castelfranchi and Conte, 1996), such a condition is insufficient: suppose that, after having found his pipe, x tells y that there is no further need to hold on to the sofa, and releases his hold. His action is still unacceptable from the social side of the deal, even though y is aware of x's intention.

6. The notion of collective action applies at different levels (see Castelfranchi and Conte, 1996). The weakest form does not even imply that the agents involved are aware of participating in a common activity (for example, agents' actions may be exploited by a central intelligence); it suffices that their actions are integrated into a common plan:

Weak collective action occurs when a given set of agents execute actions that form part of a common plan.

However, such a notion does not do justice to the potentialities and functionalities of collectives. Indeed, agents participating in a common activity can be considered to join collective action in the full sense when they are not only aware of the common goal but each commits to do a share of the plan before all other members of the collective (see figure 15).
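
As a purely illustrative rendering (the predicates and data structures below are assumptions of this sketch, not definitions given in the paper), the distinction can be expressed as two checks: weak collective action only requires that the agents' actions cover a common plan, while full collective action additionally requires awareness of the common goal and mutual prior commitment.

# Illustrative sketch of the two notions in this note, under assumed
# representations: a plan is a set of parts, performed_by maps parts to
# agents, aware_of_goal is the set of agents aware of the common goal, and
# committed_before maps each agent to those before whom it has committed.

def weak_collective_action(plan, performed_by):
    """Every part of the common plan is executed by some agent."""
    return all(part in performed_by for part in plan)


def full_collective_action(plan, performed_by, aware_of_goal, committed_before):
    """Weak collective action, plus awareness of the common goal and
    commitment of each executor before all other members."""
    members = set(performed_by.values())
    return (
        weak_collective_action(plan, performed_by)
        and all(agent in aware_of_goal for agent in members)
        and all(committed_before.get(agent, set()) >= members - {agent}
                for agent in members)
    )


if __name__ == "__main__":
    plan = {"lift_left", "lift_right"}
    performed = {"lift_left": "x", "lift_right": "y"}
    aware = {"x", "y"}
    commitments = {"x": {"y"}, "y": {"x"}}   # each committed before the other
    print(weak_collective_action(plan, performed))                      # True
    print(full_collective_action(plan, performed, aware, commitments))  # True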

7. One of the difficulties agents encounter is how to form an exchange relationship in which one gives help to an agent different from the agent one receives help from. A prior difficulty is how they obtain the necessary information; here, for simplicity, each is endowed with such information.

8. This does not mean that each executes such a goal (for example, they may not be able to do so). Again, to simplify matters, we will assume that x, y, and w know from experience that each is able to help one and receive help from another.
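
A toy rendering of this setting, with the ring of help relations simply hard-coded (following the simplifying assumption above that each agent already knows whom it can help and who can help it), might look as follows; the function names are illustrative only.

# Purely illustrative sketch of the three-party generalized exchange among
# x, y, and w: each gives help to one neighbour and receives help from a
# different one, so no pair exchanges help directly.

HELP_RING = {"x": "y", "y": "w", "w": "x"}   # giver -> receiver


def receives_from(agent):
    """Invert the ring to find each agent's helper."""
    return {receiver: giver for giver, receiver in HELP_RING.items()}[agent]


def is_generalized_exchange(ring):
    """True if every agent gives to one agent and receives from a different
    one, i.e. there is no direct reciprocity."""
    return all(ring[ring[agent]] != agent for agent in ring)


if __name__ == "__main__":
    for agent in HELP_RING:
        print(f"{agent} helps {HELP_RING[agent]}, is helped by {receives_from(agent)}")
    print(is_generalized_exchange(HELP_RING))   # True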

9. Group commitment also includes outward commitment. For example, a given group may exchange resources in order to accomplish a given (sub)task.

10. A subtler question arises: are x's adoption of y's goal and x's commitment before w equivalent? Not really. In the former case, x commits to a given action in y's interest; in the latter, x commits to a given action in w's interest. Obviously, the former interest is instrumental to the latter, and x cannot keep to its commitment before w if it does not keep to its commitment before y. Still, the two do not coincide, and they may conflict with each other, as when x rushes to lend y his car in order for her to be able, before getting to work, to drop w at the shopping centre near the school, where he can buy the pipe tobacco x is in strong need of. Concerned with facilitating w's action, x forgets that one of the car's wheels has been precariously repaired and needs to be replaced, which will certainly oblige him to put up with y's reaction.


11. There is also a responsibility of the group with respect to the external effects of the group action, which we will not consider here.

12. A collective action may be executed even if the whole plan is not known in all its details by its executors: in an orchestra, the individual players may not know the whole symphony.

13. Collective responsibility may also be external: the collective is accountable for its effects on out-members. Indeed, single members are (and consequently feel) somehow protected by the collective responsibility against out-members' reactions, and this somehow compensates for the weight of individual responsibility with regard to the ultimate goal of the collective.

References

Alterman, R. (1988), "Adaptive Planning," Cognitive Science, 12, 393–421.
Axelrod, R. (1984), The Evolution of Cooperation, Basic Books, New York.
Axelrod, R. (1990), "The Emergence of Cooperation Among Egoists," in P.K. Moser (Ed.), Rationality in Action: Contemporary Approaches, Cambridge University Press, Cambridge.
Axelrod, R. (1997), The Complexity of Cooperation, Princeton University Press, Princeton.
Bandura, A. (1977), Social Learning Theory, Prentice Hall, Englewood Cliffs, NJ.
Binmore, K. (1994), Game Theory and the Social Contract: Playing Fair, The MIT Press, Cambridge, MA.
Carley, K.M. and A. Newell (1994), "The Nature of the Social Agent," Journal of Mathematical Sociology, 19(4), 221–262.
Carley, K.M. and M. Prietula (1994), "ACTS Theory: Extending the Model of Bounded Rationality," in K.M. Carley and M. Prietula (Eds.), Computational Organization Theory, Lawrence Erlbaum, Hillsdale, NJ.
Castelfranchi, C. (1995), "Commitments: From Individual Intentions to Groups and Organizations," Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco, CA, June 12–14, AAAI Press/The MIT Press, Menlo Park, CA.
Castelfranchi, C. (1996), "Reasons: Belief Support and Goal-Dynamics," Mathware & Soft Computing, 3(1/2), 233–247.
Castelfranchi, C. (1998), "Modelling Social Action for AI Agents," Artificial Intelligence, 103, 157–182.
Castelfranchi, C. and R. Conte (1996), "Distributed Artificial Intelligence and Social Science: Critical Issues," in G.M.P. O'Hare and N.R. Jennings (Eds.), Distributed Artificial Intelligence: An Introduction, Wiley, Sixth Generation Computer Technology Series.
Castelfranchi, C. and R. Conte (1998), "Limits of Economic Rationality for Agents and MA Systems," Robotics and Autonomous Systems, Special Issue on Multi-Agent Rationality, 24, 127–139.
Castelfranchi, C. and R. Falcone (1998), "Principles of Trust for MAS: Cognitive Anatomy, Social Importance, and Quantification," Proceedings of the Third International Conference on Multi-Agent Systems (ICMAS'98), Paris, pp. 72–79.
Cavedon, L. and L. Sonenberg (1998), "On Social Commitment, Roles, and Preferred Goals," Proceedings of the Third International Conference on Multi-Agent Systems (ICMAS'98), Paris, pp. 80–88.
Cohen, P.R. and H.J. Levesque (1990), "Intention is Choice with Commitment," Artificial Intelligence, 42, 213–261.
Conte, R. and C. Castelfranchi (1995), Cognitive and Social Action, UCL Press, London.
Conte, R., M. Miceli and C. Castelfranchi (1991), "Limits and Levels of Cooperation: Disentangling Various Types of Prosocial Interaction," in Y. Demazeau and J.P. Mueller (Eds.), Decentralized AI-2, Elsevier, Amsterdam, pp. 147–157.
Conte, R. (1996), "Foundations of Rational Interaction in Cognitive Agents: A Computational Approach," in W. Liebrand and D. Messick (Eds.), Frontiers in Social Dilemmas Research, Springer, Berlin.
Conte, R., C. Castelfranchi and F. Dignum (1998), "Autonomous Norm Acceptance," in J. Mueller (Ed.), Intelligent Agents V, Springer, Berlin.
Cook, K.S. and R.M. Emerson (1978), "Power, Equity and Commitment in Exchange Networks," American Sociological Review, 43, 721–739.


Dignum, F. and R. Conte (1997), "Intentional Agents and Goal Formation," in M. Singh et al. (Eds.), Proceedings of the 4th International Workshop on Agent Theories, Architectures and Languages, Providence, USA.
Edney, J.J. (1980), "The Commons Problem: Alternative Perspectives," American Psychologist, 35, 131–150.
Epstein, J.M. and R.L. Axtell (1996), Growing Artificial Societies: Social Science from the Bottom Up, The Brookings Institution Press, Washington, DC, and The MIT Press, Cambridge, MA.
Flache, A. (1996), "The Double Edge of Networks: An Analysis of the Effect of Informal Networks on Cooperation in Social Dilemmas," PhD Thesis, Thesis Publishers, Amsterdam.
Hardin, G. (1968), "The Tragedy of the Commons," Science, 162, 1243–1248.
Hardin, R. (1982), Collective Action, Johns Hopkins Press, Baltimore.
Hegselmann, R. (1996), "Modelling Social Dynamics by Cellular Automata," in K.G. Troitzsch, U. Mueller, N. Gilbert and J. Doran (Eds.), Social Science Microsimulation, Springer, Heidelberg.
Holland, J.H. (1992), "Complex Adaptive Systems," Daedalus, 121, 17–30.
Holland, J.H. (1995), Hidden Order: How Adaptation Builds Complexity, MIT Press, Reading, MA.
Homans, G.C. (1951), The Human Group, Harcourt, New York.
Homans, G.C. (1974), Social Behavior: Its Elementary Forms, Harcourt, New York.
Jennings, N. (1993), "Commitment and Conventions: The Foundation of Coordination in Multi-Agent Systems," The Knowledge Engineering Review, 8.
Kramer, R.M. and M.B. Brewer (1986), "Social Group Identity and the Emergence of Cooperation in Resource Conservation Dilemmas," in H. Wilke, C. Rutter and D.M. Messick (Eds.), Experimental Studies of Social Dilemmas, Peter Lang, Frankfurt.
Kramer, R.M. and L. Goldman (1995), "Helping the Group or Helping Yourself? Social Motives and Group Identity in Resource Dilemmas," in D.A. Schroeder (Ed.), Social Dilemmas: Perspectives on Individuals and Groups, Praeger, London.
Liebrand, W. and D. Messick (Eds.) (1996), Frontiers in Social Dilemmas Research, Springer, Berlin.
Macy, M. and A. Flache (1995), "Beyond Rationality in Models of Choice," Annual Review of Sociology, 21, 73–91.
Margolis, H. (1982), Selfishness, Altruism and Rationality, Cambridge University Press, Cambridge.
Messick, D.M. (1974), "When a Little 'Group Interest' Goes a Long Way: Social Motives and Union Joining," Organizational Behavior and Human Performance, 12, 331–334.
Miceli, M., A. Cesta and P. Rizzo (1995), "Distributed Artificial Intelligence from a Socio-Cognitive Standpoint: Looking at Reasons for Interaction," AI and Society, 9, 287–320.
Olson, M. (1965), The Logic of Collective Action, Harvard University Press, Cambridge, MA.
Pfeffer, J. (1981), Power in Organizations, Pitman, London.
Rao, A.S. and M.P. Georgeff (1991), "Modelling Rational Agents within a BDI Architecture," Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, CA, pp. 473–485.
Schroeder, D.A. (Ed.) (1995), Social Dilemmas: Perspectives on Individuals and Groups, Praeger, London.
Searle, J.R. (1990), "Collective Intentions and Actions," in P.R. Cohen, J. Morgan and M.A. Pollack (Eds.), Intentions in Communication, MIT Press, Cambridge, MA, pp. 401–415.
Taylor, M. (1987), The Possibility of Cooperation, Cambridge University Press, Cambridge.
Tuomela, R. (1992), "Group Beliefs," Synthese, 91, 285–318.
Watkins, C. (1989), "Learning from Delayed Rewards," PhD Dissertation, Department of Psychology, Cambridge University, UK.
Weibull, J.W. (1996), Evolutionary Game Theory, The MIT Press, Cambridge, MA.
Yamagishi, T. and K.S. Cook (1993), "Generalized Exchange and Social Dilemmas," Social Psychology Quarterly, 56, 235–249.

Rosaria Conte is a cognitive scientist with a background in philosophy and linguistics. She is head of the Division of AI, Cognitive and Interaction Modelling at the Institute of Psychology of Cnr in Rome, and teaches Social Psychology at the University of Siena. She is active in both the Multi-Agent Systems and the Social Simulation communities. Her research interests include agent theory and architecture, multi-agent systems, and agent-based social simulation. She has published six volumes and about seventy conference and journal articles on computational and formal-theoretical models of intelligent social action.