
Page 1:

Paul Bello 1,2, Yingrui Yang 2, Selmer Bringsjord 2,3 & Kostas Arkoudas 2,3

1 Air Force Research Laboratory – Information Directorate

2 Department of Cognitive Science

3 Department of Computer Science

Rensselaer Polytechnic Institute

[email protected] [email protected] [email protected] [email protected]

Towards a Psychology of Rational Agency

Page 2:

Presentation Summary

• Some Questions…

• What Can AI/Computer Science Tell Us?

• What Can Philosophy/Economics Tell Us?

• What Can Psychology Tell Us?

• Is There a Synthesis?

• Humble (Yet Promising) Beginnings…

• Implications and Applications…

Page 3:

Some Questions…

• What is an agent, and what stance should we take on mental attitudes and constructions?

• If we admit these mental representations, what should they look like, and how do they guide behavior of the individual possessing them?

• What happens when agents interact, either cooperatively, or competitively? How do the mental representations of each individual interact holistically?

• CAN WE MODEL ANY OF THIS????

Page 4:

AI and Computer Science

• Taxonomy of agents in Russell & Norvig
– Sensors, effectors, "processing unit". From reflex agents to BOID (a minimal reflex-agent sketch follows below).
• Variety of models
– Bayesian models for belief; decision-theoretic models for intentions and desires.
– Logical models, ranging from the exceedingly simple (propositional calculus) to the exceedingly complicated (multi-modal logics).
• Interaction
– Mostly goal-driven planning, heuristic search, probabilistic inference.
• Models?
– Of course. That's AI's bread and butter.
• Issues
– Not very well informed by the psychological dimension. Lip service paid to philosophy and economic theory. Usually heuristics and short-cuts.
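To make the sensors/effectors/"processing unit" decomposition concrete, here is a minimal sketch of a simple reflex agent in the Russell & Norvig style. The environment, percepts, and actions are invented purely for illustration; this is not from the original slides.

```python
# A minimal simple-reflex agent in the Russell & Norvig style.
# The environment, percepts, and actions are hypothetical.

def sense(environment):
    """Sensor: read the current percept from the environment."""
    return environment["status"]          # e.g. "dirty" or "clean"

def act(environment, action):
    """Effector: apply the chosen action to the environment."""
    if action == "clean":
        environment["status"] = "clean"

def reflex_rule(percept):
    """'Processing unit': a single condition-action rule, no internal state."""
    return "clean" if percept == "dirty" else "noop"

env = {"status": "dirty"}
percept = sense(env)
act(env, reflex_rule(percept))
print(env)  # {'status': 'clean'}
```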

Page 5:

Philosophy and Economics

• Vastly more complex definitions
– Philosophy: intentionality, utilitarianism, etc. The argument between the PSSH (physical symbol system hypothesis) and connectionists.
– Economics: rationality and its four requirements.
• Usually modeled using foundational mathematics
– Philosophical logic, set theory, and representations of uncertainty. Emphasis on remaining true to philosophical roots.
• Interactions
– Utilitarian semantics, Adam Smith, game theory.
• Models?
– Yup.
• Issues
– Overly formal. Not psychologically plausible. Optimality emphasized. Philosophers are not concerned with intractability.

Page 6:

Psychology and Cognitive Science

• Usually left to the more "philosophically oriented." The closest thing is the debate between psychological paradigms.
• Cognitivism: mental representations. MM (mental models), ML (mental logic), concept hierarchies, etc.
• Interactions: behavioral game theory.
• Models?
– A ton. Some with representation, some without. Overall, they capture some general phenomena in human behavior/cognition/mental processing.
• Issues
– Effect-by-effect approach. No systematicity. Not informed by normative theory (most of the time). Hard to capture computationally.

Page 7:

Synthesis

• Yes! Resoundingly so. Need patience to bridge disciplinary gaps and find the non-null intersection.

• Focus of effort: deontic preference logic
– How do we reason about context-dependent situations, including social settings?
– Bridge the gap between individual cognition and group cognition.

• Approach: Narrow the divide between the normative and the psychological as much as possible, then implement the narrowed system in a computational architecture.

Page 8:

General Algorithm

• Leverage RAIR lab expertise in developing machine reasoning systems. Produce a natural deduction system for a deontic preference logic.

• Perform experiments in the (so far empty) domain of the psychology of philosophical logic. Focus on the deontic "distinctions" present in the literature on philosophical logic.

• Rework the natural deduction system so that it is informed by the experimental results.

Page 9:

Hold On a Sec…

• What’s this natural deduction stuff?

• What’s deontic logic about, and how is it useful?

Page 10:

Natural Deduction

• An intuitive framework for suppositional reasoning.
– Assumptions, discharging, and conditional introduction (illustrated below).
– Amenable to the goal-decomposition paradigm pioneered by Herb Simon & co. Means-ends analysis.
– Some more complex systems like Hyperproof become suitable for the all-important "heterogeneous" style of reasoning that we claim is what most folks do.
• Consistent with psychological models of reasoning.
– Braine's mental logic
– Rips's production-style mental logic
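As a concrete illustration (ours, not from the slides), here is the conditional-introduction pattern the first bullet describes: assume p, reason under that supposition, then discharge the assumption to conclude a conditional.

```latex
% Deriving p -> r from p -> q and q -> r by supposition and discharge.
\[
\begin{array}{lll}
1. & p \rightarrow q & \text{premise} \\
2. & q \rightarrow r & \text{premise} \\
3. & \quad p         & \text{assumption (supposition)} \\
4. & \quad q         & \text{modus ponens, 1, 3} \\
5. & \quad r         & \text{modus ponens, 2, 4} \\
6. & p \rightarrow r & \text{conditional introduction, 3--5 (discharges 3)}
\end{array}
\]
```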

Page 11:

Deontic Logic (SDL)

• Logic of norms (obligations).
• Introduces a new modal operator O(p), standing for "it ought to be the case that p."
• Separates possible states of affairs into "deontically ideal" and non-ideal situations.
• Closed under the simple rules (stated formally below):
– Modus ponens: from p → q and p, infer q.
– Necessitation of obligation: from a theorem p, infer O(p).
– Distribution of obligation: O(p → q) gives O(p) → O(q).
– Non-contradictory obligation: it can't be the case that both O(p) and O(¬p).
– Normal propositional rules.
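A compact statement of the system (our rendering of the standard presentation of SDL as the modal logic KD):

```latex
% Standard Deontic Logic (SDL): propositional logic plus the following.
\begin{align*}
\text{(K)}   &\quad O(p \rightarrow q) \rightarrow (O(p) \rightarrow O(q))
             & \text{distribution of obligation} \\
\text{(D)}   &\quad \neg (O(p) \wedge O(\neg p))
             & \text{no contradictory obligations} \\
\text{(Nec)} &\quad \text{from } \vdash p \text{ infer } \vdash O(p)
             & \text{necessitation of obligation}
\end{align*}
```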

Page 12:

Deontic Paradox 1

• You should not insult someone.

• If you insult someone, you should do it in private.

• Insulting someone in private logically implies that you insult them.

• You insult someone

• O(¬i)
• i → O(p)
• p → i
• i
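Why this set is paradoxical in SDL (our reconstruction of the standard derivation): because insulting in private logically implies insulting, necessitation and distribution turn O(p) into O(i), contradicting O(¬i).

```latex
\[
\begin{array}{lll}
1. & O(\neg i)             & \text{premise} \\
2. & i \rightarrow O(p)    & \text{premise} \\
3. & p \rightarrow i       & \text{premise (a logical truth)} \\
4. & i                     & \text{premise} \\
5. & O(p)                  & \text{modus ponens, 2, 4} \\
6. & O(p \rightarrow i)    & \text{necessitation, 3} \\
7. & O(p) \rightarrow O(i) & \text{distribution, 6} \\
8. & O(i)                  & \text{modus ponens, 7, 5: contradicts 1 by (D)}
\end{array}
\]
```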

Page 13:

Dyadic Deontic Logic

• Uh oh. Lots of deontic material talks about “sub-ideal” situations.

• SDL fails miserably on this kind of material. What to do?

• Dyadic deontic logic: O(a|b): if b (is done), then a ought to be (done).

• Basically: in the best of the (possibly sub-ideal) situations where b holds, a should hold as well (sketched in code below).

• Regular SDL is subsumed: O(a|⊤) = O(a).
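A minimal executable sketch of that truth condition, assuming a toy encoding of worlds as valuations with a preference rank. All names here are our own, invented for illustration:

```python
# Minimal sketch of the dyadic deontic truth condition O(a|b):
# "a holds in all of the most-preferred worlds where b holds."
# Worlds, propositions, and ranks are invented for illustration.

def obligatory_given(a, b, worlds):
    """worlds: list of (rank, valuation) pairs; lower rank = more ideal."""
    b_worlds = [(rank, v) for rank, v in worlds if b(v)]
    if not b_worlds:
        return True                      # vacuously obligatory
    best_rank = min(rank for rank, _ in b_worlds)
    return all(a(v) for rank, v in b_worlds if rank == best_rank)

# Paradox-1 flavoured example: i = "insult", p = "insult in private".
worlds = [
    (0, {"i": False, "p": False}),       # ideal: no insult at all
    (1, {"i": True,  "p": True}),        # sub-ideal: insult, but in private
    (2, {"i": True,  "p": False}),       # worst: public insult
]

i = lambda v: v["i"]
p = lambda v: v["p"]
top = lambda v: True                     # the trivially true proposition

print(obligatory_given(lambda v: not i(v), top, worlds))  # O(~i|T) -> True
print(obligatory_given(p, i, worlds))                     # O(p|i)  -> True
```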

Page 14:

Deontic Paradox 2

• A person should not commit murder.

• It ought to be that if someone doesn't commit murder, he is not punished for it.

• If the person commits murder, he should be punished for it.

• A suspect commits murder

• O(¬m)
• O(¬m → ¬p)
• m → O(p)
• m
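This is Chisholm's contrary-to-duty paradox; in SDL the four premises again yield contradictory obligations (our reconstruction of the classic derivation):

```latex
\[
\begin{array}{lll}
1. & O(\neg m)                       & \text{premise} \\
2. & O(\neg m \rightarrow \neg p)    & \text{premise} \\
3. & m \rightarrow O(p)              & \text{premise} \\
4. & m                               & \text{premise} \\
5. & O(\neg m) \rightarrow O(\neg p) & \text{distribution, 2} \\
6. & O(\neg p)                       & \text{modus ponens, 5, 1} \\
7. & O(p)                            & \text{modus ponens, 3, 4: contradicts 6 by (D)}
\end{array}
\]
```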

Page 15:

Deontic Paradox 3

• Usually, you should not insult someone.

• When someone harms the public interest, you should insult him.

• How to model this???

Page 16:

1 & 3 Combined

• Usually you should not insult someone.

• If you insult someone, you should do it in private.

• Insulting someone in private implies that you insult him.

• If someone harms the public interest, then you should insult him.
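Formalized (our gloss, writing h for "harms the public interest"), the combined set mixes a default, a contrary-to-duty conditional, and an exception — exactly the mixture SDL cannot handle:

```latex
\begin{align*}
& O(\neg i)          & \text{usually, do not insult (default)} \\
& i \rightarrow O(p) & \text{contrary-to-duty: if you insult, do it in private} \\
& p \rightarrow i    & \text{insulting in private implies insulting} \\
& h \rightarrow O(i) & \text{exception: insult those who harm the public interest}
\end{align*}
```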

Page 17:

Van der Torre’s DIODE Logic

• Syntax
– Standard propositional language.
– A finite set of violation propositions, one per deontic statement.
– A finite set of exception propositions, one per deontic statement.
– A set of background facts.
– A set of conditional obligations.
• Semantics (a toy version is sketched in code below)
– Basically, ensure that situations are ordered by increasing numbers of violations. Preferred situations have the fewest violations.
– Semantics for exceptions. Normal worlds are separated from "exceptional circumstances."
– Conditions to ensure the proper mixing of these two notions.
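A toy rendering of the violation-ordering idea (our illustration only, not Van der Torre's actual formal semantics): worlds carry a set of violation propositions and an exceptional flag, and preferred worlds are the normal ones with the fewest violations.

```python
# Toy sketch of DIODE-style violation ordering. World names, violation
# propositions V1/V2, and the exceptional flag are illustrative only.

def preferred(worlds):
    """worlds: list of dicts with 'violations' (set) and 'exceptional' (bool)."""
    normal = [w for w in worlds if not w["exceptional"]] or worlds
    fewest = min(len(w["violations"]) for w in normal)
    return [w for w in normal if len(w["violations"]) == fewest]

worlds = [
    {"name": "no insult",        "violations": set(),        "exceptional": False},
    {"name": "private insult",   "violations": {"V1"},       "exceptional": False},
    {"name": "public insult",    "violations": {"V1", "V2"}, "exceptional": False},
    {"name": "insult harm-doer", "violations": set(),        "exceptional": True},
]

for w in preferred(worlds):
    print(w["name"])   # -> "no insult"
```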

Page 18:

Solutions to Deontic Paradoxes

[Diagrams: DIODE models resolving the paradoxes. Left: the insult paradoxes (background facts F = {i}), with worlds ordered by violations V1, V2 and partitioned into normal and exceptional worlds via the exception proposition Ex1. Right: the murder/punishment paradox (background facts F = {m}), with worlds ordered by violations V1, V2.]

Page 19:

Partial Taxonomy

• Obligations (glossed in dyadic notation below)
– Unconditional vs. conditional (the contrary-to-duty, CTD, problem)
– Normal vs. exceptional (defeasibility vs. CTD)
– Prima facie vs. conditional (overriding conditions)
– …
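One way to read these distinctions in the dyadic notation introduced earlier (our gloss, not the slide's):

```latex
\begin{align*}
\text{unconditional:}     &\quad O(a \mid \top) \\
\text{conditional (CTD):} &\quad O(a \mid b), \ b \text{ itself violating some other obligation} \\
\text{exceptional:}       &\quad O(a \mid b), \ b \text{ an exceptional circumstance} \\
\text{prima facie:}       &\quad O(a \mid \top), \text{ defeasible by overriding conditions}
\end{align*}
```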

Page 20:

A Sample ND Schema

[Diagram: a sample natural deduction schema over DIODE models for the insult example, with background facts F = {i} and violation propositions V1, V2, concluding O(i).]

Page 21:

Decisions, Decisions…

• Well, we’ve talked about reasoning, and even about preference…

• What about decisions?
– Where's the probability?

• Let’s have a look at some new interesting work…

Page 22:

Kahneman/Tversky 1

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

Program A: 200 people will be saved.

Program B: 600 people will be saved with probability 1/3 and 0 people will be saved with probability 2/3.

Which of the two programs would you favor?

Page 23:

Kahneman/Tversky 2

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

Program C: 400 people will die.

Program D: 0 people die with 1/3 probability and 600 people die with 2/3 probability.

Which of the two programs would you favor?
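The two slides present the same gamble under opposite frames; a quick arithmetic check (ours) shows that every program has the same expected outcome, so any preference reversal between the frames is a pure framing effect:

```python
# Expected number of people saved under each program (600 at risk).
# Programs A/B and C/D are the same gambles under opposite framings.

expected_saved = {
    "A": 200,                          # certain: 200 saved
    "B": (1/3) * 600 + (2/3) * 0,      # risky: 1/3 chance all 600 saved
    "C": 600 - 400,                    # certain: 400 die = 200 saved
    "D": (1/3) * (600 - 0) + (2/3) * (600 - 600),  # risky, loss frame
}

print(expected_saved)  # every program has an expected 200 people saved
```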

Page 24:

Bello/Yang 1

Page 25:

Bello/Yang 2

Page 26:

Kahneman/Tversky Results

Page 27:

Bello/Yang Results

?

However, it wouldn't be unreasonable to expect results analogous to Kahneman and Tversky's, yet explainable without resorting to traditional decision-theoretic devices.

Page 28:

Implications and Applications

• Implications: Next-Generation Logic-Based AI
– A synthesis of philosophical, psychological, and computational dimensions for higher-order cognitive function.
– Reasoning and decision-making!
– MARMML as an embodiment and a test-bed.
• Application: RASCALS for…
– Third-Generation Wargaming
– Intelligence Analysis
– Educational Technologies

Page 29:

Wargaming: A Cognitive Approach

[Diagram: a game environment (resource management, terrain model, physics, opportunism, infrastructure environment, urban models, entity interdependence) exchanging input and output with a cognitive system (strategic decisions, long-term planning, human and machine reasoning, computational cognitive modeling, perception & action, decision problem formation, high-level and low-level tactics, knowledge, ethical norms).]

Page 30:

SLATE

Page 31:

Educational Technology