
Intelligent Agents

Christian Jacob

Table of Contents

1 Defining Agents
2 How Agents Should Act
  2.1 Mapping from Percept Sequences to Actions
  2.2 Autonomy
3 Designs of Intelligent Agents
  3.1 Architecture and Program
  3.2 Agent Programs
  3.3 Simple Lookup?
  3.4 Example — An Automated Taxi Driver
4 Types of Agents
  4.1 Simple Reflex Agents
  4.2 Agents that Keep Track of the World
  4.3 Goal-Based Agents
  4.4 Utility-Based Agents
5 Environments
  5.1 Properties of Environments
  5.2 Environment Programs


1 Defining Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.

[Diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its effectors.]
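As a minimal illustration in Python (a sketch, not part of the slides; the class and example names are illustrative assumptions), an agent is simply something that maps each incoming percept, delivered by its sensors, to an action carried out by its effectors:

    # Sketch: an agent maps percepts (from sensors) to actions (for effectors).
    class Agent:
        def program(self, percept):
            """Return an action for the given percept; concrete agents override this."""
            raise NotImplementedError

    class BrakingAgent(Agent):
        """Toy agent anticipating the condition-action rule used later in these slides."""
        def program(self, percept):
            return "initiate-braking" if percept.get("car_in_front_is_braking") else "do-nothing"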


2 How Agents Should Act

Rational Agent

A rational agent performs the actions that make it most successful, as judged by a performance measure, which determines how and when the agent is evaluated.

What is rational at any given time depends on four things:

• The performance measure that defines the degree of success
• The percept sequence, the complete history of what the agent has perceived so far
• The agent's knowledge about the environment
• The actions that the agent can perform

This leads to a definition of an ideal rational agent ...


Ideal Rational Agent

For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Example: crossing a busy road. Looking before crossing is an action performed purely to gather information, and an agent that crosses without looking is not acting rationally.

Doing actions in order to obtain useful information is an important part of rationality.

The notion of an agent is meant to be a tool for analyzing systems.


2.1 Mapping from Percept Sequences to Actions

Any particular agent can be described by making a table of the action it takes in response to each possible percept sequence. Such a list is called a mapping from percept sequences to actions.

However, this does not mean that we have to create an explicit table with an entry for every possible percept sequence (compare the square root example below).

Percept x    Action z
1.0          1.000000
1.1          1.048808
1.2          1.095445
1.3          1.140175
1.4          1.183215
1.5          1.224744
1.6          1.264911
1.7          1.303840
1.8          1.341640
1.9          1.378404
...

function SQRT( x )
    z := 1.0                          /* initial guess */
    repeat until |z^2 - x| < 10^-15
        z := z - (z^2 - x) / (2 z)
    end
    return z
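The same mapping rendered in Python (a sketch, not part of the slides): a few lines of Newton iteration stand in for the unbounded lookup table of square roots above.

    # Sketch: Newton's method replaces an infinite percept-to-action table.
    def sqrt(x, tolerance=1e-15):
        z = 1.0                                  # initial guess
        while abs(z * z - x) >= tolerance:       # repeat until |z^2 - x| < tolerance
            z = z - (z * z - x) / (2 * z)        # Newton update
        return z

    # sqrt(1.5) -> 1.224744..., matching the table entry for x = 1.5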


2.2 Autonomy

Assume an agent's actions are based completely on built-in knowledge, so that it need pay no attention to its percepts. Such an agent clearly lacks autonomy.

An agent's behavior can be based on both its
• own experience and
• built-in knowledge.

A system is autonomous to the extent that its behavior is determined by its own experience.

Agents should therefore be provided with
• initial knowledge (compare animal reflexes) and
• the ability to learn.


3 Designs of Intelligent Agents

AI aims to design agent programs: functions that implement the agent mapping from percepts to actions. The computing device that we assume this program runs on is referred to as the architecture.

3.1 Architecture and Program

The architecture might include special-purpose hardware (camera, microphone, etc.).

The software might provide a degree of insulation between the raw computer hardware and the agent program, enabling programming at a higher level.

agent = architecture + program

What matters is not the distinction between “real” and “artificial” environments, but the complexity of the relationship among the behavior of the agent, the percept sequence generated by the environment, and the goals that the agent is supposed to achieve.


Percepts and Actions for a Selection of Agent Types

Agent Type                      | Percepts                              | Actions                                    | Goals                             | Environment
Medical diagnosis system        | Symptoms, findings, patient's answers | Questions, tests, treatments               | Healthy patient, minimize cost    | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color    | Print a categorization of scene            | Correct categorization            | Image from orbiting satellite
Part-picking robot              | Pixels of varying intensity           | Pick up parts and sort into bins           | Place parts in correct bins       | Conveyor belt with parts
Refinery controller             | Temperature, pressure readings        | Open, close valves; adjust temperature     | Maximize purity, yield, safety    | Refinery
Interactive English tutor       | Typed words                           | Print exercises, suggestions, corrections  | Maximize student's score on test  | Set of students


3.2 Agent Programs

All agents and agent programs accept percepts from an environment and generate actions.

function SKELETON-AGENT( percept ) returns action
    static: memory, the agent's memory of the world

    memory <-- UPDATE-MEMORY( memory, percept )
    action <-- CHOOSE-BEST-ACTION( memory )
    memory <-- UPDATE-MEMORY( memory, action )
    return action
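A minimal Python rendering of this skeleton (a sketch; the update_memory and choose_best_action helpers are assumptions standing in for whatever a concrete agent supplies):

    # Sketch of the agent skeleton in Python.
    class SkeletonAgent:
        def __init__(self, update_memory, choose_best_action):
            self.memory = None                            # the agent's memory of the world
            self.update_memory = update_memory            # folds percepts/actions into memory
            self.choose_best_action = choose_best_action  # picks an action from memory

        def __call__(self, percept):
            self.memory = self.update_memory(self.memory, percept)
            action = self.choose_best_action(self.memory)
            self.memory = self.update_memory(self.memory, action)
            return action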


Remarks on agent programs:

Percept sequence

• The agent program receives only a single percept as its input.

• It is up to the agent to build up the percept sequence in memory.

Performance measure

• The goal or performance measure is not part of the agent program.

• The performance evaluation is applied externally.


3.3 Simple Lookup?

A lookup table is the simplest possible way of writing an agent program.

It operates by keeping its entire percept sequence in memory and using it to index into a table that contains the appropriate action for every possible percept sequence.

function TABLE-DRIVEN-AGENT( percept ) returns action
    static: percepts, a sequence, initially empty
            table, a table indexed by percept sequences, initially fully specified

    append percept to the end of percepts
    action <-- LOOKUP( percepts, table )
    return action
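The same agent in Python (a sketch; the table is assumed to be a dictionary from complete percept histories, stored as tuples, to actions):

    # Sketch of a table-driven agent in Python.
    class TableDrivenAgent:
        def __init__(self, table):
            self.table = table        # dict: tuple of percepts -> action, fully specified
            self.percepts = []        # the percept sequence, initially empty

        def __call__(self, percept):
            self.percepts.append(percept)
            return self.table[tuple(self.percepts)]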


Why is this TABLE-DRIVEN-AGENT proposal doomed to failure?

• Table size: The table needed for something as simple as an agent that can only play chess would have about 35^100 entries.

• Time to build: It would take an enormous amount of time to build complete tables.

• Lack of autonomy: The agent has no autonomy at all, because the calculation of best actions is entirely built-in.

If the environment changed in some unexpected way, the agent would be lost.

• Lack or infeasibility of learning: Even if the agent were given a learning mechanism, so that it could have a degree of autonomy, it would take forever to learn the right value for all the table entries.


3.4 Example — An Automated Taxi Driver

The full driving task is extremely open-ended. There is no limit to the novel combinations of circumstances that can arise.

First, we have to think about the percepts, actions, goals and environment for the taxi.

Agent Type  | Percepts                                      | Actions                                                                       | Goals                                                  | Environment
Taxi driver | Cameras, speedometer, GPS, sonar, microphone  | Steer, accelerate, brake, talk to passenger, communicate with other vehicles | Safe, fast, legal, comfortable trip, maximize profits  | Roads, other traffic, pedestrians, customers


Performance measures for the taxi driver agent:

• Getting to the correct destination

• Minimizing fuel consumption and wear and tear

• Minimizing the trip time and/or cost

• Minimizing violations of traffic laws

• Minimizing disturbances to other drivers

• Maximizing safety and passenger comfort

• Maximizing profits

Obviously, some of these goals conflict, so there will be trade-offs involved.

Driving environments:

• local roads or highways

• weather conditions

• left or right lane driving

• …


4 Types of Agents

We have to decide how to build a real program to implement the mapping from percepts to action for the taxi driver agent.

Different aspects of driving suggest different types of agent programs.

We will consider four types of agent programs:

• Simple reflex agents

• Agents that keep track of the world

• Goal-based agents

• Utility-based agents


4.1 Simple Reflex Agents

Instead of constructing a lookup table for the percept-action mapping, we can summarize portions of the table by noting certain commonly occurring input/output associations.

This can be accomplished by condition-action rules.

Example:

if car-in-front-is-braking then initiate-braking

Humans (and animals in general) have many such connections,

• some of which are learned responses (e.g., driving) and

• some of which are innate reflexes.


Schematic diagram of a simple reflex agent

[Diagram: percepts flow from the Environment into the agent's Sensors, producing a description of "what the world is like now"; Condition-action rules select "what action I should do now", which the Effectors carry out on the environment.]


A simple reflex agent

function SIMPLE-REFLEX-AGENT( percept ) returns action
    static: rules, a set of condition-action rules

    state  <-- INTERPRET-INPUT( percept )
    rule   <-- RULE-MATCH( state, rules )
    action <-- RULE-ACTION[ rule ]
    return action

Works only if the correct decision can be made on the basis of the current percept.
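In Python this loop might look as follows (a sketch; the rule representation and the interpret_input helper are illustrative assumptions):

    # Sketch of a simple reflex agent in Python. Rules are (condition, action)
    # pairs, where a condition is a predicate over the interpreted state.
    class SimpleReflexAgent:
        def __init__(self, rules, interpret_input):
            self.rules = rules                      # list of (condition, action) pairs
            self.interpret_input = interpret_input  # builds a state description from a percept

        def __call__(self, percept):
            state = self.interpret_input(percept)
            for condition, action in self.rules:
                if condition(state):                # first matching rule fires
                    return action
            return None                             # no rule matched

    # Example rule in the spirit of the slide:
    # (lambda state: state.get("car_in_front_is_braking"), "initiate-braking")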


4.2 Agents that Keep Track of the World

To determine whether a vehicle in front is braking, the agent has to keep the previous frame from the camera so that it can detect when the two red lights at the edge of the vehicle go on or off simultaneously.

Hence, the driving agent will have to maintain some sort of internal state.

Two kinds of knowledge have to be encoded in the agent program:

• Information about how the world evolves independently of the agent.

• Information about how the agent's own actions will affect the world.


Schematic diagram of a reflex agent with internal state

[Diagram: as in the simple reflex agent, but the agent also maintains an internal State, updated with knowledge of "how the world evolves" and "what my actions do", before the Condition-action rules select what action to do now.]


A reflex agent with internal state

function REFLEX-AGENT-WITH-STATE( percept ) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules

    state  <-- UPDATE-STATE( state, percept )
    rule   <-- RULE-MATCH( state, rules )
    action <-- RULE-ACTION[ rule ]
    state  <-- UPDATE-STATE( state, action )
    return action
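The same agent in Python (a sketch; update_state is an assumed helper that folds a new percept or a chosen action into the state description):

    # Sketch of a reflex agent with internal state in Python.
    class ReflexAgentWithState:
        def __init__(self, rules, update_state, initial_state=None):
            self.state = initial_state       # description of the current world state
            self.rules = rules               # list of (condition, action) pairs
            self.update_state = update_state

        def __call__(self, percept):
            self.state = self.update_state(self.state, percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.state = self.update_state(self.state, action)
                    return action
            return None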

However, knowing about the current state of the environment is not always enough to decide what to do.


4.3 Goal-Based Agents

Besides a current state description the agent needs some sort of goal information, which describes situations that are desirable.

The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal.

Achieving the goal may involve

• a single action or

• (long) sequences of actions.

The subfields of AI devoted to finding action sequences that achieve the agent's goals are

• searching and

• planning.

Goal-based agents take the future into consideration and react more flexibly to changes in the environment.
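As an illustration of the search view (a sketch, not from the slides; the actions and result functions are an assumed model of what the agent's actions do), a goal-based agent can plan by breadth-first search for an action sequence that reaches a goal state:

    # Sketch: breadth-first search for an action sequence that achieves a goal.
    from collections import deque

    def plan(start, goal_test, actions, result):
        """actions(state) -> iterable of applicable actions (assumed model)
        result(state, action) -> successor state (assumed model)
        Returns a list of actions from start to a goal state, or None."""
        frontier = deque([(start, [])])
        visited = {start}                       # states must be hashable
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for action in actions(state):
                successor = result(state, action)
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, path + [action]))
        return None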


Schematic diagram of an agent with explicit goals

[Diagram: the agent combines its internal State, knowledge of "how the world evolves" and "what my actions do", and a prediction of "what it will be like if I do action A" with its Goals to decide what action to do now.]


4.4 Utility-Based Agents

Goals alone are not really enough to generate high-quality behavior.

There might be different ways (action sequences) of achieving a specific goal.

If one world state is preferred to another, then it has higher utility for the agent.

Utility is therefore a function that maps a state onto a real number, which describes the associated degree of “happiness”.

A complete specification of the utility function allows rational decisions in two kinds of cases:

• When there are conflicting goals, only some of which can be achieved, the utility function specifies the appropriate trade-off.

• When there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed up against the importance of the goals.
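A compact Python illustration of the second case (a sketch under assumed names: outcomes(action) returns the possible resulting states with their probabilities, and utility(state) returns the associated real number):

    # Sketch: choose the action with the highest expected utility, weighing the
    # likelihood of each outcome against how desirable the resulting state is.
    def best_action(actions, outcomes, utility):
        def expected_utility(action):
            return sum(p * utility(state) for p, state in outcomes(action))
        return max(actions, key=expected_utility)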


Schematic diagram of a complete utility-based agent

[Diagram: as in the goal-based agent, but the prediction of "what it will be like if I do action A" is evaluated by a Utility function ("how happy I will be in such a state") to decide what action to do now.]


5 Environments

5.1 Properties of Environments

• Accessible vs. inaccessible

If an agent's sensory apparatus gives it access to the complete state of the environment, the environment is accessible to that agent.

An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action.

In an accessible environment an agent need not maintain any internal state to keep track of the world.

• Deterministic vs. nondeterministic

In a deterministic environment, the next state of the environment is completely determined by the current state and the actions selected by the agents.

An agent need not worry about uncertainty in an accessible, deterministic environment.

If the environment is inaccessible, however, it may appear to the agent to be nondeterministic.


• Episodic vs. nonepisodic

In an episodic environment, the agent's experience is divided into “episodes”. Each episode consists of the agent perceiving and then acting.

Subsequent episodes do not depend on what actions occur in previous episodes.

In episodic environments the agent does not have to think ahead.

• Static vs. dynamic

A dynamic environment can change while an agent is deliberating.

In static environments, an agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

An environment is called semidynamic if it does not change with the passage of time but the agent's performance score does.

• Discrete vs. continuous

If there are a limited number of distinct, clearly defined percepts and actions we say that the environment is discrete. Chess is discrete. Taxi driving is continuous.


Examples of Environments and their Characteristics

Environment               | Accessible | Deterministic | Episodic | Static | Discrete
Chess with a clock        | Yes        | Yes           | No       | Semi   | Yes
Chess without a clock     | Yes        | Yes           | No       | Yes    | Yes
Poker                     | No         | No            | No       | Yes    | Yes
Backgammon                | Yes        | No            | No       | Yes    | Yes
Taxi driving              | No         | No            | No       | No     | No
Medical diagnosis system  | No         | No            | No       | No     | No
Image-analysis system     | Yes        | Yes           | Yes      | Semi   | No
Part-picking robot        | No         | No            | Yes      | No     | No
Refinery controller       | No         | No            | No       | No     | No
Interactive English tutor | No         | No            | No       | No     | Yes


5.2 Environment Programs

A generic environment program illustrates the basic relationship between agents and environments.

procedure RUN-ENVIRONMENT( state, UPDATE-FN, agents, termination )
    inputs: state, the initial state of the environment
            UPDATE-FN, a function to modify the environment
            agents, a set of agents
            termination, a predicate to test when we are done

    repeat
        for each agent in agents do
            PERCEPT[ agent ] <-- GET-PERCEPT( agent, state )
        end
        for each agent in agents do
            ACTION[ agent ] <-- PROGRAM[ agent ]( PERCEPT[ agent ] )
        end
        state <-- UPDATE-FN( actions, agents, state )
    until termination( state )


Environment Simulator Keeping Track of Agent Performances

procedure RUN-EVAL-ENVIRONMENT( state, UPDATE-FN, agents, termination, PERFORMANCE-FN ) returns scores
    local variables: scores, a vector the same size as agents, all 0

    repeat
        for each agent in agents do
            PERCEPT[ agent ] <-- GET-PERCEPT( agent, state )
        end
        for each agent in agents do
            ACTION[ agent ] <-- PROGRAM[ agent ]( PERCEPT[ agent ] )
        end
        state  <-- UPDATE-FN( agents, state )
        scores <-- PERFORMANCE-FN( scores, agents, state )
    until termination( state )
    return scores
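A Python sketch of this simulator (the agents are assumed to be callables like the agent classes sketched earlier; get_percept, update_fn and performance_fn are supplied by the caller and mirror the pseudocode's GET-PERCEPT, UPDATE-FN and PERFORMANCE-FN; the update step here also receives the chosen actions, as in RUN-ENVIRONMENT):

    # Sketch of the evaluating environment simulator in Python.
    def run_eval_environment(state, update_fn, agents, termination, performance_fn,
                             get_percept):
        scores = [0] * len(agents)                  # one score per agent, all 0
        while True:
            percepts = [get_percept(agent, state) for agent in agents]
            actions = [agent(percept) for agent, percept in zip(agents, percepts)]
            state = update_fn(actions, agents, state)
            scores = performance_fn(scores, agents, state)
            if termination(state):
                return scores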