74.419 Artificial Intelligence: Intelligent Agents 1 (Russell and Norvig, Ch. 2)

Page 1

74.419 Artificial Intelligence

Intelligent Agents 1

Russell and Norvig, Ch. 2

Page 2

Outline

Agents and environments
Rationality
PEAS (Performance measure, Environment, Actuators, Sensors)
Environment types
Agent types

Page 3

Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

A human agent has, for example: eyes, ears, and other organs as sensors; hands, legs, mouth, and other body parts as actuators.

A robotic agent has, for example: cameras and infrared range finders as sensors; various motors as actuators.

Page 4

Agent and Environment

Page 5

The Vacuum-Cleaner Mini-World

Environment: squares A and B
Percepts: location and status, e.g., [A, Dirty]
Actions: Left, Right, Suck, and No-op

Page 6

The Vacuum-Cleaner Mini-World

World State                Action
[A, Clean]                 Right
[A, Dirty]                 Suck
[B, Clean]                 Left
[B, Dirty]                 Suck
[A, Dirty], [A, Clean]     Right
[A, Clean], [B, Dirty]     Suck
[A, Clean], [B, Clean]     No-op
...                        ...

Page 7

Agent Function

The agent function maps from percept histories to actions:

  f: P* → A

An agent is completely specified by the agent function mapping percept sequences to actions.

The agent program runs on the physical architecture to produce f:

  agent = architecture + program

Page 8

The Vacuum-Cleaner Mini-World

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left

Does not work this way. Need full state space (table) or memory.
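For concreteness, the same reflex program as a small runnable Python sketch (the percept format [location, status] and the action names follow the slide; everything else is illustrative):

# Python sketch of REFLEX-VACUUM-AGENT. Percepts are (location, status)
# pairs; the agent looks only at the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left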

Page 9

The Vacuum-Cleaner Mini-World

World State                Action
[A, Clean]                 Right
[A, Dirty]                 Suck
[B, Clean]                 Left
[B, Dirty]                 Suck
[A, Dirty], [A, Clean]     Right
[A, Clean], [B, Dirty]     Suck
[B, Dirty], [B, Clean]     Left
[B, Clean], [A, Dirty]     Suck
[A, Clean], [B, Clean]     No-op
[B, Clean], [A, Clean]     No-op

Page 10

Rational Agents

Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
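To make "expected performance" concrete, a minimal sketch: given an outcome distribution for each action (the numbers are invented, purely for illustration), the rational choice is the action with the highest expected score:

# Hypothetical outcome model: action -> list of (probability, score).
outcomes = {
    "Suck":  [(0.9, 10), (0.1, -1)],   # usually cleans, rarely fails
    "Right": [(1.0, 0)],               # moving scores nothing by itself
}

def expected_score(action):
    return sum(p * s for p, s in outcomes[action])

print(max(outcomes, key=expected_score))   # Suck (8.9 vs. 0.0)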

Page 11

Rationality

Rationality ≠ omniscience
  An omniscient agent knows the actual outcome of its actions.

Rationality ≠ perfection
  Rationality maximizes expected performance, while perfection maximizes actual performance.

"Ideal Rational Agent": always does "the right thing".

Page 12

Rationality

The proposed definition requires:
  Information gathering/exploration
    to maximize future rewards
  Learning from percepts
    extending prior knowledge
  Agent autonomy
    compensating for incorrect prior knowledge

Page 13

Rationality

What is rational at a given time depends on:
  Performance measure
  Prior environment knowledge
  Actions
  Percept sequence to date (sensors)

Page 14

Task Environment

To design a rational agent we must first specify its task environment.

PEAS description of the task environment:
  Performance measure
  Environment
  Actuators
  Sensors

Page 15

Task Environment - Example

For example, a fully automated taxi driver.

PEAS description of the environment:
  Performance: safety, destination, profits, legality, comfort, ...
  Environment: streets/freeways, other traffic, pedestrians, weather, ...
  Actuators: steering, accelerating, brake, horn, speaker/display, ...
  Sensors: video, sonar, speedometer, engine sensors, keyboard, GPS, ...
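A PEAS description is just structured data; one possible way to record it in code (a sketch, with field names mirroring the acronym):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # what the agent is scored on
    environment: list   # what it operates in
    actuators: list     # what it can do
    sensors: list       # what it can perceive

taxi = PEAS(
    performance=["safety", "destination", "profits", "legality", "comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerating", "brake", "horn", "speaker/display"],
    sensors=["video", "sonar", "speedometer", "engine sensors", "keyboard", "GPS"],
)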

Page 16

Examples of Agents (Norvig)

Page 17

PEAS

Agent: Medical diagnosis system
Performance measure: healthy patient, minimize costs and lawsuits
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)

Page 18

PEAS

Agent: Part-picking robot
Performance measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

Page 19

PEAS

Agent: Interactive English tutor
Performance measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard

Page 20

Classification of Environment Types

Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic, except for the actions of other agents, then the environment is strategic)

Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Page 21

Environment types

Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multiagent): An agent operating by itself in an environment.

Page 22

Task Environments (Norvig)

Agent design depends on the task environment:

deterministic vs. stochastic vs. non-deterministic
  assembly line vs. weather vs. “odds & gods”

episodic vs. non-episodic
  assembly line vs. diagnostic repair robot, Flakey

static vs. dynamic
  room without vs. with other agents

discrete vs. continuous
  chess game vs. autonomous vehicle

single vs. multi-agent
  solitaire game vs. soccer, taxi driver

fully observable vs. partially observable
  video camera vs. infrared camera - colour?

Page 23

Infrared Picture of an Unpleasant Situation

from www.indigosystems.com

Page 24

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??
Deterministic??
Episodic??
Static??
Discrete??
Single-agent??

Page 25

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??
Deterministic??
Episodic??
Static??
Discrete??
Single-agent??

Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.

Page 26

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??
Episodic??
Static??
Discrete??
Single-agent??

Fully vs. partially observable: an environment is fully observable when the sensors can detect all aspects that are relevant to the choice of action.

Page 27

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??
Episodic??
Static??
Discrete??
Single-agent??

Deterministic vs. stochastic: if the next environment state is completely determined by the current state and the executed action, then the environment is deterministic.

Page 28

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??
Static??
Discrete??
Single-agent??

Deterministic vs. stochastic: if the next environment state is completely determined by the current state and the executed action, then the environment is deterministic.

Page 29

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??
Static??
Discrete??
Single-agent??

Episodic vs. sequential: In an episodic environment, the agent’s experience can be divided into atomic steps, where the agent perceives and then performs a single action. The choice of action depends only on the episode itself.

Page 30

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??
Discrete??
Single-agent??

Episodic vs. sequential: In an episodic environment, the agent’s experience can be divided into atomic steps, where the agent perceives and then performs a single action. The choice of action depends only on the episode itself.

Page 31

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??
Discrete??
Single-agent??

Static vs. dynamic: If the environment can change while the agent is choosing an action, the environment is dynamic. It is semi-dynamic if the agent's performance score changes even when the environment remains the same.

Page 32

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??            YES         YES          SEMI                NO
Discrete??
Single-agent??

Static vs. dynamic: If the environment can change while the agent is choosing an action, the environment is dynamic. It is semi-dynamic if the agent's performance score changes even when the environment remains the same.

Page 33

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??            YES         YES          SEMI                NO
Discrete??
Single-agent??

Discrete vs. continuous: This distinction can be applied to the state of the environment, the way time is handled, and the percepts/actions of the agent.

Page 34

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??            YES         YES          SEMI                NO
Discrete??          YES         YES          YES                 NO
Single-agent??

Discrete vs. continuous: This distinction can be applied to the state of the environment, the way time is handled, and the percepts/actions of the agent.

Page 35

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??            YES         YES          SEMI                NO
Discrete??          YES         YES          YES                 NO
Single-agent??

Single vs. multi-agent: Does the environment contain other agents who are also maximizing some performance measure that depends on the current agent’s actions?

Page 36

Environment types

                    Solitaire   Backgammon   Internet shopping   Taxi
Observable??        FULL        FULL         PARTIAL             PARTIAL
Deterministic??     YES         NO           YES                 NO
Episodic??          NO          NO           NO                  NO
Static??            YES         YES          SEMI                NO
Discrete??          YES         YES          YES                 NO
Single-agent??      YES         NO           NO                  NO

Single vs. multi-agent: Does the environment contain other agents who are also maximizing some performance measure that depends on the current agent’s actions?

Page 37

Examples of Environment Types

                   Chess with clock   Chess w/o clock   Taxi driving
Fully observable   Yes                Yes               No
Deterministic      Strategic          Strategic         No
Episodic           No                 No                No
Static             Semi               Yes               No
Discrete           Yes                Yes               No
Single agent       No                 No                No

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Page 38

Environment types

The simplest environment is fully observable, deterministic, episodic, static, discrete, and single-agent.

Most real situations are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Page 39

Agent types

How does the inside of the agent work?
  Agent = architecture + program

All agents have the same skeleton:
  Input = current percepts
  Output = action
  Program = manipulates input to produce output

Note the difference from the agent function.

Page 40

Agent types

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequence

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

This approach is doomed to failure.
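A direct Python transcription of the program above (a sketch; the tiny table covers only a few vacuum-world percept sequences, which is exactly why the approach is doomed: a complete table needs one entry per possible percept sequence):

# Python sketch of TABLE-DRIVEN-AGENT: the table is indexed by the
# entire percept sequence seen so far, as in the pseudocode.
percepts = []   # static: the percept sequence, initially empty

table = {       # illustrative fragment of the (huge) full table
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

def table_driven_agent(percept):
    percepts.append(percept)               # append percept to percepts
    return table.get(tuple(percepts))      # action <- LOOKUP(percepts, table)

print(table_driven_agent(("A", "Dirty")))   # Suck
print(table_driven_agent(("A", "Clean")))   # Right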

Page 41

Agent types

Four basic kinds of agent programs will be discussed:
  Simple reflex agents
  Model-based reflex agents
  Goal-based agents
  Utility-based agents

All these can be turned into learning agents.

Page 42

Simple Reflex Agents

Select actions on the basis of the current percept only, e.g. the vacuum agent.

Large reduction in possible percept/action situations (next page).

Implemented through condition-action rules:
  If dirty then suck

Page 43

Simple Reflex Agent – Example (Nilsson)

Robot in Maze
• perceives 8 squares around it
• low-level percept: can the robot move to a square or not
• higher-level percept: 2-unit segments
• 4 basic actions: left (west), right (east), up (north), down (south)
• task is to move along a border
• no 'tight' spaces, at least two free squares

Page 44

Simple Reflex Agent - Example (Nilsson)

Note: The description of the left bottom agent seems to be wrong. This agent will walk clockwise along the outside wall.

Note: The description of the left bottom agent seems to belong to this agent. It will walk counterclockwise around the object.

Page 45

Simple Reflex Agent - Example

Behaviour Routines

If x1=1 and x2=0 then move right
If x2=1 and x3=0 then move down
If x3=1 and x4=0 then move left
If x4=1 and x1=0 then move up
else move up
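The same routines as ordered condition-action rules in Python (a sketch; x1..x4 are assumed to be the binary features computed from the robot's percepts, and the rules are tried in order):

# Sketch of the border-following behaviour. x is a dict with the
# binary features x1..x4; the first matching rule fires.
def maze_action(x):
    if x["x1"] == 1 and x["x2"] == 0:
        return "right"   # east
    if x["x2"] == 1 and x["x3"] == 0:
        return "down"    # south
    if x["x3"] == 1 and x["x4"] == 0:
        return "left"    # west
    if x["x4"] == 1 and x["x1"] == 0:
        return "up"      # north
    return "up"          # default case

print(maze_action({"x1": 1, "x2": 0, "x3": 0, "x4": 0}))   # right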

Page 46

Simple Reflex Agent - Example

Page 47

Simple Reflex Agents

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Will only work if the environment is fully observable. Otherwise infinite loops may occur.
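In Python, with the vacuum-world rules plugged in as the rule set (a sketch; INTERPRET-INPUT is trivial here because the percept already describes the state):

# Sketch of SIMPLE-REFLEX-AGENT: rules are (condition, action) pairs,
# tried in order; the first condition that matches fires.
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]

def simple_reflex_agent(percept):
    state = percept                        # INTERPRET-INPUT(percept)
    for condition, action in rules:        # RULE-MATCH(state, rules)
        if condition(state):
            return action                  # RULE-ACTION[rule]

print(simple_reflex_agent(("B", "Dirty")))   # Suck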

Page 48

The Vacuum-Cleaner Mini-World

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left

Does not work this way. Need full state space (table) or memory.

Page 49

Model/State-based Agents

To tackle partially observable environments, maintain internal state.

Over time, update the state using world knowledge:
  How does the world change?
  How do actions affect the world?
This amounts to a model of the world.

Page 50

Model/State-based Agents

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          (action, the most recent action)

  state ← UPDATE-STATE(state, (action,) percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
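A vacuum-world sketch of the idea: the internal state remembers which squares are known to be clean, so the agent can choose No-op once both are clean (the world-model update, "Suck cleans the current square", is an assumption made for the illustration):

# Sketch of a model-based reflex agent for the two-square vacuum world.
state = {"A": None, "B": None}   # internal model: known status per square
last_action = None               # the most recent action

def reflex_agent_with_state(percept):
    global last_action
    location, status = percept
    state[location] = status                 # UPDATE-STATE from the percept
    if status == "Dirty":
        action = "Suck"
        state[location] = "Clean"            # model: Suck cleans the square
    elif all(v == "Clean" for v in state.values()):
        action = "No-op"                     # both squares known clean
    else:
        action = "Right" if location == "A" else "Left"
    last_action = action
    return action

print(reflex_agent_with_state(("A", "Clean")))   # Right
print(reflex_agent_with_state(("B", "Dirty")))   # Suck
print(reflex_agent_with_state(("B", "Clean")))   # No-op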

Page 51

Goal-based Agents

The agent needs a goal to know which situations are desirable.

Things become difficult when long sequences of actions are required to reach the goal. This is typically investigated in search and planning research.

Major difference: the future is taken into account.

More flexible, since knowledge is represented explicitly - to a certain degree - and can be manipulated.
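A hedged sketch of the difference: instead of reacting, a goal-based agent searches forward through a transition model (here a toy vacuum-world model, invented for the illustration) until the goal test is satisfied, then commits to the first action of the found plan:

from collections import deque

# Toy transition model: state = (location, dirt_A, dirt_B).
def successors(state):
    loc, a, b = state
    yield "Right", ("B", a, b)
    yield "Left", ("A", a, b)
    yield "Suck", (loc, a and loc != "A", b and loc != "B")

def goal(state):
    return not state[1] and not state[2]   # no dirt anywhere

def goal_based_action(start):
    # Breadth-first search for an action sequence reaching the goal,
    # then return the plan's first action (the future is taken into account).
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan[0] if plan else "No-op"
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))

print(goal_based_action(("A", True, True)))   # Suck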

Page 52

Utility-based Agents

Certain goals can be reached in different ways; some ways are better, i.e. have a higher utility.

A utility function maps a (sequence of) state(s) onto a real number.

Improvement on goal setting:
  selecting between conflicting goals
  selecting appropriately between several goals, based on likelihood of success
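A minimal sketch of utility-based selection (the routes, probabilities, and utility numbers are all invented): each way of reaching the goal is scored by its utility weighted by its likelihood of success, and the agent picks the maximum:

# Sketch: choosing between two ways of reaching the same goal.
# Each entry: action -> (probability of success, utility on success).
candidates = {
    "fast_route": (0.70, 10),   # quicker but riskier
    "safe_route": (0.95, 8),    # slower but reliable
}

def expected_utility(action):
    p, u = candidates[action]
    return p * u

print(max(candidates, key=expected_utility))   # safe_route (7.6 > 7.0)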

Page 53

Learning Agents

All previous agent programs describe methods for selecting actions, yet this does not explain the origin or development of these programs.

Learning mechanisms can be used for this.

The advantage is the robustness of the program towards unknown environments.

Page 54

Learning Agents

Learning element: introduces improvements in the performance element.

Critic: provides feedback on the agent's performance, based on fixed performance standards.

Performance element: selects actions based on percepts; corresponds to the previous agent programs.

Problem generator: suggests actions that will lead to new and informative experiences (exploration vs. exploitation).
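A structural sketch of how the four components could interact (the value-update rule, the exploration rate, and the toy critic are placeholders, not anything prescribed by the slide):

import random

class LearningAgent:
    # Performance element + learning element + problem generator;
    # the critic is passed in as a feedback function.
    def __init__(self, actions):
        self.actions = actions
        self.value = {a: 0.0 for a in actions}   # learned action values

    def performance_element(self):               # exploit what was learned
        return max(self.actions, key=self.value.get)

    def problem_generator(self):                 # explore something new
        return random.choice(self.actions)

    def learning_element(self, action, feedback):
        self.value[action] += 0.1 * (feedback - self.value[action])

    def step(self, critic, explore=0.1):         # exploration vs. exploitation
        if random.random() < explore:
            action = self.problem_generator()
        else:
            action = self.performance_element()
        self.learning_element(action, critic(action))
        return action

agent = LearningAgent(["Left", "Right", "Suck"])
critic = lambda a: 1.0 if a == "Suck" else 0.0   # toy fixed standard
for _ in range(100):
    agent.step(critic)
print(agent.performance_element())   # typically "Suck"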

Page 55

Robotic Sensors

(digital) camera
infrared sensor
range finders, e.g. radar, sonar
GPS
tactile (whiskers, bump panels)
proprioceptive sensors, e.g. shaft decoders
force sensors
torque sensors

Page 56

Robotic Effectors

‘limbs’ connected through joints; degrees of freedom = number of directions in which a limb can move (incl. rotation axes)

drives: wheels (land), propellers, turbines (air, water)

driven through electric motors, pneumatic (gas), or hydraulic (fluid) actuation

statically stable vs. dynamically stable