
Page 1: Artificial Intelligence - Intelligent Agents (Chapter 2)

Page 2: Outline of this Chapter

• What is an agent?
• Rational Agents
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Structure of Intelligent Agents
• Types of Agent Program
• Types of Environments

Page 3: What is an agent?

• Anything that can be viewed as:
  – perceiving its environment through sensors
  – acting upon that environment through actuators
• Human agent:
  – Sensors: eyes, ears, ...
  – Actuators: legs, mouth, and other body parts
• Robotic agent:
  – Sensors: cameras and infrared range finders
  – Actuators: various motors

Page 4: Rational Agents

• Rational agent:
  – An agent that does the right thing based on what it can perceive and the actions it can perform.
  – The right action is the one that causes the agent to be most successful.

Problem: how and when do we evaluate the agent's success?

Rationality depends on four things:
1. Performance measure - defines the degree of success
  – the criteria that determine how successful an agent is.
  – Example: vacuum cleaner - amount of dirt cleaned up, amount of electricity consumed, amount of noise generated, etc.
  – When to evaluate the agent's success? E.g. measure performance over the long run.

Page 5: Rational Agents

[Diagram: the agent program, connected to the environment through sensors (percepts in) and actuators (actions out).]

Page 6: Rational Agents (cont.)

• A rational agent is distinct from an omniscient one (which knows the actual outcome of its actions and acts accordingly).
  – Percepts may not supply all relevant information.
2. Percept sequence
  – Everything that the agent has perceived so far.
3. What the agent knows about the environment
4. The actions that the agent can perform
  – A rational agent maps any given percept sequence to an action.
• Ideal rational agent
  – One that always takes the action that is expected to maximize its performance measure, given the percept sequence it has seen so far and whatever built-in knowledge the agent has.

Page 7: Mapping: percept sequences -> actions

• Mappings describe agents:
  – by making a table of the action the agent takes in response to each possible percept sequence.
• Do we need to create an explicit table with an entry for every possible percept sequence?
• Example: square root function on a calculator
  – Percept sequence: sequence of keystrokes
  – Action: display a number on the screen

Page 8: Autonomy

• An agent's behaviour can be based on both its own experience and the built-in knowledge used in constructing the agent for the environment in which it operates.
• An agent is autonomous if its behaviour is determined by its own experience, rather than by knowledge of the environment built in by the designer.
• An AI agent should have some initial knowledge and the ability to learn.
• An autonomous intelligent agent should be able to operate successfully in a wide variety of environments, given sufficient time to adapt.

Page 9: PEAS

• The design of an agent program depends on PEAS: Performance measure, Environment, Actuators, Sensors.
• We must first specify the setting for intelligent agent design.
• Consider, e.g., the task of designing an automated taxi driver:
  – Performance measure
  – Environment
  – Actuators
  – Sensors

Page 10: PEAS

• Example: the task of designing an automated taxi driver:
  – Performance measure: safe, fast, legal, comfortable trip, maximize profits
  – Environment: roads, other traffic, pedestrians, customers
  – Actuators: steering wheel, accelerator, brake, signal, horn
  – Sensors: cameras, sonar, speedometer, odometer, engine sensors, keyboard
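As a rough illustration (not from the slides), a PEAS description can be recorded as a small data structure; the field names and the PEAS class below are assumptions made for this sketch.

from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment description for an agent design."""
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

# The automated-taxi example from the slide, recorded as data.
taxi_peas = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "odometer", "engine sensors", "keyboard"],
)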

Page 11: Structure of Intelligent Agents

• Agent: humans, robots, softbots, etc.
  The role of AI is to design the agent program.
• Agent program: a function that implements the agent mapping from a percept to an action.
  This program will run on some sort of computing device:
• Architecture: the computing device (computer / special hardware) that makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated.
• The relationship among the above can be summed up as below:
  agent = architecture + program
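A minimal sketch of this separation (the function names are hypothetical): the agent program is a function from a percept to an action, and the architecture is the loop that feeds it percepts and carries out its chosen actions.

def agent_program(percept):
    """Agent program: maps a percept to an action (here, a trivial echo policy)."""
    return f"act-on-{percept}"

def run_architecture(program, environment_percepts):
    """Architecture: delivers percepts from the sensors to the program and
    passes the chosen actions on to the effectors (here, just collected)."""
    actions = []
    for percept in environment_percepts:
        actions.append(program(percept))
    return actions

# agent = architecture + program
print(run_architecture(agent_program, ["obstacle", "clear road"]))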

Page 12: Agent Types

How do we build a program to implement the mapping from percepts to actions?

Five basic types will be considered:
• Table-driven agents use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
• Simple reflex agents respond immediately to percepts.
• Model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept.
• Goal-based agents act so that they will achieve their goals.
• Utility-based agents base their decisions on classical utility theory in order to act rationally.

Page 13: Table-driven agents

• The agent operates by keeping its entire percept sequence in memory.
• It uses the sequence to index into a table of actions (which contains the appropriate action for every possible sequence).

This proposal is doomed to failure:
• The table needed for something as simple as an agent that can only play chess would have about 35^100 entries.
• The agent has no autonomy at all - the calculation of best actions is entirely built in. (If the environment changes, the agent would be lost.)
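A minimal sketch of a table-driven agent, assuming a toy two-square vacuum world; the table keys are entire percept sequences, which is exactly why the approach blows up for anything realistic.

# Table-driven agent: remember the whole percept sequence and look it up in a table.
# Toy vacuum world with squares A and B; keys are tuples of (location, status) percepts.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry would be needed for every possible percept sequence
}

percept_sequence = []

def table_driven_agent(percept):
    """Append the percept to the remembered sequence and index the table."""
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent(("A", "Clean")))   # -> "Right"
print(table_driven_agent(("B", "Dirty")))   # -> "Suck"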

Page 14: Simple reflex agents in schematic form

[Diagram: sensors report "what the world is like now"; condition-action rules map this to "what action I should do now", which is passed to the actuators acting on the environment.]

The condition-action rules allow the agent to make the connection from percept to action.

Page 15: Simple reflex agents

• We can often summarise portions of the lookup table by noting commonly occurring input/output associations, which can be written as condition-action rules:
  – if {set of percepts} then {set of actions}
  – e.g. if it is raining then put up umbrella
• In humans, condition-action rules are both learned responses and innate reflexes (e.g., blinking).
• Correct decisions must be made solely on the basis of the current percept.
• Examples: Is driving purely a matter of reflex? What happens when making a lane change?

Page 16: Simple reflex agents (cont.)

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state  <- INTERPRET-INPUT(percept)
  rule   <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

Find the rule whose condition matches the current situation, and do the action associated with that rule.
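A minimal Python rendering of this pseudocode; the rule set and the interpret_input helper below are placeholders invented for the sketch.

# Condition-action rules: each rule pairs a condition test with an action.
RULES = [
    (lambda state: state == "raining", "put-up-umbrella"),
    (lambda state: state == "dirty",   "suck"),
]

def interpret_input(percept):
    """Placeholder for INTERPRET-INPUT: build a state description from the percept."""
    return percept

def simple_reflex_agent(percept):
    """Pick the action of the first rule whose condition matches the current state."""
    state = interpret_input(percept)
    for condition, action in RULES:
        if condition(state):
            return action
    return "no-op"

print(simple_reflex_agent("raining"))  # -> "put-up-umbrella"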

Page 17: Simple Reflex Agent

[Block diagram: percepts from the environment arrive via sensors, the current state is matched against if-then rules, and the selected action is carried out through the actuators.]

Page 18: Simple Reflex Agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
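The same vacuum agent written as runnable Python (a sketch; the location and action names follow the slide):

def reflex_vacuum_agent(location, status):
    """Simple reflex agent for the two-square vacuum world."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # -> "Suck"
print(reflex_vacuum_agent("A", "Clean"))  # -> "Right"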

Page 19: Model-based reflex agents (reflex agents with state)

The current percept is combined with the old internal state to generate an updated description of the current state.

Page 20: Model-Based Agent

[Block diagram: sensors deliver percepts; the current state is maintained from previous perceptions, a model of how the world changes, and the impact of the agent's own actions; if-then rules then pick the selected action sent to the actuators.]

Page 21: Model-based reflex agents (cont.)

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none

  state  <- UPDATE-STATE(state, action, percept)
  rule   <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

• The agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the simple reflex agent.
• UPDATE-STATE is responsible for creating the new internal state description, interpreting the new percept in the light of existing knowledge about the state; it uses information about how the world evolves to keep track of the unseen parts of the world.

Is it enough to decide what to do by knowing only the current state of the environment?
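A minimal sketch of the model-based version for the vacuum world; the update_state function and the internal model used here are assumptions, not from the slides. The "model" simply remembers which squares are known to be clean, folding in the effect of the previous action.

state = {"location": None, "clean": set()}   # internal model of the vacuum world
last_action = None

def update_state(state, action, percept):
    """Placeholder UPDATE-STATE: fold the previous action and new percept into the model."""
    location, status = percept
    state["location"] = location
    if status == "Clean" or action == "Suck":
        state["clean"].add(location)
    return state

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    if state["clean"] >= {"A", "B"}:
        action = "NoOp"                       # the model says everything is clean
    elif percept[1] == "Dirty":
        action = "Suck"
    else:
        action = "Right" if state["location"] == "A" else "Left"
    last_action = action
    return action

print(model_based_reflex_agent(("A", "Clean")))  # -> "Right"
print(model_based_reflex_agent(("B", "Dirty")))  # -> "Suck"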

Page 22: Goal-based agents

• Agents which, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.
• The right decision depends on where the agent is trying to get to, so the agent needs goal information as well as the current state description.
  e.g. the decision to change lanes depends on a goal to go somewhere.
• It is flexible: the knowledge that supports its decisions is represented explicitly and can be modified without having to rewrite a large number of condition-action rules (as in a reflex agent).

Page 23: An agent with explicit goals

The agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will lead to the achievement of its goals.

[Diagram: sensors give "what the world is like now"; combined with "how the world evolves" and "what my actions do", the agent predicts "what it will be like if I do action A", compares this with its goals, and chooses "what action I should do now" for the actuators.]

Page 24: Goal-Based Agent

[Block diagram: the current state is maintained from previous perceptions, world changes, and the impact of actions; the agent predicts the state if it does action X, checks that state against its goal, and sends the selected action to the actuators.]

Page 25: Goal-based agents (cont.)

• Given knowledge of how the world evolves, and of how its own actions will affect its state, an agent can determine the consequences of all possible actions.
• It can then compare each of these against its goal to determine which action achieves the goal, and hence which action to take.
• If a long sequence of actions is required to reach the goal, then Search (Russell Chapters 3-5) and Planning (Chapters 11-13) are the sub-fields of AI that must be called into action.
• Are goals alone enough to generate high-quality behaviour?
  – Many action sequences can result in the same goal being achieved,
  – but some are quicker, safer, more reliable, or cheaper than others.
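A minimal sketch of a goal-based choice; the transition model (result), goal test, and lane-change scenario below are illustrative assumptions, not taken from the slides.

# Goal-based agent: predict the result of each possible action and pick one
# that achieves the goal.

def result(state, action):
    """Assumed transition model: how the world state changes under an action."""
    lane, position = state
    if action == "change-lane":
        return ("left" if lane == "right" else "right", position)
    if action == "drive-on":
        return (lane, position + 1)
    return state

def goal_test(state):
    """Goal: be in the left lane (say, to make an upcoming turn)."""
    return state[0] == "left"

def goal_based_agent(state, actions=("drive-on", "change-lane")):
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return "no-op"

print(goal_based_agent(("right", 0)))  # -> "change-lane"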

Page 26: Utility-based agents

• Any utility-based agent can be described as possessing a utility function.
• Utility is a function that maps a state (or sequence of states) onto a real number, which describes the associated degree of happiness.
• The agent uses the utility to choose between alternative sequences of actions/states that lead to a given goal being achieved.
• A utility function allows rational decisions in two kinds of cases:
  – When there are conflicting goals, only some of which can be achieved (e.g. speed and safety).
  – When there are several goals that the agent can aim for, none of which can be achieved with certainty.
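A minimal sketch of a utility-based choice, reusing the idea of a hand-written transition model; the states, actions, and utility weights below are illustrative assumptions. The agent scores each predicted outcome with the utility function and takes the best-scoring action, trading off the conflicting goals of speed and safety.

# Utility-based agent: map each predicted successor state to a real number
# and choose the action with the highest utility.

def result(state, action):
    """Assumed transition model over (speed, safety margin)."""
    speed, margin = state
    if action == "accelerate":
        return (speed + 10, margin - 1)
    if action == "brake":
        return (speed - 10, margin + 2)
    return state

def utility(state):
    """Prefer speed, but penalise small safety margins heavily."""
    speed, margin = state
    return speed + 20 * margin

def utility_based_agent(state, actions=("accelerate", "brake", "coast")):
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent((50, 3)))  # -> "brake" (safety outweighs speed here)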

Page 27: Utility-based agents

[Diagram: as for the goal-based agent, but the predicted state "what it will be like if I do action A" is evaluated by the utility measure "how happy I will be in such a state" before choosing "what action I should do now".]

Page 28: Utility-Based Agent

[Block diagram: the current state is maintained from previous perceptions, world changes, and the impact of actions; the agent predicts the state if it does action X, scores its happiness in that state with the utility function, and sends the selected action to the actuators.]

Page 29: Learning agents

• An agent whose behaviour improves over time based on its experience.

Page 30: Learning Agent

[Block diagram: the performance element selects actions; the critic compares the agent's behaviour with a performance standard and sends feedback to the learning element; the learning element makes changes to the performance element's knowledge and sets learning goals for the problem generator, which suggests exploratory actions.]
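A minimal sketch of how these components might fit together; all names, interfaces, and the toy vacuum-style task are assumptions made for illustration, not the slides' design.

# Skeleton of a learning agent: the performance element selects actions, the
# critic grades them against a performance standard, and the learning element
# revises the performance element's rules from that feedback.

ACTIONS = ["Suck", "Right", "Left"]
policy = {}                                      # learned mapping: percept -> action

def performance_element(percept):
    return policy.get(percept, "Right")          # default guess before learning

def critic(percept, action):
    """Performance standard (assumed): dirty squares must be sucked clean."""
    return 1 if (percept == "Dirty") == (action == "Suck") else -1

def learning_element(percept, action, feedback):
    """On negative feedback, switch to a different action for this percept."""
    if feedback < 0:
        policy[percept] = ACTIONS[(ACTIONS.index(action) + 1) % len(ACTIONS)]

for percept in ["Dirty", "Dirty", "Dirty", "Clean"]:
    action = performance_element(percept)
    learning_element(percept, action, critic(percept, action))

print(policy)   # after a few trials the agent has learned {"Dirty": "Suck"}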

Page 31: Machine Learning

• Percepts should not only be used for generating an agent's immediate actions, but also for improving its ability to act in the future, i.e. for learning.
• Learning can correspond to anything from trivial memorisation to the creation of complete scientific theories.

Learning can be classified into three increasingly difficult classes:
• Supervised learning - learning with a teacher (e.g. the system is told what outputs it should produce for each of a set of inputs).
• Reinforcement learning - learning with limited feedback (e.g. the system must produce outputs for a set of inputs and is only told whether they are good or bad).
• Unsupervised learning - learning with no help (e.g. the system is given a set of inputs and is expected to make some kind of sense of them on its own).

Machine learning systems can be set up to do all three types of learning.
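A purely illustrative sketch of what the feedback looks like in each of the three settings (the data and reward function are made up; no actual learning happens here):

# Supervised: (input, correct output) pairs provided by a teacher.
supervised_data = [({"size": 2.1}, "cat"), ({"size": 40.0}, "dog")]

# Reinforcement: only a scalar reward signal after acting.
def reward(state, action):
    return 1.0 if action == "forward" else -0.1

# Unsupervised: inputs only; the learner must find structure on its own.
unsupervised_data = [{"size": 2.1}, {"size": 2.3}, {"size": 40.0}]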

Page 32: Types of Environments

• We have talked about intelligent agents, but little about the environments with which they interact.
• How do we couple an agent to an environment?
• Actions are performed by the agent on the environment, which in turn provides percepts to the agent.

• Fully observable vs. partially observable: the agent's sensors give it access to the complete state of the environment at each point in time. Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.
• Deterministic vs. stochastic: the next state of the environment is completely determined by the current state and the action selected by the agent. E.g. taxi driving is stochastic - one can never predict the behaviour of traffic exactly.

Page 33: Types of Environments (cont.)

• Episodic vs. nonepisodic: in an episodic environment, subsequent episodes do not depend on what actions occurred in previous episodes. Such environments do not require the agent to plan ahead. E.g. an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions.
• Static vs. dynamic: the environment is unchanged while the agent is deliberating. Static environments are easy to deal with: the agent doesn't need to keep looking at the world while it is deciding on an action.
• Discrete vs. continuous: there are a limited number of distinct, clearly defined percepts and actions. E.g. chess is discrete - there is a fixed number of possible moves on each turn.
• Single-agent vs. multi-agent: whether the agent is operating by itself in the environment.
• Different environment types require somewhat different agent programs to deal with them effectively.
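One way to record these properties is as a small data structure (an assumed helper, not from the slides); a table like the one on the next page could then be generated from such records.

from dataclasses import dataclass

@dataclass
class EnvironmentType:
    """Properties that characterise a task environment."""
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

chess_with_clock = EnvironmentType(
    fully_observable=True, deterministic=True, episodic=False,
    static=False,            # semi-dynamic: the clock keeps running
    discrete=True, single_agent=False,
)
taxi_driving = EnvironmentType(False, False, False, False, False, False)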

Page 34: Examples of Environments

                Solitaire   Chess (clock)   Internet shopping       Taxi driver
Observable      Yes         Yes             No                      No
Deterministic   Yes         Yes             Partly                  No
Episodic        No          No              No                      No
Static          Yes         Semi            Semi                    No
Discrete        Yes         Yes             Yes                     No
Single-agent    Yes         No              Yes (except auctions)   No

The real world is partially observable, nondeterministic, nonepisodic (sequential), dynamic, continuous, and multi-agent.

Page 35: Components of an AI Agent

The components that need to be built into an AI agent:
1. A means to infer properties of the world from its percepts.
2. Information about the way the world evolves.
3. Information about what will happen as a result of its possible actions.
4. Utility information indicating the desirability of possible world states and the actions that lead to them.
5. Goals that describe the classes of states whose achievement maximises the agent's utility.
6. A mapping from the above forms of knowledge to its actions.
7. An active learning system that will improve the agent's ability to perform well.

Page 36: Conclusion

• An agent perceives and acts in an environment. It has an architecture and is implemented by a program.
• An ideal agent always chooses the action which maximizes its expected performance, given the percept sequence received so far.
• An autonomous agent uses its own experience rather than knowledge of the environment built in by the designer.
• An agent program maps from a percept to an action and updates its internal state.

Page 37: Conclusion (cont.)

• Reflex agents respond immediately to percepts.
• Goal-based agents act in order to achieve their goal(s).
• Utility-based agents maximize their own utility function.
• Representing knowledge is important for successful agent design.
• Some environments are more difficult for agents than others. The most challenging environments are partially observable, non-deterministic, non-episodic, dynamic, and continuous.