Intelligent Agents

1

Agents and Intelligent Agents

• An agent is anything that can be viewed as
  – perceiving its environment through sensors and
  – acting upon that environment through actuators.

• An intelligent agent, in addition, acts to further its own interests.

2

Agents and Intelligent Agents

• Human agent:
  – Sensors: eyes, ears, nose, …
  – Actuators: hands, legs, mouth, …

• Robotic agent:
  – Sensors: cameras and infrared range finders
  – Actuators: various motors

• Agents include humans, robots, etc.
• Perceptions:
  – vision, speech recognition, etc.

3

Perception

• The see function is the agent’s ability to observe its environment, whereas the action function represents the agent’s decision making process

• Output of the see function is a percept:

  see : E → Per

which maps environment states to percepts, and action is now a function

  action : Per* → A

which maps sequences of percepts to actions.
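As a minimal sketch of this decomposition in Python (only the see/action signatures come from the slide; the run_agent loop and all names below are illustrative assumptions):

from typing import Callable, List, TypeVar

E = TypeVar("E")      # environment states
Per = TypeVar("Per")  # percepts
A = TypeVar("A")      # actions

def run_agent(env_states: List[E],
              see: Callable[[E], Per],
              action: Callable[[List[Per]], A]) -> List[A]:
    """Feed environment states through see, accumulate the percept
    history, and let action map that history to an action."""
    percepts: List[Per] = []
    chosen: List[A] = []
    for state in env_states:
        percepts.append(see(state))             # see : E -> Per
        chosen.append(action(list(percepts)))   # action : Per* -> A
    return chosen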

4

Agents and environments

• The agent function maps from percept histories (sequences of percepts) to actions:

[f : P* → A]

5

Properties of agents:

• mobility: the ability of an agent to move around in an environment.

• veracity: an agent will not knowingly communicate false information.

• benevolence: agents do not have conflicting goals, and every agent will therefore always try to do what is asked of it.

• rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved.

• learning/adaptation: agents improve performance over time.

6

Example: A Vacuum-cleaner agent

[Figure: two-square vacuum world with locations A and B]

• Percepts: location and contents, e.g., (A, dirty)
  – (Idealization: locations are discrete)

• Actions: move, clean, do nothing: LEFT, RIGHT, SUCK, NOP

7

Vacuum-cleaner world: agent function
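As a partial illustration of this agent function (a sketch assuming the standard two-location A/B vacuum world; the entries below are not taken from the original slide's table):

# Partial agent-function table: percept sequence -> action (illustrative).
AGENT_FUNCTION = {
    (("A", "clean"),): "RIGHT",
    (("A", "dirty"),): "SUCK",
    (("B", "clean"),): "LEFT",
    (("B", "dirty"),): "SUCK",
    # entries for longer percept sequences continue in the same pattern ...
}

def table_driven_agent(percept_history):
    """Look up the action for the percept sequence seen so far."""
    return AGENT_FUNCTION.get(tuple(percept_history), "NOP")

# Example: table_driven_agent([("A", "dirty")]) returns "SUCK".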

8

Agents and Objects

• Main differences:
  – agents are autonomous: agents embody a stronger notion of autonomy than objects, and in particular, they decide for themselves whether or not to perform an action on request from another agent
  – agents are smart: capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior
  – agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

9

Rationality

• What is rational at any given time depends on four things:

– The performance measure that defines the criterion of success.
– The agent's prior knowledge of the environment.
– The actions the agent can perform.
– The agent's percept sequence to date.

10

Task environment

• To design a rational agent we need to specify a task environment
  – a problem specification for which the agent is a solution

• PEAS: to specify a task environment
  – Performance measure
  – Environment
  – Actuators
  – Sensors

11

PEAS: Specifying an automated taxi driver

• Performance measure:
  – safe, fast, legal, comfortable, maximize profits
• Environment:
  – roads, other traffic, pedestrians, customers
• Actuators:
  – steering, accelerator, brake, signal, horn
• Sensors:
  – cameras, sonar, speedometer, GPS
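A PEAS description is just structured data; as a sketch (the class and field names are an assumption), the taxi example could be recorded like this:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)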

12

PEAS: Another example

• Agent: Medical diagnosis system
• Performance measure: healthy patient, minimize costs
• Environment: patient, hospital, staff
• Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors: keyboard (entry of symptoms, findings, patient's answers)

13

Environment types:

• Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

• Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent.
  – If the environment is deterministic except for the actions of other agents, then the environment is strategic.

14

Environment types Cont…

• Single agent (vs. multi-agent): An agent operating by itself in an environment.

– An agent solving a crossword puzzle by itself is in a single-agent environment, whereas an agent playing chess is in a two-agent environment.

– A multi-agent environment can either be competitive or co-operative.

15

Environment types: Static vs. Dynamic

• A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent

• A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control.
  – The physical world is a highly dynamic environment.

16

Environment types – Discrete vs. continuous

• An environment is discrete if there are a finite number of distinct states in the environment and a discrete set of percepts and actions.
  – The game of chess is an example of a discrete environment, whereas automated taxi driving is a continuous-state and continuous-time problem.

• Discrete environments could in principle be handled by a kind of “lookup table”

17

Agent types

• Six basic agent types in order of increasing generality:
  – Simple reflex
  – Model-based reflex
  – Goal-based
  – Utility-based
  – Learning agents
  – Knowledge based

• Choosing appropriate agents:
  – possible percepts and actions
  – what goals or performance measure the agent is supposed to achieve
  – what sort of environment it will operate in

18

Simple reflex agent

• Only uses the current percept to select an action
• Works only in fully observable environments

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

19

Simple reflex agent Example

function REFLEX_VACUUM_AGENT(percept) returns an action
  (location, status) = UPDATE_STATE(percept)

  if status = DIRTY then return SUCK;
  else if location = A then return RIGHT;
  else if location = B then return LEFT;
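A runnable Python sketch of the same reflex vacuum agent (a direct translation of the pseudocode above; the string constants are an illustrative choice):

# Runnable sketch of the reflex vacuum agent from the pseudocode above.
SUCK, RIGHT, LEFT = "SUCK", "RIGHT", "LEFT"

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "DIRTY")."""
    location, status = percept
    if status == "DIRTY":
        return SUCK
    elif location == "A":
        return RIGHT
    else:                  # location == "B"
        return LEFT

# Example: reflex_vacuum_agent(("A", "DIRTY")) returns "SUCK".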

20

Simple reflex agent

• Disadvantages
  – Applicable only where limited intelligence is required
  – A little bit of unobservability can cause serious trouble

• Example: a vacuum cleaner, if there is no location sensor

21

Model-based reflex agents

• Deal with partially observable environments

• An internal state maintains important information from previous percepts

• Sensors only provide a partial picture of the environment

• The internal state reflects the agent's knowledge about the world; this knowledge is called a model.

22

Model-based reflex agents

function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules

  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action

To update the internal state information, the agent must know:
• How the world evolves independently of the agent
• How the agent's own actions affect the world
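A small Python sketch of a model-based reflex vacuum agent (the dictionary world model and the method names are illustrative assumptions, not from the slides):

# Sketch: the agent keeps an internal model of which squares it
# believes are dirty, and updates it from each percept.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.believed_dirty = {"A": True, "B": True}   # internal state (model)

    def update_state(self, percept):
        location, status = percept
        self.believed_dirty[location] = (status == "DIRTY")
        return location

    def act(self, percept):
        location = self.update_state(percept)          # state <- UPDATE-STATE
        if self.believed_dirty[location]:
            return "SUCK"
        if not any(self.believed_dirty.values()):
            return "NOP"                               # model says everything is clean
        return "RIGHT" if location == "A" else "LEFT"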

23

Model-based reflex agents

24

What is the difference? Example: a taxi driver changing position

• Model-based
  – Percept: no car
  – Internal state: keeps track of where the other cars are
  – Update state: the overtaking car will be closer behind; decide whether to turn the steering wheel clockwise or anticlockwise

• Simple reflex
  – Percept: no car
  – Action: just change position

25

Model-based reflex agents

• Advantages
  – Work even when sensors do not provide access to the complete state of the world
  – Internal state helps the agent distinguish between world states

• Disadvantages
  – More complex than a simple reflex agent
  – Computation time increases

26

Goal-based agents

• Use the current state and the goal state to decide on correct actions.

• The agent considers future consequences when making the current decision.

• More flexible; supports searching and planning.

27

Goal-based agents


28

Example: Tracking a Target

[Figure: a robot tracking a moving target]

• The robot must keep the target in view
• The target's trajectory is not known in advance
• The robot may not know all the obstacles in advance
• Fast decisions are required

29

Goal-based agents

Disadvantage:
The goal-based agent appears less efficient because it has to consider long sequences of possible actions before deciding whether the goal is achieved. It requires searching and planning because it involves consideration of the future: “what will happen if I do ...?”

Advantage:
It is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.

30

Utility-based agents:

Goals alone are not really enough to generate high-quality behavior in most environments. Goals can be achieved in multiple ways. A goal specifies only a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes the “degree of happiness”.

Utility-based agents specify how well the goal can be achieved (the degree of happiness). A utility function

  U : State → Real

indicates a measure of success or happiness when at a given state.
• Which goal should be selected if several can be achieved?
• What to do if there are conflicting goals?
  – Speed and safety

31

Utility-based agents:


32

Utility-based agents:

A complete specification of the utility function allows rational decisions in two kinds of cases:

First: when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff.

Second: when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
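As a toy illustration of the first case (the weights and state fields below are assumptions, not from the slides), a utility function might trade speed off against safety:

# Toy utility function trading off speed against safety (weights are assumptions).
def utility(state, speed_weight=0.4, safety_weight=0.6):
    """Map a state to a real number; higher is better."""
    return speed_weight * state["speed_score"] + safety_weight * state["safety_score"]

# A rational utility-based agent would pick the action whose expected
# resulting state has the highest utility, e.g. (hypothetical helpers):
#   best_action = max(actions, key=lambda a: utility(predict(state, a)))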

33

Utility-based agents:

Advantage: utility-based agents can handle the uncertainty inherent in partially observable environments.

Consider the taxi driver example: there are many action sequences that will get the taxi to its destination, but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a distinction between whether the passenger is ‘happy’ or ‘unhappy’; the utility function defines the degree of happiness.

34

Learning agents:

The idea behind learning is that percepts should be used not only for acting, but also for improving the agent's ability to act in the future.

Learning takes place as a result of the interaction between the agent and the world, and from observation by the agent of its own decision-making processes.

35

Learning agents:

A learning agent can be divided into four conceptual components: the Learning Element, the Performance Element, the Critic, and the Problem Generator.
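One possible way to wire the four components together (the method names and interfaces below are illustrative assumptions; the slides only name the components):

# Skeleton showing how the four components of a learning agent interact.
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # judges behaviour against a performance standard
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                             # how well are we doing?
        self.learning_element(feedback, self.performance_element)   # improve future behaviour
        exploratory = self.problem_generator(percept)               # maybe try something new
        return exploratory or self.performance_element(percept)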

36

Knowledge Based Agents

The knowledge-based approach is a particularly powerful way of constructing an agent program. It aims to implement a view of agents in which they can be seen as knowing about their world and reasoning about their possible courses of action. Knowledge-based agents are able to accept new tasks in the form of explicitly described goals; they can achieve competence quickly by being told or by learning new knowledge about the environment; and they can adapt to changes in the environment by updating the relevant knowledge.

37

Knowledge Based Agents

A knowledge-based agent needs to know many things:
• The current state of the world
• How to infer unseen properties of the world from percepts
• How the world evolves over time
• What it wants to achieve
• What its own actions do in various circumstances

The central component of a knowledge-based agent is its Knowledge Base, or KB.

38

Knowledge Based Agents

A knowledge-based agent can be described at three levels:

• Knowledge Level: the most abstract level; we can describe the agent by saying what it knows. Example: a taxi agent might know that, to reach Malibag from Mogbazar, it needs to go through Mouchak.

• Logical Level: the level at which the knowledge is encoded into sentences. Example: Links(Mouchak, Malibag, Mogbazar)

• Implementation Level: the level that runs on the agent architecture; the physical representation of the sentences at the logical level is contained in this level. Example: “Links(Mouchak, Malibag, Mogbazar)” at the logical level can be represented as the string “Links(Mouchak, Malibag, Mogbazar)” in the KB.
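At the implementation level, a knowledge base can be sketched as a simple store of sentence strings with TELL and ASK operations (a minimal illustration; a real KB would perform inference rather than a membership test):

# Minimal sketch of a knowledge base storing sentences as strings,
# following the implementation-level example above.
class KnowledgeBase:
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence: str) -> None:
        """Add a sentence to the KB."""
        self.sentences.add(sentence)

    def ask(self, sentence: str) -> bool:
        """Trivial ASK: check membership (a real agent would infer)."""
        return sentence in self.sentences

kb = KnowledgeBase()
kb.tell("Links(Mouchak, Malibag, Mogbazar)")
assert kb.ask("Links(Mouchak, Malibag, Mogbazar)")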

39

How is an Agent different from other software?

Agents are autonomous, that is, they act on behalf of the user

Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment

Agents don't only act reactively, but sometimes also proactively

40

How is an Agent different from other software?

Agents have social ability, that is, they communicate with the user, the system, and other agents as required

Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle

Agents may migrate from one system to another to access remote resources or even to meet other agents