
Intelligent Agents

Chapter 2

Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types

Agents
• An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• A common definition of an intelligent agent from the perspective of
artificial intelligence is an autonomous entity that exists in an environment
and acts in a rational way.
• The control system provides a mapping from sensors to effectors, and provides
intelligent (or rational) behavior.
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth,
and other body parts for actuators.
• A robot can also be considered an agent.
• Robotic agent: cameras, infrared range finders, and microphones for sensors;
various motors and a vacuum pump for actuators to manipulate the
environment.
• An agent’s choice of action at any given instant can depend on the entire percept
sequence observed to date, but not on anything it hasn’t perceived.

An agent is an intermediary between two or more parties.
Franklin and Graesser first identified viruses as agents in their 1996 taxonomy.

AGENT PROPERTIES AND AI
We can think of agents as a superset of artificial intelligence.

Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
An agent's behaviour is described by the agent function, which maps any given
percept sequence to an action.

External and internal characterization of the agent function:

Internally, the agent function is implemented by an agent program.

The agent function is an abstract mathematical description, whereas the
agent program is a concrete implementation running within some physical
system.

Agent program

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
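The pseudocode above translates directly into Python; this is a minimal sketch of the same two-square world:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world (A and B)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# The agent reacts only to the current percept:
print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```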
Rationality:
The property of rationality simply means that the agent does the right thing at the
right time, given a known outcome.
This depends on the actions that are available to the agent (can it achieve the best
outcome), and also on how the agent's performance is measured.

What is rational at any given time depends on four things:


• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.

Autonomous:
Autonomy simply means that the agent is able to navigate its environment without
guidance from an external entity (such as a human operator). The autonomous
agent can therefore seek out goals in its environment, whether to sustain itself or
solve problems.

Persistent:
Persistence implies that the agent exists over time and continuously exists in its
environment. This property can also imply that the agent is stateful in conditions
where the agent must be serialized and moved to a new location.

Rational agents
An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful

• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up,
amount of time taken, amount of electricity
consumed, amount of noise generated, etc.

Rational agents
Rational Agent: For each possible percept
sequence, a rational agent should select an
action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and
whatever built-in knowledge the agent has.

Rational agents
Rationality is distinct from omniscience (all-
knowing with infinite knowledge)

• Agents can perform actions in order to
modify future percepts so as to obtain
useful information (information gathering,
exploration)

• An agent is autonomous if its behavior is
determined by its own experience (with
ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design

• Consider, e.g., the task of designing an
automated taxi driver:

– Performance measure
– Environment
– Actuators
– Sensors
PEAS
Consider, e.g., the task of designing an automated
taxi driver:

• Performance measure: Safe, fast, legal, comfortable trip,
maximize profits

• Environment: Roads, other traffic, pedestrians,
customers

• Actuators: Steering wheel, accelerator, brake, signal,
horn

• Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard
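As an illustration only (not a standard API), the PEAS description for the taxi can be recorded as plain data; the dictionary layout below is an assumption chosen for readability:

```python
# Hypothetical PEAS specification for the automated taxi driver.
taxi_peas = {
    "performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators":   ["steering wheel", "accelerator", "brake", "signal", "horn"],
    "sensors":     ["cameras", "sonar", "speedometer", "GPS",
                    "odometer", "engine sensors", "keyboard"],
}

# Print the specification one PEAS component per line.
for key, values in taxi_peas.items():
    print(f"{key}: {', '.join(values)}")
```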
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)

PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

Environment properties:

• Observability: Are all elements visible to the agent?

• Change: Does the environment change (dynamic), or does it stay the same
(static) and change only when the agent performs an action to initiate
change?

• Deterministic: Does the state of the environment change as the agent expects
after an action (deterministic), or is there some randomness to the
environment change from agent actions (stochastic)?

• Episodic: Does the agent need prior understanding of the environment
(prior experience) to take an action (sequential), or can the agent simply
perceive the environment and take an action (episodic)?

• Continuous: Does the environment consist of a finite or infinite number of
potential states (during action selection by the agent)? If the number of
possible states is large, then the task environment is continuous; otherwise
it is discrete.

• Multi-Agent: Does the environment contain a single agent, or possibly
multiple agents acting in a cooperative or competitive fashion?
Environment types
Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.

• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)

• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode
depends only on the episode itself.

Environment types
Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time but
the agent's performance score does)

• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.

• Single agent (vs. multiagent): An agent operating
by itself in an environment.

Environment types
                   Chess with    Chess without    Taxi driving
                   a clock       a clock
Fully observable   Yes           Yes              No
Deterministic      Strategic     Strategic        No
Episodic           No            No               No
Static             Semi          Yes              No
Discrete           Yes           Yes              No
Single agent       No            No               No

The environment type largely determines the agent design



• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent

Agent functions and programs
• An agent is completely specified by the
agent function mapping percept sequences
to actions
• One agent function (or a small equivalence
class) is rational
• Aim: find a way to implement the rational
agent function concisely
• The job of AI is to design an agent
program that implements the agent
function.
Table-lookup agent

• Drawbacks:
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, it would take a long time to learn
the table entries
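A table-driven agent can be sketched as below, with a toy two-entry table (the entries are illustrative only). The table is keyed by the entire percept sequence, which is exactly why it grows so quickly:

```python
def table_driven_agent_factory(table):
    """Return an agent that looks up the full percept sequence in a table."""
    percepts = []  # the percept sequence observed so far

    def agent(percept):
        percepts.append(percept)
        # The key is the whole history, not just the latest percept,
        # so the table size is exponential in the sequence length.
        return table.get(tuple(percepts), "NoOp")

    return agent

# Toy table for the vacuum world, keyed by percept sequences.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = table_driven_agent_factory(table)
print(agent(("A", "Dirty")))   # Suck
print(agent(("A", "Clean")))   # Right
```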
Agent types
Four basic types in order of increasing
generality:

• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

Simple reflex agents

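In code, a simple reflex agent matches the current percept against condition-action rules; the rule representation below (predicate, action pairs) is an assumption for illustration:

```python
def simple_reflex_agent(rules):
    """Build an agent that acts on the current percept only (no state)."""
    def agent(percept):
        for condition, action in rules:
            if condition(percept):  # first matching rule fires
                return action
        return "NoOp"
    return agent

# Rules for the vacuum world: percepts are (location, status) pairs.
rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A", "Right"),
    (lambda p: p[0] == "B", "Left"),
]

agent = simple_reflex_agent(rules)
print(agent(("B", "Dirty")))  # Suck
```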
Model-based reflex agents

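A minimal sketch of a model-based reflex agent for the vacuum world; the internal state (a set of squares believed clean) is an assumed model, chosen only to show how state lets the agent act on what it cannot currently see:

```python
def model_based_reflex_agent():
    """Reflex agent that keeps internal state about unseen parts of the world."""
    believed_clean = set()  # the agent's model: squares it believes are clean

    def agent(percept):
        location, status = percept
        # Update the model from the current percept.
        if status == "Dirty":
            believed_clean.discard(location)
            return "Suck"
        believed_clean.add(location)
        if believed_clean >= {"A", "B"}:
            return "NoOp"  # the model says everything is clean
        return "Right" if location == "A" else "Left"

    return agent

agent = model_based_reflex_agent()
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Clean")))  # NoOp: the model remembers A was clean
```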
Goal-based agents

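A goal-based agent chooses actions by predicting their results and testing them against a goal; the one-step world model below (integer states, goal state 3) is purely illustrative:

```python
def goal_based_agent(goal_test, actions, result):
    """Pick any action whose predicted result satisfies the goal.

    `result(state, action)` is an assumed one-step world model.
    """
    def agent(state):
        for action in actions:
            if goal_test(result(state, action)):
                return action
        return "NoOp"  # no single action reaches the goal
    return agent

# Toy world: states are integers, the goal is to reach state 3.
agent = goal_based_agent(
    goal_test=lambda s: s == 3,
    actions=["Left", "Right"],
    result=lambda s, a: s - 1 if a == "Left" else s + 1,
)
print(agent(2))  # Right (2 + 1 == 3)
print(agent(4))  # Left  (4 - 1 == 3)
```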
Utility-based agents

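A utility-based agent ranks predicted outcomes with a utility function rather than a binary goal test; the toy world below (utility highest near state 0) is again an assumption for illustration:

```python
def utility_based_agent(utility, actions, result):
    """Choose the action whose predicted outcome has the highest utility."""
    def agent(state):
        return max(actions, key=lambda a: utility(result(state, a)))
    return agent

# Toy world: integer states, utility is highest near state 0.
agent = utility_based_agent(
    utility=lambda s: -abs(s),
    actions=["Left", "Right"],
    result=lambda s, a: s - 1 if a == "Left" else s + 1,
)
print(agent(5))   # Left: moving toward 0 has higher predicted utility
print(agent(-5))  # Right
```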
Learning agents

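A learning agent couples a learning element (which improves from feedback) with a performance element (which selects actions). The bandit-style averaging below is a minimal illustrative sketch, not the standard architecture diagram:

```python
def learning_agent(actions):
    """Agent that learns average rewards per action and prefers the best one."""
    totals = {a: 0.0 for a in actions}  # summed reward per action
    counts = {a: 0 for a in actions}    # times each action was credited
    last = {"action": None}

    def agent(reward=None):
        # Learning element: credit the previous action with the reward.
        if last["action"] is not None and reward is not None:
            a = last["action"]
            counts[a] += 1
            totals[a] += reward
        # Performance element: pick the best average so far;
        # untried actions score infinity, so each gets tried first.
        def avg(a):
            return totals[a] / counts[a] if counts[a] else float("inf")
        choice = max(actions, key=avg)
        last["action"] = choice
        return choice

    return agent
```

Each call returns the next action and accepts the reward earned by the previous one, so the agent's behavior improves as experience accumulates.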
