
Lecture - 3 - 6


Introduction to Agent

Ms. Anita Shrotriya


Assistant Professor-CSE
Manipal University Jaipur
Agents
• An agent is anything that perceives its environment through
sensors and acts upon that environment through
actuators/effectors.
• The agent analyses the complete history of its percepts using
an agent function (or an agent program), that maps the
sequence of percepts to an action.
• Agents interact with the environment in two main ways:
perception and action.
• Perception in Artificial Intelligence is the process of
interpreting vision, sounds, smell, and touch through sensors.
• Action is when the agent changes the environment through
active interaction, guided by past and current percepts. In this
sense, the agent transforms information from one format to
another: perceptual data is converted into actions.
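The percept-to-action mapping above can be sketched in Python. This is a hypothetical illustration; the percept names ("dirty", "clean") and the rule are invented for the example, not part of any standard library:

```python
# Hypothetical sketch of an agent function: it maps the full percept
# history to an action. The percept/action names are illustrative only.
def agent_function(percept_history):
    """Map a sequence of percepts to an action (a trivial demo rule)."""
    latest = percept_history[-1]      # act mainly on the newest percept
    if latest == "dirty":
        return "clean"
    return "move"

percepts = []
percepts.append("dirty")
print(agent_function(percepts))   # -> clean
percepts.append("clean")
print(agent_function(percepts))   # -> move
```

A real agent program would implement this function incrementally, since storing the entire percept history is usually impractical.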
• An AI system is composed of an agent and its environment.
• The agents act in their environment. The environment may
contain other agents.
• An agent is anything that can be viewed as:
• Perceiving its environment through sensors and
• Acting upon that environment through actuators.
• We need to build intelligent agents that work in an
environment.
• In simple terms, we can think of the agent as the player and
the environment as the ground it plays on.

Are humans agents?
Intelligent Agents
• An Intelligent Agent is a computational entity or software
program that possesses the ability to perceive its
environment, make decisions, and take actions in pursuit of
specific goals or objectives.
• These agents operate autonomously, meaning they can act
independently and make decisions without direct human
intervention.
• Intelligent agents are characterized by their capacity to
exhibit intelligent behavior, which includes reasoning,
learning, problem-solving, and adapting to changing
circumstances.
• AI researchers and developers aim to create systems that can
perform complex tasks, make decisions in real-time, and
effectively interact with and adapt to dynamic environments.
Key Characteristics of Intelligent Agents
• Autonomy: Intelligent agents are autonomous entities
capable of acting independently without direct human control
or intervention. They have the ability to make decisions and
take actions based on their internal mechanisms and perceived
information from the environment.
• Perception: Intelligent agents are equipped with sensors or
other means of perception that allow them to gather
information from their environment. They can perceive and
interpret data from various sources, such as cameras,
microphones, or other sensors, to understand the state of the
world around them.
• Decision-Making: Intelligent agents possess decision-making
capabilities, which involve processing the information gathered
from the environment and selecting appropriate actions to
achieve their goals.
• Adaptability: Intelligent agents are adaptable and can adjust
their behavior in response to changes in the environment or
unexpected situations. They can learn from past experiences
and modify their decision-making process to improve
performance.
• Proactiveness: Intelligent agents are proactive in their
actions. They can anticipate future events and take
preemptive actions to achieve their goals more effectively.
• Reactive Capability: Reactive agents are a type of
intelligent agent that responds to immediate stimuli from the
environment without maintaining internal models or planning.
They react based on predefined rules or behaviors.
• Communication: Intelligent agents can often communicate
with other agents or humans to exchange information,
coordinate actions, and negotiate.
Types of Intelligent Agents
• Agents can be classified into different types based on their
characteristics, such as whether they are reactive or
proactive, whether they have a fixed or dynamic environment,
and whether they are single or multi-agent systems.

An agent can be:


• Human Agent: A human agent has eyes, ears, and other
organs as sensors, and hands, legs, and the vocal tract as
actuators.
• Robotic Agent: A robotic agent may have cameras and
infrared range finders as sensors, and various motors as
actuators.
• Software Agent: A software agent receives keystrokes, file
contents, and network packets as sensory input, and acts by
displaying output on the screen, writing files, and sending
network packets.
• Reactive agents are those that respond to immediate stimuli
from their environment and take actions based on those
stimuli. Proactive agents, on the other hand, take the initiative
and plan ahead to achieve their goals; they are also known as
deliberative agents.
• Hybrid agents combine the characteristics of reactive and
deliberative agents. They can respond to immediate stimuli
from their environment, but they also have a model of the
world and a plan for achieving their goals.
• The environment in which an agent operates can also be fixed
or dynamic. Fixed environments have a static set of rules that
do not change, while dynamic environments are constantly
changing and require agents to adapt to new situations.
• Multi-agent systems involve multiple agents working together
to achieve a common goal. These agents may have to
coordinate their actions and communicate with each other.
We should first know about sensors, effectors, and actuators
• Sensor: Sensor is a device which detects the change in the
environment and sends the information to other electronic
devices. An agent observes its environment through sensors.
• Actuators: Actuators are the component of machines that
converts energy into motion. The actuators are only
responsible for moving and controlling a system. An actuator
can be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices which affect the
environment. Effectors can be legs, wheels, arms, fingers,
wings, fins, and display screen.
• Rules for an AI agent:
Rule 1: An AI agent must have the ability
to perceive the environment.
Rule 2: The observation must be used to
make decisions.
Rule 3: Decision should result in an
action.
Rule 4: The action taken by an AI agent
must be a rational action.
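The four rules can be sketched as a perceive-decide-act loop. This is an illustrative vacuum-world example; the world representation and action names are assumptions for the sketch, not a standard API:

```python
# Illustrative sketch of the four rules: perceive -> decide -> act.
# The vacuum-world names below are invented for this example.
def perceive(world, location):
    return world[location]                    # Rule 1: sense the environment

def decide(percept):
    # Rule 2: the observation is used to make the decision
    return "suck" if percept == "dirty" else "right"

def act(world, location, action):
    if action == "suck":                      # Rule 3: decision leads to action
        world[location] = "clean"
        return location
    # Rule 4: a rational move onward (stay in bounds)
    return min(location + 1, len(world) - 1)

world, loc = ["dirty", "dirty"], 0
for _ in range(4):
    loc = act(world, loc, decide(perceive(world, loc)))
print(world)  # -> ['clean', 'clean']
```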
Rational Agents
• Artificial intelligence is defined as the study of rational agents.
• Agents have inherent goals that they want to achieve (e.g.
survive, reproduce).
• A rational agent acts in a way to maximize the achievement of
its goals.
• A rational agent could be anything that makes decisions, such
as a person, firm, machine, or software.
• It carries out an action with the best outcome after
considering past and current percepts (agent’s perceptual
inputs at a given instance).
• True maximization of goals requires omniscience and
unlimited computational abilities.
• Limited rationality involves maximizing goals within the
computational and other resources available.
Example of Rational Agents

Are humans rational agents?
Types of Environments
1. Fully Observable vs Partially Observable: A fully
observable environment is one in which an agent’s sensors can
perceive or access the complete state of the environment at any
given time; otherwise, it is a partially observable environment.
• An environment is said to be unobservable when the agent
has no sensors at all.
• It’s simple to maintain a completely observable environment
because there’s no need to keep track of the environment’s
past history.
• Example:
• Chess — the board is fully observable, so are opponent moves.
• Driving — the environment is partially observable because you
never know what’s around the corner.
2. Deterministic vs Stochastic: A deterministic environment
is one in which an agent’s current state and chosen action
totally determine the next state of the environment.
• Unlike deterministic environments, stochastic environments
are random in nature and cannot be totally predicted by an
agent.
• Example:
• Chess has only a few possible movements for pieces in their current
state, and these moves can be predicted.
• Self-Driving Cars – The activities of a self-driving car are not
consistent; they change over time.
3. Competitive vs Collaborative: When an agent competes
with another agent to optimize its output, it is said to be in a
competitive environment; chess is competitive, since each agent
tries to win. When multiple agents cooperate to produce the
desired output, the environment is collaborative; self-driving
cars sharing a road cooperate to avoid collisions.
4. Static vs Dynamic: A static environment is one in which
there is no change in its state.
• When an agent enters a vacant house, there is no change in
the surroundings.
• A dynamic environment is one that is always changing when
the agent is performing some action.
• A roller coaster ride is dynamic since it is in motion and the
surroundings change all the time.

5. Discrete vs Continuous: A discrete environment is one
that has a finite number of actions that can be deliberated in
the environment to produce the output.
• Chess is a discrete game since it has a finite number of
moves; the number of moves varies from game to game, but it
is always finite. Self-driving cars operate in a continuous
environment, since actions such as steering and acceleration
take values over a continuous range.
6. Single-agent vs Multi-agent: A single-agent environment
is one in which only one agent participates.
• A person left alone in a maze or forest or playing a crossword
puzzle is an example of the single-agent system.
• A multi-agent environment is one in which more than one
agent exists.
• Football is a multi-agent game since each team has 11
players.

7. Episodic vs Sequential: The episodic environment is also
called a non-sequential environment. In an episodic
environment, an agent’s current state or action will not affect a
future state or action, whereas in a sequential (non-episodic)
environment, an agent’s current state or action will affect
future actions and states.
8. Known vs Unknown: These two refer to the environment
itself, not to the agent. In a known environment, the outcomes
of all actions are given. If the environment is unknown, the
agent has to learn how it works in order to make good
decisions. These correspond to exploitation (known
environment) and exploration (unknown environment), which
come up in reinforcement learning.

9. Accessible vs Inaccessible: If an agent can acquire
complete and accurate knowledge about the state of the
environment, that environment is said to be accessible;
otherwise, it is referred to as inaccessible.
• An empty room whose state can be defined by its
temperature is an accessible environment; information about an
event happening elsewhere on Earth is inaccessible.
Examples of Task Environments
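As a hypothetical sketch, the environment properties above can be recorded per task; the classifications for chess and taxi driving below follow the discussion in the previous slides:

```python
from dataclasses import dataclass

# Illustrative record of the environment properties discussed above.
# The two example classifications (chess, taxi driving) are the
# standard ones from the preceding slides.
@dataclass
class TaskEnvironment:
    name: str
    fully_observable: bool
    deterministic: bool
    static: bool
    discrete: bool
    single_agent: bool

chess = TaskEnvironment("chess", True, True, True, True, False)
taxi = TaskEnvironment("taxi driving", False, False, False, False, False)
print(chess)
print(taxi)
```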
Types of Agents -1
• Simple Reflex Agents ignore the rest of the percept history
and act only on the basis of the current percept. Percept
history is the history of all that an agent has perceived to
date. The agent function is based on the condition-action rule.
A condition-action rule is a rule that maps a state i.e., a
condition to an action. If the condition is true, then the action
is taken, else not. This agent function only succeeds when the
environment is fully observable. For simple reflex agents
operating in partially observable environments, infinite loops
are often unavoidable. It may be possible to escape from
infinite loops if the agent can randomize its actions.
Problems with simple reflex agents:
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
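A simple reflex agent can be sketched as a lookup of condition-action rules applied to the current percept only. The traffic-light percepts and actions below are made-up examples:

```python
# Minimal sketch of a simple reflex agent: condition-action rules
# applied to the CURRENT percept; the percept history is ignored.
RULES = {                      # condition -> action (illustrative rules)
    "light_red": "stop",
    "light_green": "go",
}

def simple_reflex_agent(percept):
    # If the condition is true, the action is taken; else a default applies.
    return RULES.get(percept, "wait")

print(simple_reflex_agent("light_red"))    # -> stop
print(simple_reflex_agent("light_blue"))   # -> wait
```

Because the table consults only the current percept, the agent works well only when the environment is fully observable, as noted above.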
Types of Agents -2
• Model-Based Reflex Agents work by finding a rule whose
condition matches the current situation. A model-based agent
can handle partially observable environments by the use of a
model about the world. The agent has to keep track of the
internal state which is adjusted by each percept and that
depends on the percept history. The current state is stored
inside the agent which maintains some kind of structure
describing the part of the world which cannot be seen.
Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent’s actions affect the world.
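A minimal sketch of a model-based reflex agent, assuming a toy vacuum-style world: the agent keeps an internal state with the last known status of each location, so it can act on parts of the world it cannot currently see:

```python
# Sketch of a model-based reflex agent: an internal state ("model")
# is updated from each percept, allowing action under partial
# observability. Locations and statuses are invented for the example.
class ModelBasedAgent:
    def __init__(self):
        self.state = {}                    # internal model of the world

    def update_state(self, percept):
        # In a full agent this would also encode how the world evolves
        # and how our actions change it; here we just remember the
        # last observed status per location.
        location, status = percept
        self.state[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "dirty":
            return "clean"
        # Use the model: head for a location remembered as dirty.
        for loc, st in self.state.items():
            if st == "dirty":
                return f"go_to_{loc}"
        return "idle"

agent = ModelBasedAgent()
agent.update_state(("B", "dirty"))          # earlier percept, now unseen
print(agent.choose_action(("A", "clean")))  # -> go_to_B
```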
Types of Agents -3
• Goal-Based Agents take decisions based on how far they
currently are from their goal (a description of desirable
situations). Every action they take is intended to reduce the
distance to the goal. This gives the agent a way to choose
among multiple possibilities, selecting the one that reaches a
goal state. The knowledge that supports its decisions is
represented explicitly and can be modified, which makes these
agents more flexible.
• They usually require search and planning.
• The goal-based agent’s behavior can easily be changed.
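A goal-based agent can be sketched as a one-step lookahead that picks the action minimizing the distance to the goal. The number-line state and action names are assumptions for illustration:

```python
# Sketch of a goal-based agent: choose the action whose successor
# state is closest to the goal (one-step lookahead on a number line).
GOAL = 10  # illustrative goal state

def distance_to_goal(state):
    return abs(GOAL - state)

def goal_based_agent(state):
    actions = {"inc": 1, "dec": -1, "stay": 0}   # invented action set
    # Select the action minimizing distance from the resulting state.
    return min(actions, key=lambda a: distance_to_goal(state + actions[a]))

print(goal_based_agent(7))    # -> inc
print(goal_based_agent(10))   # -> stay
```

Real goal-based agents extend this idea with multi-step search and planning, as noted above.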
Types of Agents -4
• Utility-Based Agents choose actions based on a preference
(utility) for each state. They are used when there are multiple
possible alternatives and the agent must decide which one is
best. Sometimes achieving the desired goal is not enough: we
may want a quicker, safer, or cheaper trip to the destination.
The agent’s “happiness” should be taken into consideration,
and utility describes how happy the agent is.
• Because of the uncertainty in the world, a utility agent
chooses the action that maximizes the expected utility.
• A utility function maps a state onto a real number
which describes the associated degree of happiness.
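A utility-based agent can be sketched as maximizing expected utility over uncertain outcomes. The states, probabilities, and utility values below are made up for illustration:

```python
# Sketch of a utility-based agent: each action leads to outcome
# states with given probabilities; the agent maximizes EXPECTED
# utility. All numbers and names here are invented for the example.
def utility(state):
    # Utility function: maps a state onto a real number ("happiness").
    return {"fast_arrival": 10, "safe_arrival": 8, "crash": -100}[state]

OUTCOMES = {
    "speed":  [("fast_arrival", 0.9), ("crash", 0.1)],
    "cruise": [("safe_arrival", 1.0)],
}

def expected_utility(action):
    return sum(p * utility(s) for s, p in OUTCOMES[action])

best = max(OUTCOMES, key=expected_utility)
print(best, expected_utility(best))   # -> cruise 8.0
```

Even though speeding reaches the goal faster, its expected utility is lower because of the small chance of a catastrophic outcome, which is exactly why "achieving the goal" alone is not enough.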
Types of Agents -5
• Learning Agent in AI is the type of agent that can learn from
its past experiences, or it has learning capabilities. It starts to
act with basic knowledge and then is able to act and adapt
automatically through learning. A learning agent has mainly
four conceptual components, which are:
1) Learning element: It is responsible for making improvements
by learning from the environment.
2) Critic: The learning element takes feedback from critics
which describes how well the agent is doing with respect to
a fixed performance standard.
3) Performance element: It is responsible for selecting
external action.
4) Problem Generator: This component is responsible for
suggesting actions that will lead to new and informative
experiences.
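The four components can be sketched as a toy class; the bandit-style value updates, learning rate, and action names below are assumptions for illustration, not a standard design:

```python
import random

# Toy sketch of the four learning-agent components. The numeric
# action-value scheme is an invented illustration.
class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned action values

    def performance_element(self):
        # Selects the external action (here: the best-valued one).
        return max(self.values, key=self.values.get)

    def critic(self, action, reward):
        # Feedback: how well the agent did vs. its current standard.
        return reward - self.values[action]

    def learning_element(self, action, feedback):
        # Makes improvements from the critic's feedback.
        self.values[action] += 0.5 * feedback     # 0.5 = assumed rate

    def problem_generator(self):
        # Suggests exploratory actions for new, informative experiences.
        return random.choice(list(self.values))

agent = LearningAgent(["left", "right"])
agent.learning_element("right", agent.critic("right", reward=1.0))
print(agent.performance_element())   # -> right
```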
Types of Agents -6
• Multi-Agent Systems : These agents interact with other
agents to achieve a common goal. They may have to
coordinate their actions and communicate with each other to
achieve their objective.
• A multi-agent system (MAS) is a system composed of multiple
interacting agents that are designed to work together to
achieve a common goal. These agents may be autonomous or
semi-autonomous and are capable of perceiving their
environment, making decisions, and taking action to achieve
the common objective.
• In a homogeneous MAS, all the agents have the same
capabilities, goals, and behaviors.
• In contrast, in a heterogeneous MAS, the agents have
different capabilities, goals, and behaviors.
Types of Agents -7
• Hierarchical Agents are organized into a hierarchy, with
high-level agents overseeing the behavior of lower-level
agents. The high-level agents provide goals and constraints,
while the low-level agents carry out specific tasks.
• These goals and constraints are typically based on the overall
objective of the system. For example, in a manufacturing
system, the high-level agents might set production targets for
the lower-level agents based on customer demand.
• These subtasks may be relatively simple or more complex,
depending on the specific application. For example, in a
transportation system, low-level agents might be responsible
for managing traffic flow at specific intersections.
• Hierarchical agents are useful in complex environments with
many tasks and sub-tasks.
“Thank you for being such an engaged audience during my presentation.”
