
FA17-BCS-027 Lab


Assignment: 02

Name:
MUHAMMAD TALHA BIN TUFAIL
Reg No:
FA17-BCS-027
Submitted To:
SIR WAQAR
Dated:
23 NOVEMBER, 2020

Comsats University Islamabad,


Sahiwal Campus
Answer the following short questions:
1) What are informed and uninformed searches? Also explain their types.
Informed Searches:
A search using domain-specific knowledge. Informed Search algorithms have information on the
goal state which helps in more efficient searching. This information is obtained by a function that
estimates how close a state is to the goal state. Types of informed searches:

• Best First Search Algorithm (Greedy search)
• A* Search Algorithm
I. Best-first Search Algorithm (Greedy Search):

Greedy best-first search always selects the path which appears best at that moment. It is a
combination of depth-first search and breadth-first search algorithms, and it uses a heuristic
function to guide the search. Best-first search allows us to take the advantages of both
algorithms. It evaluates nodes using the heuristic function only:

f(n) = h(n)
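
As a minimal sketch (not part of the original assignment), greedy best-first search can be written with a priority queue ordered only by the heuristic h(n); the small graph and the heuristic values below are assumed purely for illustration:

import heapq

def greedy_best_first(graph, h, start, goal):
    # frontier ordered only by the heuristic value h(n)
    frontier = [(h[start], start)]
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour))
    return False

# assumed example graph and heuristic values (illustrative only)
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(greedy_best_first(graph, h, 'A', 'D'))   # True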

II. A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic function
h(n) and the cost g(n) to reach node n from the start state. It combines the features of UCS and
greedy best-first search, so we can solve problems efficiently using the A* search algorithm:

f(n) = g(n) + h(n)

Uninformed Searches:

Uninformed search is a class of general-purpose search algorithms which operate in a brute-force
way. Uninformed search algorithms do not have any additional information about the state or
search space other than how to traverse the tree, so it is also called blind search. Types of
uninformed searches:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

• Breadth-first Search:
Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to the nodes of the next level (a small sketch of
BFS and DFS is given after this list).

• Depth-first Search:
Depth-first search is a recursive algorithm for traversing a tree or graph data structure. It
is called depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path. DFS uses a stack data structure
for its implementation.

• Depth-limited Search:
A depth-limited search algorithm is like depth-first search with a predetermined limit.
Depth-limited search can solve the drawback of the infinite path in depth-first search.

• Iterative deepening depth-first search:
The iterative deepening algorithm is a combination of DFS and BFS algorithms. This
search algorithm finds out the best depth limit and does it by gradually increasing the
limit until a goal is found.

• Bidirectional Search:
The bidirectional search algorithm runs two simultaneous searches, one from the initial state
called forward search and the other from the goal node called backward search, to find the
goal node.

• Uniform cost search:
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The
primary goal of uniform-cost search is to find a path to the goal node which has the
lowest cost.
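
As a quick illustration of the first two strategies, here is a minimal sketch of breadth-first and depth-first traversal; the small adjacency-list graph is an assumed example, not taken from the assignment:

from collections import deque

# assumed example graph as an adjacency list (illustrative only)
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def bfs(start):
    # FIFO queue: expand all nodes of the current level first
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

def dfs(start):
    # LIFO stack: follow one path to its greatest depth first
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # push neighbours in reverse so the first neighbour is expanded first
        for nxt in reversed(graph[node]):
            stack.append(nxt)
    return order

print("BFS order:", bfs(0))   # [0, 1, 2, 3, 4, 5]
print("DFS order:", dfs(0))   # [0, 1, 3, 5, 2, 4]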

(2) What is Uniform cost search?

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This
algorithm comes into play when a different cost is available for each edge. The primary goal of
uniform-cost search is to find a path to the goal node which has the lowest cost. Uniform-cost
search expands nodes according to their path costs from the root node. It can be used to solve
any graph or tree where the optimal cost is in demand. A uniform-cost search algorithm is
implemented using a priority queue.

Advantages:

• Uniform cost search is optimal because at every state the path with the least cost is
chosen.

Disadvantages:

• It does not care about the number of steps involved in the search and is only concerned
with the path cost, due to which this algorithm may get stuck in an infinite loop.

Example:
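
A minimal uniform-cost search sketch using a priority queue ordered by path cost g(n); the weighted graph below is an assumed example for illustration only:

import heapq

# assumed example weighted graph: node -> list of (neighbour, edge cost)
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 2)],
    'G': [],
}

def uniform_cost_search(start, goal):
    # priority queue ordered by the path cost g(n) from the start node
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path   # the first goal popped has the lowest cost
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph[node]:
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbour, path + [neighbour]))
    return None

print(uniform_cost_search('S', 'G'))   # (5, ['S', 'A', 'B', 'G'])
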
(3) What is the Bidirectional search algorithm?

The bidirectional search algorithm runs two simultaneous searches, one from the initial state
called forward search and the other from the goal node called backward search, to find the goal
node. Bidirectional search replaces one single search graph with two small subgraphs, in which
one starts the search from the initial vertex and the other starts from the goal vertex. The
search stops when these two graphs intersect each other.

Advantages:

• Bidirectional search is fast.
• Bidirectional search requires less memory.

Disadvantages:

• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.

Example:
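
A minimal bidirectional search sketch in which two breadth-first frontiers, one from the start and one from the goal, are expanded alternately until they intersect; the undirected graph below is an assumed example:

# assumed undirected example graph (adjacency list, illustrative only)
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}

def bidirectional_search(start, goal):
    if start == goal:
        return True
    # one frontier grows forward from the start, the other backward from the goal
    front_f, front_b = {start}, {goal}
    visited_f, visited_b = {start}, {goal}
    while front_f and front_b:
        # expand the current forward frontier by one level
        next_f = set()
        for node in front_f:
            for nxt in graph[node]:
                if nxt in visited_b:   # the two searches intersect
                    return True
                if nxt not in visited_f:
                    visited_f.add(nxt)
                    next_f.add(nxt)
        front_f = next_f
        # swap roles so the other frontier is expanded in the next round
        front_f, front_b = front_b, front_f
        visited_f, visited_b = visited_b, visited_f
    return False

print(bidirectional_search(0, 5))   # True
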
(4) Difference between graph and tree traversal?

BASIS FOR COMPARISON | TREE | GRAPH
Path | Only one path between two vertices. | More than one path is allowed.
Root node | It has exactly one root node. | A graph doesn't have a root node.
Loops | No loops are permitted. | A graph can have loops.
Complexity | Less complex. | More complex comparatively.
Traversal techniques | Pre-order, In-order and Post-order. | Breadth-first search and Depth-first search.
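
The difference is easy to see in code: tree traversal (for example, in-order) needs no visited set, while graph traversal must track visited vertices because loops and multiple paths are allowed. The tiny tree and graph below are assumed purely for illustration:

# in-order traversal of a small binary tree: no visited set is needed
tree = {'B': ('A', 'C'), 'A': (None, None), 'C': (None, None)}   # node -> (left, right)

def in_order(node, out):
    if node is None:
        return out
    left, right = tree[node]
    in_order(left, out)
    out.append(node)
    in_order(right, out)
    return out

# DFS on a graph containing a loop: a visited set prevents infinite recursion
graph = {'A': ['B'], 'B': ['C'], 'C': ['A']}

def graph_dfs(node, visited, out):
    if node in visited:
        return out
    visited.add(node)
    out.append(node)
    for nxt in graph[node]:
        graph_dfs(nxt, visited, out)
    return out

print(in_order('B', []))           # ['A', 'B', 'C']
print(graph_dfs('A', set(), []))   # ['A', 'B', 'C']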
(5) What is heuristic search?

Heuristic search refers to a search strategy that attempts to optimize a problem by iteratively
improving the solution based on a given heuristic function or a cost measure. A heuristic
search method does not always guarantee to find an optimal or the best solution but may
instead find a good or acceptable solution within a reasonable amount of time and memory
space. Commonly used heuristic search methods include the hill climbing method, BFS, and the
A* algorithm.
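
For example, a commonly used heuristic on a grid is the Manhattan distance from the current state to the goal; the small sketch below is an assumed illustration of such a heuristic function:

def manhattan_distance(state, goal):
    # estimated cost h(n): number of grid moves needed, ignoring obstacles
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

print(manhattan_distance((1, 2), (4, 6)))   # h(n) = 7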
(6) What are adversarial searches?

Adversarial searches are those in which an opponent changes the state of the problem in a way
that can degrade the performance of the other player. Adversarial search is a search where we
examine the problems which arise when we try to plan ahead in a world where other agents are
planning against us.

Types of adversarial searches are:

• Min-Max
• Alpha-beta pruning

Write the code of the searches mentioned below, choosing any random graph from
internet/book examples. Also write a short introduction of all five searches
with a screenshot of its output. Make all the code of the searches
user-dependent.

Depth-Limited Search:

Definition:

A depth-limited search algorithm is like depth-first search with a predetermined limit. Depth-
limited search can solve the drawback of the infinite path in depth-first search. In this
algorithm, the node at the depth limit is treated as if it has no further successor nodes.

Code:

from collections import defaultdict

# This class represents a directed graph using
# adjacency list representation
class Graph:

    # Constructor
    def __init__(self, vertices):
        self.ver = vertices
        self.graph = defaultdict(list)

    # Add a directed edge from u to v
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # Depth-limited search from source towards target
    def DLS(self, source, target, depth):
        if source == target:
            return True
        if depth <= 0:
            return False
        # recur for all vertices adjacent to the source
        # with a depth limit reduced by one
        for i in self.graph[source]:
            if self.DLS(i, target, depth - 1):
                return True
        return False

total_vertices = int(input("Enter the total vertices value : "))
total_edges = int(input("Enter the total number of edges : "))
g = Graph(total_vertices)

for i in range(total_edges):
    num1 = int(input("Enter the first vertex of the edge : "))
    num2 = int(input("Enter the second vertex of the edge : "))
    g.addEdge(num1, num2)

source = int(input("Enter the source value : "))
target = int(input("Enter the target value : "))
depth = int(input("Enter the depth value : "))

if g.DLS(source, target, depth):
    print("The target is reachable")
else:
    print("The target is not reachable within the given depth limit")

Output:

Iterative deepening search:

Definition:

The iterative deepening algorithm is a combination of DFS and BFS algorithms. This search
algorithm finds out the best depth limit and does it by gradually increasing the limit until a goal
is found. This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.

Code:

from collections import defaultdict

# This class represents a directed graph using
# adjacency list representation
class Graph:

    def __init__(self, vertices):
        self.ver = vertices
        self.graph = defaultdict(list)

    # Add a directed edge from u to v
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # Depth-limited search used as a subroutine of IDDFS
    def DLS(self, source, target, depth):
        if source == target:
            return True
        if depth <= 0:
            return False
        for i in self.graph[source]:
            if self.DLS(i, target, depth - 1):
                return True
        return False

    # Iterative deepening: repeat DLS with an increasing depth limit
    def IDDFS(self, source, target, depth):
        for i in range(depth + 1):
            if self.DLS(source, target, i):
                return True
        return False

total_vertices = int(input("Enter the total vertices value : "))
total_edges = int(input("Enter the total number of edges : "))
g = Graph(total_vertices)

for i in range(total_edges):
    num1 = int(input("Enter the first vertex of the edge : "))
    num2 = int(input("Enter the second vertex of the edge : "))
    g.addEdge(num1, num2)

source = int(input("Enter the source value : "))
target = int(input("Enter the target value : "))
depth = int(input("Enter the depth value : "))

if g.IDDFS(source, target, depth):
    print("The target is reachable with the given depth limit :", depth)
else:
    # let the user try larger depth limits,
    # at most one attempt per vertex
    found = False
    for i in range(total_vertices):
        depth = int(input("Enter the depth value again : "))
        if g.IDDFS(source, target, depth):
            print("The target is reachable :", depth)
            found = True
            break
    if not found:
        print("The target is not reachable")
Output:

A star Search:

Definition

A* search is the most commonly known form of best-first search. It uses the heuristic function
h(n) and the cost g(n) to reach node n from the start state. It combines the features of UCS and
greedy best-first search, by which it solves the problem efficiently. The A* search algorithm
finds the shortest path through the search space using the heuristic function. This search
algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm
is like UCS except that it uses g(n) + h(n) instead of g(n).

Code:

class AStarGraph(object):

    def __init__(self):
        # barrier squares entered by the user (very high movement cost)
        self.barriers = []
        t = int(input("Enter total number of barriers : "))
        for i in range(t):
            x = int(input("Enter the row value : "))
            y = int(input("Enter the column value : "))
            self.barriers.append((x, y))

    def heuristic(self, start, goal):
        # diagonal-distance heuristic on the grid
        D = 1
        D2 = 1
        dx = abs(start[0] - goal[0])
        dy = abs(start[1] - goal[1])
        return D * (dx + dy) + (D2 - 2 * D) * min(dx, dy)

    def get_vertex_neighbours(self, pos):
        n = []
        # moves allowed: like a chess king (8 directions) on an 8x8 board
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (-1, 1), (1, -1), (-1, -1)]:
            x2 = pos[0] + dx
            y2 = pos[1] + dy
            if x2 < 0 or x2 > 7 or y2 < 0 or y2 > 7:
                continue
            n.append((x2, y2))
        return n

    def move_cost(self, a, b):
        if b in self.barriers:
            return 100  # extremely high cost to enter barrier squares
        return 1  # normal movement cost

def AStarSearch(start, end, graph):
    G = {}  # actual movement cost to each position from the start position
    F = {}  # estimated movement cost of start to end going via this position

    # initialize starting values
    G[start] = 0
    F[start] = graph.heuristic(start, end)

    closedVertices = set()
    openVertices = set([start])
    cameFrom = {}

    while len(openVertices) > 0:
        # get the vertex in the open list with the lowest F score
        current = None
        currentFscore = None
        for pos in openVertices:
            if current is None or F[pos] < currentFscore:
                currentFscore = F[pos]
                current = pos

        # check if we have reached the goal
        if current == end:
            # retrace our route backward
            path = [current]
            while current in cameFrom:
                current = cameFrom[current]
                path.append(current)
            path.reverse()
            return path, F[end]  # done!

        # mark the current vertex as closed
        openVertices.remove(current)
        closedVertices.add(current)

        # update scores for vertices near the current position
        for neighbour in graph.get_vertex_neighbours(current):
            if neighbour in closedVertices:
                continue  # we have already processed this node exhaustively
            candidateG = G[current] + graph.move_cost(current, neighbour)

            if neighbour not in openVertices:
                openVertices.add(neighbour)  # discovered a new vertex
            elif candidateG >= G[neighbour]:
                continue  # this G score is worse than previously found

            # adopt this G score
            cameFrom[neighbour] = current
            G[neighbour] = candidateG
            H = graph.heuristic(neighbour, end)
            F[neighbour] = G[neighbour] + H

    raise RuntimeError("A* failed to find a solution")

if __name__ == "__main__":
    graph = AStarGraph()
    s1 = int(input("Enter the row number for start : "))
    s2 = int(input("Enter the column number for start : "))
    g1 = int(input("Enter the row number for goal : "))
    g2 = int(input("Enter the column number for goal : "))
    result, cost = AStarSearch((s1, s2), (g1, g2), graph)
    print("route", result)
    print("cost", cost)

Output:
Genetic Algorithm:

Definition

A genetic algorithm is a heuristic search method used in artificial intelligence and computing. It
is used for finding optimized solutions to search problems based on the theory of natural
selection and evolutionary biology. Genetic algorithms are excellent for searching through large
and complex data sets.

Code:

# Python3 program to create target string, starting from
# random string using Genetic Algorithm

import random

# Number of individuals in each generation
POPULATION_SIZE = int(input("Enter the total population size : "))

# Valid genes
GENES = '''abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP
QRSTUVWXYZ 1234567890, .-;:_!"#%&/()=?@${[]}'''

# Target string to be generated
TARGET = input("Enter the target string : ")

class Individual(object):
    '''
    Class representing an individual in the population
    '''

    def __init__(self, chromosome):
        self.chromosome = chromosome
        self.fitness = self.cal_fitness()

    @classmethod
    def mutated_genes(self):
        '''
        Create a random gene for mutation
        '''
        global GENES
        gene = random.choice(GENES)
        return gene

    @classmethod
    def create_gnome(self):
        '''
        Create a chromosome, i.e. a string of genes
        '''
        global TARGET
        gnome_len = len(TARGET)
        return [self.mutated_genes() for _ in range(gnome_len)]

    def mate(self, par2):
        '''
        Perform mating and produce new offspring
        '''
        # chromosome for offspring
        child_chromosome = []
        for gp1, gp2 in zip(self.chromosome, par2.chromosome):

            # random probability
            prob = random.random()

            # if prob is less than 0.45, insert gene from parent 1
            if prob < 0.45:
                child_chromosome.append(gp1)

            # if prob is between 0.45 and 0.90, insert gene from parent 2
            elif prob < 0.90:
                child_chromosome.append(gp2)

            # otherwise insert a random gene (mutate) to maintain diversity
            else:
                child_chromosome.append(self.mutated_genes())

        # create a new Individual (offspring) using the generated chromosome
        return Individual(child_chromosome)

    def cal_fitness(self):
        '''
        Calculate the fitness score: the number of characters
        in the string which differ from the target string
        '''
        global TARGET
        fitness = 0
        for gs, gt in zip(self.chromosome, TARGET):
            if gs != gt:
                fitness += 1
        return fitness

# Driver code
def main():
    global POPULATION_SIZE

    # current generation
    generation = 1

    found = False
    population = []

    # create initial population
    for _ in range(POPULATION_SIZE):
        gnome = Individual.create_gnome()
        population.append(Individual(gnome))

    while not found:

        # sort the population in increasing order of fitness score
        population = sorted(population, key=lambda x: x.fitness)

        # if the fittest individual has fitness score 0,
        # we have reached the target and can break the loop
        if population[0].fitness <= 0:
            found = True
            break

        # otherwise generate new offspring for the new generation
        new_generation = []

        # perform elitism: 10% of the fittest population
        # goes to the next generation
        s = int((10 * POPULATION_SIZE) / 100)
        new_generation.extend(population[:s])

        # from 50% of the fittest population, individuals
        # will mate to produce offspring
        s = int((90 * POPULATION_SIZE) / 100)
        for _ in range(s):
            parent1 = random.choice(population[:50])
            parent2 = random.choice(population[:50])
            child = parent1.mate(parent2)
            new_generation.append(child)

        population = new_generation

        print("Generation: {}\tString: {}\tFitness: {}".format(
            generation,
            "".join(population[0].chromosome),
            population[0].fitness))

        generation += 1

    print("Generation: {}\tString: {}\tFitness: {}".format(
        generation,
        "".join(population[0].chromosome),
        population[0].fitness))

if __name__ == '__main__':
    main()
Output:

Min-Max:

The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making
and game theory. It provides an optimal move for the player, assuming that the opponent is also
playing optimally.

Code:

# A simple Python3 program to find the
# maximum score that the maximizing player can get
import math

def minimax(curDepth, nodeIndex, maxTurn, scores, targetDepth):
    # base case: targetDepth reached, return the leaf score
    if curDepth == targetDepth:
        return scores[nodeIndex]

    if maxTurn:
        return max(minimax(curDepth + 1, nodeIndex * 2,
                           False, scores, targetDepth),
                   minimax(curDepth + 1, nodeIndex * 2 + 1,
                           False, scores, targetDepth))
    else:
        return min(minimax(curDepth + 1, nodeIndex * 2,
                           True, scores, targetDepth),
                   minimax(curDepth + 1, nodeIndex * 2 + 1,
                           True, scores, targetDepth))

# Driver code
scores = []
x = int(input("Enter the total number of leafs : "))
for i in range(x):
    y = int(input("Enter leaf value : "))
    scores.append(y)

curDepth = int(input("Enter the depth value : "))
nodeIndex = int(input("Enter the node value : "))
maxt = True
# depth of the game tree (the number of leaves must be a power of 2)
treeDepth = int(math.log2(len(scores)))

print("The optimal value is :", minimax(curDepth, nodeIndex, maxt,
                                        scores, treeDepth))

Output :
Alpha-beta pruning:

Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the mini-max algorithm, also called the Alpha-Beta algorithm. Alpha-beta
pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves
but also entire sub-trees.

Code:

# Python3 program to demonstrate
# working of Alpha-Beta Pruning
import math

# Initial values of Alpha and Beta
MAX, MIN = 1000, -1000

# Returns optimal value for current player
# (Initially called for root and maximizer)
def minimax(depth, nodeIndex, maximizingPlayer,
            values, alpha, beta, targetDepth):
    # Terminating condition, i.e. a leaf node is reached
    if depth == targetDepth:
        return values[nodeIndex]

    if maximizingPlayer:
        best = MIN

        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          False, values, alpha, beta, targetDepth)
            best = max(best, val)
            alpha = max(alpha, best)

            # Alpha Beta Pruning
            if beta <= alpha:
                break

        return best

    else:
        best = MAX

        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i,
                          True, values, alpha, beta, targetDepth)
            best = min(best, val)
            beta = min(beta, best)

            # Alpha Beta Pruning
            if beta <= alpha:
                break

        return best

# Driver Code
if __name__ == "__main__":

    values = []
    x = int(input("Enter the total number of leafs : "))
    for i in range(x):
        y = int(input("Enter leaf value : "))
        values.append(y)

    depth = int(input("Enter the depth value : "))
    nodeIndex = int(input("Enter the node value : "))
    maxt = True
    # depth of the game tree (the number of leaves must be a power of 2)
    targetDepth = int(math.log2(len(values)))

    print("The optimal value is :", minimax(depth, nodeIndex, maxt,
                                            values, MIN, MAX, targetDepth))

Output:

Hill climbing Algorithm:

The hill climbing algorithm is a local search algorithm which continuously moves in the direction
of increasing elevation/value to find the peak of the mountain or the best solution to the
problem. It terminates when it reaches a peak value where no neighbor has a higher value.

Code:

from numpy import asarray
from numpy import arange
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed
from matplotlib import pyplot

# objective function
def objective(x):
    return x[0]**2.0

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    solutions = list()
    solutions.append(solution)
    for i in range(n_iterations):
        # take a step
        candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # keep track of solutions
            solutions.append(solution)
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval, solutions]

# seed the pseudorandom number generator
seed(5)
# define range for input
bounds = asarray([[-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.1
# perform the hill climbing search
best, score, solutions = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
# sample input range uniformly at 0.1 increments
inputs = arange(bounds[0, 0], bounds[0, 1], 0.1)
# create a line plot of input vs result
pyplot.plot(inputs, [objective([x]) for x in inputs], '--')
# draw a vertical line at the optimal input
pyplot.axvline(x=0.0, ls='--', color='red')
# plot the sample as black circles
pyplot.plot(solutions, [objective(x) for x in solutions], 'o', color='black')
pyplot.show()

Output:
