Chapter 1 – About Project
Chapter 2 – Coding
2.1 Source Code
Chapter 3 – In Use
3.1 Showing the Work
Chapter 5 – References
CHAPTER I
Technical Details
For this project we have used several current technologies, which are
evaluated in this chapter along with the reasons each one was chosen.
We divide this explanation of the technology according to the
modules/features of the project. But first, the language used.

Language used: Python. (These are only brief points from our side;
Python offers far more than we cover here.)

The project provides four features:
1. Monitor
2. Identify the family member
3. Detect for Noises
4. Visitors in room detection
Monitor Feature:
This feature finds which object has been stolen from the scene
visible to the webcam. It constantly monitors the frames and checks
which object in the frame has been taken away by the thief.
Haar features are similar to convolution kernels; each one is used to detect the
presence of a particular feature in the given image.
To do all this, the OpenCV module for Python has a built-in class
called CascadeClassifier, which we have used to detect faces in the frame.
2 – Using LBPH for face recognition
Now that faces have been detected in the frame, it is time to
identify each face and check whether it is in the dataset we used to
train our LBPH model.
After the model is trained, when we later want to make predictions
the same steps are applied to the new face, its histograms are
compared with the trained model, and in this way the feature works.
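To make the histogram idea concrete, here is a simplified pure-NumPy sketch of the basic 3×3 LBP operator and its histogram descriptor. This is our own illustration, not OpenCV's implementation: OpenCV's LBPH additionally uses circular neighbourhoods and a grid of cells, and the function names here are invented for the example.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern code for each interior pixel.

    Each pixel is compared with its 8 neighbours; every neighbour that is
    >= the centre contributes one bit, giving a code in 0..255.
    """
    c = gray[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """Normalized histogram of LBP codes -- the descriptor LBPH compares."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

Recognition then amounts to comparing the query face's histogram against the stored histograms and picking the closest match.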
3 – Detect for Noises in the frame
This feature finds noise (motion) in the frames. This is something
you would find in most CCTVs, but in this module we will see how it
works.
Put simply, all frames are continuously analyzed and checked for
noise, where noise is checked across consecutive frames. We take the
absolute difference between two frames, analyze the resulting
difference image, and detect contours (the boundaries of the motion).
If there are no boundaries there is no motion; if there are, there is
motion.
As you may know, an image is just an array of integer/float pixel
values, where each value gives the brightness of that pixel.
We take the absolute difference simply because a negative brightness
difference would make no sense.
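A small sketch of this idea (names are illustrative; cv2.absdiff performs the same operation on whole frames, and the threshold value is a tunable assumption):

```python
import numpy as np

def frame_diff(frm1, frm2, thresh=25):
    """Absolute per-pixel difference between two grayscale frames.

    Plain uint8 subtraction would wrap around for negative results,
    which is why the difference is taken in a wider type and then
    converted back -- this mirrors what cv2.absdiff does.
    """
    diff = np.abs(frm1.astype(np.int16) - frm2.astype(np.int16)).astype(np.uint8)
    # pixels that changed by more than `thresh` count as motion
    return diff, diff > thresh
```

Contours are then extracted from the thresholded mask; if any survive an area filter, the frame is flagged as containing motion.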
4 – Visitors in room detection
Waterfall model
The classical waterfall model is the basic software development life
cycle (SDLC) model. It is very simple but idealistic. This model used
to be very popular, but nowadays it is rarely used as-is. It remains
important, however, because all the other SDLC models are based on
the classical waterfall model.
The classical waterfall model divides the life cycle into a set of
phases. It assumes that a phase can be started only after the
previous phase is complete; that is, the output of one phase is the
input to the next. Development can thus be seen as a sequential flow,
like a waterfall, in which the phases do not overlap. The different
sequential phases of the classical waterfall model are: feasibility
study, requirements analysis and specification, design, coding and
unit testing, integration and system testing, and maintenance.
Software Requirements
Hardware Requirements
1. Working PC or laptop
2. Webcam with drivers installed
3. Flashlight/LED if using this at night
Technology used to make this project :
Main.py
import tkinter as tk
import tkinter.font as font
from in_out import in_out
from motion import noise
from rect_noise import rect_noise
from record import record
from PIL import Image, ImageTk
from find_motion import find_motion
from identify import maincall
window = tk.Tk()
window.title("Smart cctv")
window.iconphoto(False, tk.PhotoImage(file='mn.png'))
window.geometry('1080x700')
frame1 = tk.Frame(window)
def load_icon(path, size):
    # Pillow 10 removed Image.ANTIALIAS; Image.LANCZOS is its replacement
    img = Image.open(path).resize(size, Image.LANCZOS)
    return ImageTk.PhotoImage(img)

icon = load_icon('icons/spy.png', (150, 150))
label_icon = tk.Label(frame1, image=icon)
label_icon.grid(row=1, pady=(5, 10), column=2)

btn1_image = load_icon('icons/lamp.png', (50, 50))
btn2_image = load_icon('icons/rectangle-of-cutted-line-geometrical-shape.png', (50, 50))
btn3_image = load_icon('icons/security-camera.png', (50, 50))
btn4_image = load_icon('icons/recording.png', (50, 50))
btn5_image = load_icon('icons/exit.png', (50, 50))
btn6_image = load_icon('icons/incognito.png', (50, 50))
btn7_image = load_icon('icons/recording.png', (50, 50))
btn_font = font.Font(size=25)
btn3 = tk.Button(frame1, text='Noise', height=90, width=180, fg='green', command=noise,
image=btn3_image, compound='left')
btn3['font'] = btn_font
btn3.grid(row=5, pady=(20,10))
frame1.pack()
window.mainloop()
The Monitor feature is divided into two modules:
1. find_motion.py
2. spot_diff.py

find_motion.py
import cv2
from spot_diff import spot_diff
import time

def find_motion():
    motion_detected = False
    is_start_done = False

    cap = cv2.VideoCapture(0)

    # keep the first capture as the "before" frame for spot_diff
    # (stored as the full (ret, frame) tuple that spot_diff expects)
    frame1 = cap.read()
    frm1 = cv2.cvtColor(frame1[1], cv2.COLOR_BGR2GRAY)

    while True:
        _, frm2 = cap.read()
        frm2 = cv2.cvtColor(frm2, cv2.COLOR_BGR2GRAY)

        # absolute difference between consecutive frames,
        # thresholded to keep only the pixels that changed
        diff = cv2.absdiff(frm1, frm2)
        _, thresh = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

        contors, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
        # drop tiny contours caused by sensor noise
        contors = [c for c in contors if cv2.contourArea(c) > 25]

        if len(contors) > 5:
            cv2.putText(thresh, "motion detected", (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, 255)
            motion_detected = True
            is_start_done = False

        elif motion_detected and len(contors) < 3:
            # motion has stopped: wait 4 seconds before comparing frames
            if not is_start_done:
                start = time.time()
                is_start_done = True

            end = time.time()
            print(end - start)

            if (end - start) > 4:
                frame2 = cap.read()
                cap.release()
                cv2.destroyAllWindows()

                x = spot_diff(frame1, frame2)
                if x == 0:
                    print("running again")
                else:
                    print("found motion, sending mail")
                return

        else:
            cv2.putText(thresh, "no motion detected", (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, 255)

        cv2.imshow("winname", thresh)

        _, frm1 = cap.read()
        frm1 = cv2.cvtColor(frm1, cv2.COLOR_BGR2GRAY)

        if cv2.waitKey(1) == 27:
            break
    return
spot_diff.py
import cv2
from skimage.metrics import structural_similarity
from datetime import datetime

def spot_diff(frame1, frame2):
    # both arguments are the (ret, frame) tuples returned by cap.read()
    frame1 = frame1[1]
    frame2 = frame2[1]

    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    g1 = cv2.blur(g1, (2, 2))
    g2 = cv2.blur(g2, (2, 2))

    # structural similarity gives a per-pixel similarity map in [0, 1]
    score, diff = structural_similarity(g1, g2, full=True)
    print("similarity:", score)
    diff = (diff * 255).astype("uint8")

    # low similarity values mark the regions that changed
    _, thresh = cv2.threshold(diff, 100, 255, cv2.THRESH_BINARY_INV)
    contors, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)
    contors = [c for c in contors if cv2.contourArea(c) > 50]

    if len(contors):
        for c in contors:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        print("nothing stolen")
        return 0

    cv2.imshow("diff", thresh)
    cv2.imshow("win1", frame1)
    # colons are not valid in Windows filenames, so use a dash-only stamp
    cv2.imwrite("stolen/" + datetime.now().strftime('%y-%m-%d-%H-%M-%S')
                + ".jpg", frame1)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return 1
identify.py
import cv2
import os
import numpy as np
import tkinter as tk
import tkinter.font as font

def collect_data():
    name = input("Enter name of person : ")
    ids = input("Enter ID: ")
    count = 1

    cap = cv2.VideoCapture(0)
    filename = "haarcascade_frontalface_default.xml"
    cascade = cv2.CascadeClassifier(filename)

    while True:
        _, frm = cap.read()
        gray = cv2.cvtColor(frm, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.4, 1)

        for x, y, w, h in faces:
            cv2.rectangle(frm, (x, y), (x + w, y + h), (0, 255, 0), 2)
            roi = gray[y:y + h, x:x + w]
            cv2.imwrite(f"persons/{name}-{count}-{ids}.jpg", roi)
            count = count + 1
            cv2.putText(frm, f"{count}", (20, 20), cv2.FONT_HERSHEY_PLAIN, 2,
                        (0, 255, 0), 3)
            cv2.imshow("new", roi)

        cv2.imshow("identify", frm)

        # Esc stops collection and trains the model on the saved crops
        if cv2.waitKey(1) == 27:
            cv2.destroyAllWindows()
            cap.release()
            train()
            break

def train():
    recog = cv2.face.LBPHFaceRecognizer_create()
    dataset = 'persons'
    paths = [os.path.join(dataset, im) for im in os.listdir(dataset)]

    faces = []
    ids = []
    labels = []
    for path in paths:
        fname = os.path.basename(path)      # "<name>-<count>-<id>.jpg"
        labels.append(fname.split('-')[0])
        ids.append(int(fname.split('-')[2].split('.')[0]))
        faces.append(cv2.imread(path, 0))

    recog.train(faces, np.array(ids))
    recog.save('model.yml')
    return

def identify():
    cap = cv2.VideoCapture(0)
    filename = "haarcascade_frontalface_default.xml"

    # rebuild the id -> name mapping from the dataset file names
    labelslist = {}
    for fname in os.listdir("persons"):
        labelslist[fname.split('-')[2].split('.')[0]] = fname.split('-')[0]
    print(labelslist)

    recog = cv2.face.LBPHFaceRecognizer_create()
    recog.read('model.yml')
    cascade = cv2.CascadeClassifier(filename)

    while True:
        _, frm = cap.read()
        gray = cv2.cvtColor(frm, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 2)

        for x, y, w, h in faces:
            cv2.rectangle(frm, (x, y), (x + w, y + h), (0, 255, 0), 2)
            roi = gray[y:y + h, x:x + w]
            label = recog.predict(roi)
            name = labelslist.get(str(label[0]), "unknown")
            cv2.putText(frm, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1,
                        (0, 0, 255), 2)

        cv2.imshow("identify", frm)

        if cv2.waitKey(1) == 27:
            cv2.destroyAllWindows()
            cap.release()
            break

def maincall():
    root = tk.Tk()
    root.geometry("480x100")
    root.title("identify")
    btn_font = font.Font(size=25)

    btn1 = tk.Button(root, text="Add Member", command=collect_data,
                     height=2, width=20, fg='green')
    btn1['font'] = btn_font
    btn1.grid(row=0, column=0)

    btn2 = tk.Button(root, text="Identify", command=identify,
                     height=2, width=20, fg='blue')
    btn2['font'] = btn_font
    btn2.grid(row=0, column=1)

    root.mainloop()
    return
in_out.py
import cv2
from datetime import datetime

def in_out():
    cap = cv2.VideoCapture(0)
    right, left = "", ""

    while True:
        _, frame1 = cap.read()
        frame1 = cv2.flip(frame1, 1)
        _, frame2 = cap.read()
        frame2 = cv2.flip(frame2, 1)

        diff = cv2.absdiff(frame2, frame1)
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        diff = cv2.blur(diff, (5, 5))
        _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contr, _ = cv2.findContours(thresh, cv2.RETR_TREE,
                                    cv2.CHAIN_APPROX_SIMPLE)

        x = 300
        if len(contr) > 0:
            max_cnt = max(contr, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(max_cnt)
            cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame1, "MOTION", (10, 80), cv2.FONT_HERSHEY_SIMPLEX,
                        2, (0, 255, 0), 2)

            # remember the side on which the motion first appeared
            if right == "" and left == "":
                if x > 500:
                    right = True
                elif x < 200:
                    left = True
            elif right and x < 200:
                # entered on the right, exited on the left: going in
                print("to left")
                x = 300
                right, left = "", ""
                cv2.imwrite("visitors/in/"
                            + datetime.now().strftime('%y-%m-%d-%H-%M-%S')
                            + ".jpg", frame1)
            elif left and x > 500:
                # entered on the left, exited on the right: going out
                print("to right")
                x = 300
                right, left = "", ""
                cv2.imwrite("visitors/out/"
                            + datetime.now().strftime('%y-%m-%d-%H-%M-%S')
                            + ".jpg", frame1)

        cv2.imshow("in_out", frame1)

        k = cv2.waitKey(1)
        if k == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
motion.py
import cv2

def noise():
    cap = cv2.VideoCapture(0)

    while True:
        _, frame1 = cap.read()
        _, frame2 = cap.read()

        diff = cv2.absdiff(frame2, frame1)
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        diff = cv2.blur(diff, (5, 5))
        _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contr, _ = cv2.findContours(thresh, cv2.RETR_TREE,
                                    cv2.CHAIN_APPROX_SIMPLE)

        if len(contr) > 0:
            # draw a box around the largest moving region
            max_cnt = max(contr, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(max_cnt)
            cv2.rectangle(frame1, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame1, "MOTION", (10, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2)
        else:
            cv2.putText(frame1, "NO-MOTION", (10, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 2)

        cv2.imshow("motion", frame1)

        if cv2.waitKey(1) == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
rect_noise.py – finding noises in a user-selected rectangle.
import cv2

donel = False
doner = False
x1, y1, x2, y2 = 0, 0, 0, 0

def select(event, x, y, flag, param):
    # mouse callback: press sets the top-left corner, release the bottom-right
    global x1, y1, x2, y2, donel, doner
    if event == cv2.EVENT_LBUTTONDOWN:
        x1, y1 = x, y
        donel = True
    elif event == cv2.EVENT_LBUTTONUP:
        x2, y2 = x, y
        doner = True

def rect_noise():
    cap = cv2.VideoCapture(0)
    cv2.namedWindow("select_region")
    cv2.setMouseCallback("select_region", select)

    while True:
        _, frame = cap.read()
        cv2.imshow("select_region", frame)
        if cv2.waitKey(1) == 27 or (donel and doner):
            cv2.destroyWindow("select_region")
            break

    while True:
        _, frame1 = cap.read()
        _, frame2 = cap.read()

        # compare only the selected rectangle of the two frames
        frame1only = frame1[y1:y2, x1:x2]
        frame2only = frame2[y1:y2, x1:x2]

        diff = cv2.absdiff(frame2only, frame1only)
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        diff = cv2.blur(diff, (5, 5))
        _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contr, _ = cv2.findContours(thresh, cv2.RETR_TREE,
                                    cv2.CHAIN_APPROX_SIMPLE)

        if len(contr) > 0:
            max_cnt = max(contr, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(max_cnt)
            # offset the box back into full-frame coordinates
            cv2.rectangle(frame1, (x + x1, y + y1),
                          (x + w + x1, y + h + y1), (0, 255, 0), 2)
            cv2.putText(frame1, "MOTION", (10, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2)
        else:
            cv2.putText(frame1, "NO-MOTION", (10, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 2)

        cv2.imshow("rect-motion", frame1)

        if cv2.waitKey(1) == 27:
            cap.release()
            cv2.destroyAllWindows()
            break
At last, the most essential feature: recording.

record.py
import cv2
from datetime import datetime

def record():
    cap = cv2.VideoCapture(0)
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter(
        f'recordings/{datetime.now().strftime("%H-%M-%S")}.avi',
        fourcc, 20.0, (640, 480))

    while True:
        _, frame = cap.read()
        # stamp the current date and time onto the frame
        cv2.putText(frame, f'{datetime.now().strftime("%d-%m-%Y %H:%M:%S")}',
                    (50, 50), cv2.FONT_HERSHEY_COMPLEX,
                    0.6, (255, 255, 255), 2)
        out.write(frame)
        cv2.imshow("recording", frame)

        if cv2.waitKey(1) == 27:
            out.release()
            cap.release()
            cv2.destroyAllWindows()
            break
Chapter III
In Use
Feature 1 – Monitor
The screenshots below show the use of the first feature; you can
consider them the output of feature 1.