Abstract:

This research focuses on the development and evaluation of a vehicle speed detection
system utilizing image processing techniques. The system comprises six primary
components: Image Acquisition, Image Enhancement, Image Segmentation, Image Analysis,
Speed Detection, and Report generation. Each component is designed to contribute to the
accurate detection and calculation of vehicle speed from video scenes. The study assesses
the system's usability, performance, and effectiveness through empirical experimentation.
Results indicate that the system achieves optimal performance at a resolution of 320×240,
with a detection time of approximately 70 seconds per video scene. Furthermore, the
research explores the implications of various parameters on system performance, providing
insights into optimization strategies. The findings of this study contribute to the
advancement of vehicle speed detection technologies by offering a comprehensive
understanding of system capabilities and limitations. Additionally, the research provides
valuable guidance for practitioners and researchers involved in the development and
implementation of similar systems. Future work may focus on further enhancing system
efficiency, exploring alternative image processing techniques, and extending the
applicability of the system to diverse real-world scenarios. Overall, this research serves as a
foundation for the continued advancement of vehicle speed detection systems, facilitating
safer and more efficient transportation systems.
Keywords:
vehicle speed detection, image processing, system evaluation, empirical experimentation,
optimization strategies, system performance, resolution, detection time, usability,
effectiveness, technological advancement, transportation safety.

References

1. André Ebner and Hermann Rohling, "A self-organized radio network for automotive applications", Proceedings of the 8th World Congress on Intelligent Transportation Systems (ITS 2001), 2001.
2. Sherali Zeadally, Ray Hunt, Yuh-Shyan Chen, Angela Irwin and Aamir Hassan, "Vehicular ad hoc networks (VANETs): status, results and challenges", Telecommunication Systems, vol. 50, no. 4, pp. 217-241, 2012.
3. Yi Yang and Rajive Bagrodia, "Evaluation of VANET-based advanced intelligent transportation systems", Proceedings of the Sixth ACM International Workshop on VehiculAr InterNETworking, pp. 3-12, 2009.
4. Rajendra Prasad Nayak, "High Speed Vehicle Detection in Vehicular Ad-hoc Network", NIT Rourkela, 2013.
5. Tarik Taleb, Ehssan Sakhaee, Abbas Jamalipour, Kazuo Hashimoto, Nei Kato and Yoshiaki Nemoto, "A stable routing protocol to support ITS services in VANET networks", IEEE Transactions on Vehicular Technology, vol. 56, no. 6, pp. 3337-3347, 2007.
6. P. K. Bhaskar and S. Yong, "Image processing based vehicle detection and tracking method", 2014 International Conference on Computer and Information Sciences (ICCOINS), pp. 1-5, 2014.
7. Sourav Kumar Bhoi and Pabitra Mohan Khilar, "RVCloud: a routing protocol for vehicular ad hoc network in city environment using cloud computing", Wireless Networks, vol. 22, no. 4, pp. 1329-1341, 2016.
8. Md Whaiduzzaman, Mehdi Sookhak, Abdullah Gani and Rajkumar Buyya, "A survey on vehicular cloud computing", Journal of Network and Computer Applications, vol. 40, pp. 325-344, April 2014.
9. Sourav Kumar Bhoi, Pabitra Mohan Khilar and Munesh Singh, "A path selection based routing protocol for urban vehicular ad hoc network (UVAN) environment", Wireless Networks, vol. 23, no. 2, pp. 311-322, 2017.
10. Karl H. Zimmerman and James A. Bonneson, "In-service evaluation of a detection-control system for high-speed signalized intersections", Technical report, 2005.
11. Quoc Chuyen Doan, Tahar Berradia and Joseph Mouzna, "Vehicle speed and volume measurement using vehicle-to-infrastructure communication", WSEAS Transactions on Information Science and Applications, no. 9, 2009.
12. Nehal Kassem, Ahmed E. Kosba and Moustafa Youssef, "RF-based vehicle detection and speed estimation", 2012 IEEE 75th Vehicular Technology Conference (VTC Spring), pp. 1-5, 2012.
13. Axel Wegener, Michał Piórkowski, Maxim Raya, Horst Hellbrück, Stefan Fischer and Jean-Pierre Hubaux, "TraCI: an interface for coupling road traffic and network simulators", Proceedings of the 11th Communications and Networking Simulation Symposium, pp. 155-163, 2008.
14. David Eckhoff and Christoph Sommer, "A multi-channel IEEE 1609.4 and 802.11p EDCA model for the Veins framework", Proceedings of the 5th ACM/ICST International Conference on Simulation Tools and Techniques for Communications, Networks and Systems: 5th ACM/ICST International Workshop on OMNeT++, 19-23 March 2012.
15. Christoph Sommer, Reinhard German and Falko Dressler, "Bidirectionally coupled network and road traffic simulation for improved IVC analysis", IEEE Transactions on Mobile Computing, vol. 10, no. 1, pp. 3-15, 2011.
16. S. K. Bhoi, R. P. Nayak, D. Dash and J. P. Rout, "RRP: A robust routing protocol for Vehicular Ad Hoc Network against hole generation attack", 2013 International Conference on Communication and Signal Processing, pp. 1175-1179, 2013.
17. E. Lee, E. Lee, M. Gerla and S. Y. Oh, "Vehicular cloud networking: architecture and design principles", IEEE Communications Magazine, vol. 52, no. 2, pp. 148-155, February 2014.

INTRODUCTION

The alarming toll of fatalities resulting from vehicle accidents underscores the urgent need
for comprehensive measures to enhance road safety on a global scale. Among the myriad
factors contributing to these tragic incidents, high-speed vehicles stand out as a significant
and recurring concern [1]. In response, governmental bodies, academic institutions, and
automotive industries worldwide have embarked on ambitious research and development
endeavors aimed at mitigating accident risks and safeguarding the lives of passengers and
drivers [2].
These collaborative efforts have given rise to a plethora of innovative projects, spanning
regions such as Japan, the United States, and the European Union. Projects like DEMO,
ASV1, ASV2, JARI, IVI, WAVE, VSC, FleetNet, Carlink, C2C-CC, and PReVENT represent
concerted efforts to advance safety technologies and services within the automotive sector,
focusing on areas such as accident prevention, vehicle communication, and infrastructure
development [3-5].
Central to these initiatives is the deployment of Intelligent Transportation Systems (ITS)
within Vehicular Ad Hoc Networks (VANETs), which leverage advanced communication
technologies to enable vehicles to interact seamlessly with one another and with roadside
infrastructure in real-time [6]. Within this dynamic framework, Roadside Units (RSUs)
emerge as crucial communication nodes, facilitating the exchange of critical information
between vehicles and the broader network infrastructure [7].
However, in sparse RSU-based VANETs, characterized by non-overlapping coverage areas,
the detection of high-speed vehicles presents a formidable challenge. Addressing this
challenge head-on, this paper introduces the Position-Based High-Speed Vehicle Detection
Algorithm (PHVA), specifically tailored for such network environments [8].
Within the VANET ecosystem, every vehicle is equipped with a sophisticated array of
components, including Trusted Platform Modules (TPMs), On-Board Units (OBUs), Global
Positioning Systems (GPS), and an array of sensors. These components collectively enable
secure communication, environmental monitoring, and real-time status reporting, forming
the foundation of intelligent vehicle systems [9].
This paper serves as a focused exploration of the implementation and evaluation of the
PHVA algorithm for high-speed vehicle detection within VANETs. Leveraging information
from adjacent RSUs, the algorithm dynamically calculates vehicle speed, with the Central
Server (CS) tasked with identifying speed violations and communicating with Certification
Authorities (CAs) as necessary [10].
To comprehensively evaluate the efficacy of the PHVA algorithm, extensive simulations are
conducted using the Vehicles in Network Simulation (Veins) hybrid framework. This
framework seamlessly integrates OMNeT++ for network setup and Simulation of Urban
Mobility (SUMO) for realistic traffic management, providing a robust platform for algorithm
evaluation [11].
The subsequent sections of this paper are structured to provide a thorough examination of
the proposed PHVA algorithm, including a review of related work in vehicle detection
methods, a detailed description of the network model, and an in-depth analysis of
simulation results. Finally, insights into future research directions are provided to inform
ongoing efforts to enhance vehicle safety within VANET environments [12-20].

Literature Review

1. Traditional Approaches: Early vehicle detection methods relied on handcrafted features and traditional machine learning algorithms. Techniques such as Haar cascades and Histogram of Oriented Gradients (HOG) were commonly used; a minimal sketch in this style appears after this list. (Reference: Traditional Vehicle Detection Techniques, https://www.researchgate.net/publication/267272082_Vehicle_Detection_and_Tracking_Techniques_A_Concise_Review)
2. Deep Learning-Based Approaches: The advent of deep learning has revolutionized vehicle detection, with Convolutional Neural Networks (CNNs) becoming the backbone of many modern systems. Models like Faster R-CNN, YOLO (You Only Look Once), and SSD (Single Shot MultiBox Detector) have achieved remarkable performance. (Reference: Deep Learning for Vehicle Detection, https://www.hindawi.com/journals/cin/2022/2019257/)
3. Context-Aware Approaches: Context-aware vehicle detection methods leverage additional information such as road topology, semantic context, and temporal dynamics to improve accuracy and robustness. These approaches consider the surrounding context to refine detection results. (Reference: Context-Aware Vehicle Detection, https://ieeexplore.ieee.org/document/9811048)
4. Multi-Sensor Fusion: Integrating data from multiple sensors such as cameras, LiDAR, and radar has gained traction in vehicle detection. Fusion techniques combine information from diverse sensors to enhance detection accuracy and reliability, especially in challenging environments. (Reference: Multi-Sensor Fusion for Vehicle Detection, https://www.semanticscholar.org/paper/Multi-Sensor-Fusion-Technology-for-3DObject-in-A-Wang-Li/cb8059007e13467ed90f9c512c9c0cda86a8928b)
5. Real-Time Implementation: Real-time vehicle detection systems are essential for applications like advanced driver assistance systems (ADAS) and smart traffic management. Efficient algorithms and hardware acceleration techniques are employed to achieve real-time performance. (Reference: Real-Time Vehicle Detection, https://iopscience.iop.org/article/10.1088/1742-6596/887/1/012068)
6. Challenges and Future Directions: Despite advancements, challenges such as occlusions, scale variations, and adverse weather conditions persist in vehicle detection. Future research directions include exploring novel sensor modalities, improving robustness, and addressing scalability issues. (Reference: Challenges and Future Directions in Vehicle Detection, https://etrr.springeropen.com/articles/10.1186/s12544-019-0390-4)
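To make the classical pipeline of item 1 concrete, the sketch below runs a Haar-cascade detector over video frames with OpenCV. It is illustrative only: the cascade file cars.xml is a placeholder for any pretrained vehicle cascade (an assumption, not an artifact provided by this paper), as is the input video name.

import cv2

# Hypothetical pretrained vehicle cascade; OpenCV ships face cascades,
# so a vehicle cascade must be supplied separately (assumption).
cascade = cv2.CascadeClassifier("cars.xml")

cap = cv2.VideoCapture("traffic.mp4")  # placeholder input video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection over the grayscale frame
    vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in vehicles:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Haar cascade vehicles", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()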

Objective:

1. System Development: Design and implement a vehicle speed detection system utilizing image processing techniques. This involves creating software with six primary components: Image Acquisition, Image Enhancement, Image Segmentation, Image Analysis, Speed Detection, and Report Generation.
2. Component Integration: Integrate the aforementioned components into a cohesive
system that accurately detects and calculates vehicle speeds from video scenes.
3. Usability Assessment: Evaluate the usability of the vehicle speed detection system
under specific conditions to ensure it meets user needs and is easy to use.
4. Performance Evaluation: Assess the performance of the system in terms of accuracy,
efficiency, and reliability. This includes measuring detection time, speed calculation
accuracy, and overall system stability.
5. Effectiveness Analysis: Analyze the effectiveness of the system in real-world
scenarios to determine its ability to accurately detect and calculate vehicle speeds
under various conditions.
6. Optimization Strategies: Explore optimization strategies to improve the efficiency
and effectiveness of the system. This may involve optimizing parameters such as
resolution, image processing algorithms, and system configurations.
7. Empirical Experimentation: Conduct empirical experiments to validate the
performance and effectiveness of the system. This includes collecting data,
conducting experiments, and analyzing results to draw conclusions about the
system's capabilities and limitations.
8. Insights and Implications: Provide insights into the implications of various
parameters on system performance. By analyzing experimental results, the study
aims to offer valuable insights that can inform optimization strategies and guide
future research efforts in the field of vehicle speed detection.

System description
In order to measure the distance of an object from a single image, it is necessary to
have a frontal view and to know the true size of the object. Unfortunately, the
dimensions of vehicles differ depending on the make and model, so they cannot be
used as a reference. However, a common element on the back of all vehicles is the
license plate: it must be type-approved, and its shape and dimensions are fixed in each
country. By localising the number plate of the vehicle ahead, and having previously
established a relationship between the number plate's size in the image and the
distance to the camera, the vehicle's distance can be obtained directly.
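The size-to-distance idea follows the pinhole camera model: for a plate of known real width facing the camera, distance equals focal length times real width divided by the width in pixels. A minimal sketch follows; the paper itself calibrates this relationship empirically (Table 1), so the focal length below is an illustrative assumption, and only the 520 mm plate width is a standard value.

def distance_from_plate_width(plate_px_width,
                              focal_px=700.0,             # assumed focal length in pixels
                              plate_real_width_m=0.520):  # standard long Spanish/EU plate
    """Pinhole-model range estimate: Z = f * W_real / w_pixels."""
    return focal_px * plate_real_width_m / plate_px_width

# Example: under these assumptions, a plate imaged 38 pixels wide
# would be roughly 700 * 0.52 / 38, i.e. about 9.6 m away.
print(distance_from_plate_width(38.0))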

After capturing a grayscale frame, the first step consists of establishing a region of
interest on the road corresponding to the safety area in front of our vehicle. Any
vehicle circulating inside this safety area is susceptible to a possible rear-end
collision. Next, the vehicle detection step begins and a first distance estimation is
performed based on the vehicle's bounding box location. Then the search of the
vehicle's number plate is used for two purposes: to validate the vehicle's detection
and to obtain the vehicle's distance. Remember that the relationship between the
dimensions of the number plate in the image and the distance to the camera has
already been established. Finally, the analysis of consecutive images is employed to
obtain the vehicle's relative speed.

The camera is placed beside the rear-view mirror to capture the scene in front of the vehicle.
In addition to the road and vehicles travelling ahead, many other objects can appear in a
vehicle's frontal image. A region of interest (ROI) on the road is very important because it
simplifies the scene, focusing only on the area at risk of a rear-end collision and avoiding the
analysis of the part of the road with no influence on our trajectory (Fig. 1). In this way, the
possibility of errors, false positive detections and the computational load are reduced, and the
vehicle detection reliability is increased.
Vehicle detection
The vehicle detection procedure is based on two features: the shadow underneath
the vehicle and the lower horizontal edge of this shadow. A distinctive feature of
vehicles is the shadow underneath them. Its intensity depends on the illumination,
which in turn depends on the weather, but it is always present on the road. Owing to
the vehicles' morphology, the space between the vehicle's underside and the road is
small, so the road area under the vehicle is not exposed to direct sunlight and is
only reached by a small amount of lateral diffuse light. This lack of light makes this
and colour of the asphalt. Even if the road is shaded, the vehicles’ underside is
darker than its surroundings. This phenomenon is mathematically explained in [25].
On the other hand, any other element of the road (lateral shadows, potholes,
manhole covers etc.) is exposed to both direct and diffuse light which makes it
clearer and brighter. Although these elements can be dark they do not exceed the
darkness intensity of the shadow under the vehicle [25].

On cloudy days, vehicles are only lit by diffuse light which comes from all directions
so it creates little or no lateral shadows making the shadow underneath easily
distinguished. Sunny scenes are lit by both sunlight and diffuse light casting lateral
shadows. The shadow under the vehicle is noticeably darker than the lateral one
because the latter is illuminated only by diffuse light. On a cloudy/rainy day, the
street lighting could easily cause reflections from wet objects and asphalt, but the
road under the vehicle is not affected, remaining dark and without brightness. In a
tunnel, the area underneath the vehicle is even darker than in other situations because
artificial lighting is more direct and there is a low level of diffuse light, making the
shadow practically black.

The most widely used method to identify the shadow underneath the vehicle was proposed
in [26]. A road area is extracted by defining the lowest central homogeneous region in
the image (the 'free driving space') delimited by edges. Then, a shadowed region is
defined as a region whose intensity is lower than a threshold value m − 3σ, where m
and σ are the mean and standard deviation of the road pixels' intensity distribution.
This method has two important drawbacks. Firstly, the illumination conditions make
the road's intensity vary non-uniformly: even a well-asphalted road can show zones
where the pixels' intensity is significantly different. Secondly, the lowest central
homogeneous region in the image does not always correspond to the road. In urban traffic,
pedestrian crossings and sign markings, lateral shadows and patches of different
asphalt are constantly appearing on the road and their edges are detected. The
region delimited by edges may not belong to the road which can significantly mislead
the vehicle detection procedure.

In order to overcome these drawbacks, we propose a thresholding method based on
the histogram of the ROI alone. A distinctive feature of the ROI image (Fig. 2a) is that
its grey-level histogram displays two characteristic peaks. The lower peak (nearest to
0) corresponds to the shadow underneath the vehicle and the higher one to the road.
Intensity values caused by lateral shadows, potholes, manhole covers and so on can
occur between the two peaks. As road markings are brighter (white and yellow) than
the road, their intensities attain high values located on the histogram's right.
Depending on the lighting, both peaks undergo grey-level variation, but the peak
corresponding to the shadow does not attain values higher than 50 units in the
histogram. The shadow's intensity values can vary between 0 (dark day) and 50
(clear day). This behaviour was observed throughout the system development and was
confirmed in all tests. The pixel values obtained are specific to the camera sensor
employed and depend on parameters such as pixel depth, dynamic range and exposure
time. The thresholding criterion for the shadow's segmentation is to automatically choose
the higher grey-scale value (the value on the right) of the lower intensity peak, as long as
the latter is lower than 50. In cases where there is a vehicle in the ROI, the shadow's
grey-level peak is always present in the histogram and the shadow is easily segmented.
In the absence of vehicles, the lower peak is not present and the intensity values are
higher than 50, so no threshold is established. In the case of a false shadow detection,
the error is suppressed in the number plate detection stage, as the system does not
detect any license plate. Fig. 2b shows the shadowed regions of the whole scene whose
values are below the threshold.
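A minimal sketch of this adaptive threshold, assuming an 8-bit grayscale ROI as input; the histogram smoothing and the peak-walk details are implementation choices of ours, not prescribed by the paper.

import numpy as np

def shadow_threshold(roi_gray, max_shadow_level=50):
    """Pick the right edge of the low-intensity histogram peak (below 50),
    as described above; return None when no vehicle shadow is present."""
    hist, _ = np.histogram(roi_gray, bins=256, range=(0, 256))
    # Smooth the histogram so single-bin spikes are ignored (our choice)
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    low = smooth[:max_shadow_level]
    if low.max() == 0:
        return None                      # no dark pixels: no shadow peak
    peak = int(np.argmax(low))           # location of the lower peak
    # Walk right from the peak until the histogram stops falling;
    # that valley bounds the shadow mode on the right.
    t = peak
    while t + 1 < max_shadow_level and smooth[t + 1] <= smooth[t]:
        t += 1
    return t

# Usage: mask = roi_gray <= shadow_threshold(roi_gray)  (when not None)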
Fig. 2: Road scene. (a) Region of interest (ROI); (b) thresholded image; (c) horizontal edges; (d) ROI's vehicle candidate.

After shadow thresholding, horizontal edges that correspond to the transitions from the
non-shadow region (below) to the shadow region (above) are extracted as in [26], and
candidates are determined based on the location of those horizontal edges within the
ROI (Fig. 2c). Only horizontal edges detected within the ROI, either in whole or in
part, are considered, while all those outside the ROI are discarded.

Next, the bounding box containing the vehicles’ back is obtained. As the dimensions
of the vehicles’ back are different for each make and model, a standard aspect ratio
of vehicles’ backs is assumed as in [26]. In this approach, we consider that the
length of the shadow's horizontal edge detected is the vehicle's width, and in order to
encompass all kinds of vehicles and vans, the height of the box is equal to 130% of
its width (Fig. 2d).
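A small sketch of this bounding-box rule, following the 130% height assumption stated above (the edge representation is our assumption; coordinates are image pixels with the origin at the top-left):

def vehicle_bounding_box(edge_x_left, edge_x_right, edge_y):
    """Build the candidate box from the shadow's lower horizontal edge:
    width = edge length, height = 130% of width, box grows upward."""
    width = edge_x_right - edge_x_left
    height = int(round(1.3 * width))
    x, y = edge_x_left, edge_y - height  # top-left corner of the box
    return x, y, width, height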

Finally, as the shadow is on the road plane and assuming flat earth as in [17], a first
rough estimation of the vehicle's distance is obtained based on the location of the
lower edge of the vehicle's bounding box (the shadow's lower edge) in the image.
This approximate distance is very useful because it in turn provides values of the
vehicle number plate dimensions at this distance which are exploited in the number
plate detection algorithm (Section 3.5). The procedure is based on the relationship
between the vertical location of the shadow in the image (in pixels) and the real
vehicle's distance (metres). This relationship was established before the system was
put into use and it also relates the vehicle's distance with the dimensions of the
vehicle number plate characters (in pixels). This relationship is specific to the image
resolution adopted, to the camera elevation in the ego-vehicle and to the camera tilt.
To carry out this operation our vehicles were placed behind one another at a known
distance (Dist), an image was taken and the shadow's vertical location (SVP) and the
number plate's dimensions in the image were checked (Table 1). This process was
done for different distances in a range from 1 to 10 m on different days to take into
account different lighting conditions.

Table 1. Vehicle distance, shadow's vertical location and number plate character dimensions

Dist, m | SVP sunny, pix | SVP cloudy, pix | SVP rainy, pix | NPW, pix | CH, pix | NPW/CH | CS, pix | CT, pix
1       | 65             | 86             | 92             | 188      | 38      | 4.94   | 29      | 5
2       | 119            | 137            | 142            | 138      | 27      | 5.11   | 21      | 4
3       | 216            | 231            | 235            | 109      | 22      | 4.95   | 17      | 4
4       | 282            | 294            | 297            | 91       | 19      | 4.78   | 15      | 3
5       | 332            | 341            | 343            | 78       | 16      | 4.87   | 13      | 3
6       | 371            | 378            | 380            | 69       | 14      | 4.92   | 11      | 2
7       | 400            | 404            | 405            | 61       | 12      | 5.10   | 10      | 2
8       | 426            | 428            | 428            | 55       | 11      | 5.00   | 9       | 2
9       | 445            | 445            | 445            | 50       | 10      | 5.00   | 8       | 1
10      | 465            | 465            | 465            | 45       | 9       | 5.00   | 7       | 1

Dist = real vehicle distance, SVP = vertical position of the lower edge of the vehicle's bounding box (shadow) in the image, NPW = number plate joint width, CH = character height, CS = separation between numbers and letters, CT = character stroke thickness.

As can be observed in Table 1, for a similar vehicle distance (Dist), the vertical
position of the shadow in the image (SVP) varies depending on the lighting
conditions. This is basically because of two factors: firstly, the shadow underneath
does not perfectly match with the vehicle's vertical projection onto the road. This
factor is emphasised depending on the perspective which varies with the distance
(the point of view is at a higher angle as the vehicle ahead becomes closer to the
camera). Secondly, in sunny scenes there is a shadow around the vehicle (lateral
shadow). There is not a clear intensity limit between the shadow underneath the
vehicle and the surrounding shadow because the intensity values between both vary
smoothly. In these cases, it is very difficult to establish an automatic threshold which
perfectly separates both shadows and inevitably some pixels belonging to the
surrounding shadow are included as part of the shadow underneath. Taking these
factors into account, the distance provided by the vehicle's shadow is not accurate
enough, particularly at close range and in sunny scenes. However, this approximate
distance provides indicative values of the vehicle's number plate dimensions at that
distance, making the number plate detection method described next adaptive to the
range.
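As a sketch of how the calibration in Table 1 can be used at run time, the helper below interpolates the expected distance and plate dimensions from a measured SVP (sunny-column values from Table 1; the use of linear interpolation between calibration rows is our simplification):

import numpy as np

# Calibration from Table 1 (sunny column): SVP in pixels versus the
# expected plate width (NPW) and character height (CH) in pixels.
SVP  = np.array([65, 119, 216, 282, 332, 371, 400, 426, 445, 465])
NPW  = np.array([188, 138, 109, 91, 78, 69, 61, 55, 50, 45])
CH   = np.array([38, 27, 22, 19, 16, 14, 12, 11, 10, 9])
DIST = np.arange(1, 11)  # metres

def expected_plate_dims(svp_pixel):
    """Rough distance and plate dimensions expected at a measured SVP."""
    d   = np.interp(svp_pixel, SVP, DIST)
    npw = np.interp(svp_pixel, SVP, NPW)
    ch  = np.interp(svp_pixel, SVP, CH)
    return d, npw, ch

print(expected_plate_dims(350))  # SVP between the 5 m and 6 m rows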
Number plate features and distance-size relationship
The aim of the license plate detection stage is to calculate the plate's distance to the
camera and therefore the vehicle's distance. License plates have several constant
parameters that can be checked in order to obtain the distance; the longer the dimension
checked, the more accurate the measurement. The ideal dimension to check would be the
plate's width. Nevertheless, experience indicated that with light-coloured vehicles the
result of the image processing is not satisfactory when the aim is to obtain the plate's
contour. However, the plate's characters can be easily localised and isolated by means
of morphological methods. The proposed system was designed to work with Spanish
plates, but it can be adapted to the plates of other countries. Spanish plates are made up
of four numbers and three letters, and their dimensions are fixed (Fig. 3).

Fig. 3: Number plate parameters. NPW = number plate width, CH = character height, CT = character stroke thickness, CS = character separation.

In order to calculate the vehicle's distance, two dimensions of its number plate are considered
by the algorithm: the width of the number plate (NPW) and the height of the characters (CH).
The consideration of one or the other depends on the skew angle of the vehicle ahead. When
the back of the vehicle ahead is in the frontal view, the parameter considered to estimate the
distance is the NPW. In this case both parameters could be employed in the measurement but
as the NPW is longer than the CH, the accuracy provided in the distance measurement is
greater.

However, when the vehicle ahead is on a curve, the image is not a perfect frontal view of the
vehicle's rear so the NPW in the image is shorter than it should be, which generates a distance
measuring error. In these cases, the number plate parameter considered to establish the
vehicle's distance is the height (CH) of the nearest character (the highest). In a skewed
situation, the characters of the plate do not have the same size. If a rotation of the plate were
performed in order to place the plate in a frontal view, the axis of this rotation would be the
highest side of the highest character of the skewed plate. Furthermore, after this rotation the
height of the number plate in a frontal view would be the same as the height of the highest
character of the skewed number plate, so this rotation is unnecessary.

In order to know whether the rear of the vehicle ahead is in a frontal view, the algorithm makes use of the
constant aspect relationship between the two parameters in a frontal view:

NPW / CH = aspect constant ≈ 5 (1)

The relationships between the NPW, CH, aspect constant and the distance to the camera were
established in Table 1. Fig. 4 is the graphical representation of NPW, CH and the distance.

Fig. 4: Character height against distance and its fitting curve; NPW against distance and its fitting curve.

From Fig. 4, two mathematical relationships were obtained

(2)

(3)

where DNPW (m) is the vehicle distance provided by the width of the number plate and DCH (m) is the
vehicle distance provided by the character's height. Fig. 4 shows how NPW and CH do not vary
linearly, but decrease exponentially with the distance.
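The exact fitted expressions (2) and (3) are not reproduced here, but the sketch below shows how such fitting curves can be recovered from the Table 1 data; the power-law model form and the use of scipy are our assumptions, not the paper's stated fit.

import numpy as np
from scipy.optimize import curve_fit

dist = np.arange(1, 11, dtype=float)  # metres, from Table 1
npw  = np.array([188, 138, 109, 91, 78, 69, 61, 55, 50, 45], dtype=float)

def power_law(d, a, b):
    """Candidate model: pixel size shrinks as a / d**b with distance."""
    return a * d ** -b

(a, b), _ = curve_fit(power_law, dist, npw)
print(f"NPW(d) ~ {a:.1f} * d^-{b:.2f}")

# Inverting the fit yields a distance estimate from a measured plate
# width, playing the role of relationship (2):
def d_npw(npw_pixels):
    return (a / npw_pixels) ** (1.0 / b)

print(d_npw(78.0))  # rough estimate for a 78-pixel plate (5 m row in Table 1)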

Table 1 shows how the aspect ratio of the NPW to the CH in frontal view remains
practically constant and equal to 5. Moreover, Table 1 shows the different accuracy provided
by NPW and CH. For instance, from 5 to 6 m the use of the NPW provides a measurement
precision of 0.11 m (1 m / 9 pix), while the CH provides a precision of 0.33 m (1 m / 3 pix).
3.5 Number plate detection
The proposed number plate detection procedure is based on the well-known morphological
top-hat operator. This method is widely employed in number plate localisation under restricted
conditions where some information related to the number plate's dimensions in the image is
available. We make this method adaptive to vehicles in motion at any distance within the range. The
number plate detection is restricted to the vehicle's bounding box, thereby significantly simplifying
the background region. The (closing) top-hat operator is described as

D = C(I, SE) − I (4)

Firstly, the morphological closing C of the image I with a circular structuring element (SE) eliminates
all the dark-on-light-background elements smaller than SE (Fig. 5b). Then, subtracting the initial
image from the closed one, we obtain an image D in which the elements unaffected by the filtering
are removed and the high-frequency dark areas (including the plate's characters) remain enhanced.

Fig. 5: Number plate enhancement. (a) Vehicle candidate; (b) morphological closing with SE; (c) resulting image of the top-hat method; (d) resulting binary image.
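A minimal OpenCV rendering of this step; the structuring-element size and the Otsu binarisation are our assumptions (the paper adapts SE to the character dimensions expected at the estimated range):

import cv2

def plate_character_response(box_gray, se_diameter):
    """Closing top-hat (black-hat): C(I, SE) - I highlights dark-on-light
    details smaller than SE, such as plate characters."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_diameter, se_diameter))
    blackhat = cv2.morphologyEx(box_gray, cv2.MORPH_BLACKHAT, se)
    # Binarise the response; Otsu keeps the sketch parameter-free (our choice)
    _, binary = cv2.threshold(blackhat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return blackhat, binary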

Distance measurement
The penultimate stage of the system consists of extracting the width of the seven characters
of the number plate (real NPW) and the tallest character height (real CH). The aspect ratio
between the two parameters is obtained from (1) and the result is compared with the aspect
ratio parameter given by the SVP in Table 1. If the difference between them is less than 5%,
the scene is considered a perfect frontal view so the vehicle's distance is obtained in a
straightforward manner from (2) by means of the real NPW. In any other case, the scene is
skewed so the vehicle's distance is directly obtained from (3) by means of the real CH.
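The frontal-versus-skew decision can be sketched as below; dist_from_npw and dist_from_ch stand for the fitted relationships (2) and (3), which we leave abstract here:

def vehicle_distance(npw_px, ch_px, expected_aspect,
                     dist_from_npw, dist_from_ch, tolerance=0.05):
    """Choose the measurement per the rule above: NPW in a frontal view,
    tallest-character height CH when the plate is skewed."""
    aspect = npw_px / ch_px                      # relationship (1)
    if abs(aspect - expected_aspect) / expected_aspect < tolerance:
        return dist_from_npw(npw_px)             # frontal view: use (2)
    return dist_from_ch(ch_px)                   # skewed view: use (3)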

Relative speed measurement


In order to calculate the relative speed of the vehicle ahead, the system considers successive images.
Knowing the variation of the vehicle's distance in two consecutive images and the time elapsed
between the two acquisitions, the vehicle's relative speed is calculated as

v_rel = (D1 − D2) / ΔT (5)

where D1 is the vehicle distance in the first frame, D2 is the vehicle distance in the second frame and
ΔT is the time between the two frames. For example, a vehicle measured at D1 = 10.0 m and
D2 = 9.5 m over ΔT = 0.1 s is approaching at 5 m/s.

Reference: https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/iet-its.2013.0098

Code:
import cv2
import numpy as np
import ctypes


def get_screen_resolution():
    """Query the primary display size (Windows-only, via user32)."""
    user32 = ctypes.windll.user32
    return user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)


# Video capture
cap = cv2.VideoCapture("C:\\Users\\Acer\\OneDrive\\Desktop\\vehicle_speed_detection\\vc.mp4")

# Ask the capture to match the screen resolution
screen_width, screen_height = get_screen_resolution()
cap.set(cv2.CAP_PROP_FRAME_WIDTH, screen_width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, screen_height)

# Background subtractor for moving-vehicle segmentation
fgbg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

prev_frame = None

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Apply background subtraction and binarise the foreground mask
    fgmask = fgbg.apply(frame)
    _, thresh = cv2.threshold(fgmask, 128, 255, cv2.THRESH_BINARY)

    # Find contours of the moving regions
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Compute dense optical flow once per frame (not once per contour)
    flow = None
    if prev_frame is not None:
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        cur_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    for contour in contours:
        # Filter out small contours (noise)
        if cv2.contourArea(contour) < 1000:
            continue
        x, y, w, h = cv2.boundingRect(contour)

        # Draw bounding box around the detected vehicle
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        if flow is None:
            continue

        # Average flow inside the bounding box, in pixels per frame
        avg_flow = np.mean(flow[y:y + h, x:x + w], axis=(0, 1))

        # Simplified speed estimate: flow magnitude times an assumed frame
        # rate. This yields pixels per second, not km/h; a pixel-to-metre
        # calibration is still required for real-world speeds.
        speed = np.sqrt(avg_flow[0] ** 2 + avg_flow[1] ** 2) * 15  # assumes 15 fps

        cv2.putText(frame, f"Speed: {speed:.2f}", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)

        # Flag overspeeding against an arbitrary threshold
        if speed > 80:
            cv2.putText(frame, "Overspeeding!", (x, y - 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2, cv2.LINE_AA)

    # Display the result
    cv2.imshow("Vehicle Detection", frame)
    prev_frame = frame.copy()

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Algorithm:

1. Data Acquisition: The algorithm starts by acquiring data, usually through sensors like radar, lidar, or cameras. These sensors capture information about the vehicles passing through a particular area.
2. Vehicle Detection: The algorithm identifies vehicles in the acquired data. This can involve image processing techniques such as object detection and tracking in the case of camera data, or signal processing for radar and lidar data.
3. Speed Calculation: Once vehicles are detected, the algorithm calculates their speed. This can be done by measuring the change in position of a vehicle over time (e.g., using consecutive frames in video data), or by analyzing the Doppler shift in radar or lidar signals.
4. Validation and Filtering: The algorithm may include validation steps to ensure accurate speed measurements. This can involve filtering out erroneous detections, such as noise or stationary objects mistaken for vehicles.
5. Speed Estimation: Based on the calculated speeds, the algorithm may estimate the average speed of vehicles over a certain period, or determine the speed of individual vehicles.
6. Reporting and Visualization: Finally, the algorithm outputs the speed data in a format suitable for its intended application. This could involve generating reports, displaying real-time speed information on digital signage, or integrating with traffic management systems. A skeleton of this pipeline is sketched after the list.
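A hypothetical skeleton of the pipeline above, with each stage reduced to a stub so the control flow is visible; all names and function bodies here are placeholders, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int
    position_m: float   # position along the road, metres
    timestamp_s: float

def acquire_frames(source):
    """Stage 1: yield frames from a camera/radar/lidar source (stub)."""
    yield from source

def detect_vehicles(frame):
    """Stage 2: return Detection objects found in the frame (stub)."""
    return []

def speed_mps(curr, prev):
    """Stage 3: displacement over elapsed time for a tracked vehicle."""
    return (curr.position_m - prev.position_m) / (curr.timestamp_s - prev.timestamp_s)

def run_pipeline(source, min_plausible=0.5, max_plausible=60.0):
    last_seen = {}
    for frame in acquire_frames(source):
        for det in detect_vehicles(frame):
            prev = last_seen.get(det.track_id)
            if prev is not None:
                v = speed_mps(det, prev)
                # Stage 4: filter implausible readings (noise, parked objects)
                if min_plausible <= abs(v) <= max_plausible:
                    # Stages 5-6: report per-vehicle speed
                    print(f"vehicle {det.track_id}: {v * 3.6:.1f} km/h")
            last_seen[det.track_id] = det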
METHODOLOGY:
In this research, careful attention was devoted to crafting a robust
methodology for the detection of vehicle speeds. The process commenced with
the deployment of a video camera positioned to capture side-view images of
the moving vehicle under scrutiny [8]. To ensure precision in distance
measurements, a hand-held laser distance meter (Bosch PLR 50) with a stated
accuracy of ±0.1 millimeter was employed [10]. This instrument was used to
measure real distances on the road, encompassing the field of view (FOV)
coverage along horizontal velocity vectors and distances between two points
parallel to the velocity vector.
The camera at the heart of this investigation had a frame rate of 30 frames
per second (fps) and an effective resolution of 640x480 pixels [11]. With a
new frame arriving every 33.3 milliseconds, there was a strict time limit
within which speed calculations needed to be executed. To manage
computational resources efficiently and avoid unnecessary redundancy, frames
were sampled at a modest rate of 2 frames per second [12]. This decision,
made in light of the video's format (AVI) and its 30 fps rate, aimed to
strike a balance between computational efficiency and data integrity.

Moreover, recognizing the computational complexity associated with
processing colored images, the captured color images were converted into
grayscale representations [8]. This transformation not only streamlined
subsequent computational tasks but also allowed a more focused analysis of
the underlying data. Additionally, attention was directed towards
characterizing the camera's field of view (FOV), denoted as '2α' degrees,
which plays a pivotal role in delineating the horizontal distance 'D' on the
road parallel to the camera [9]. The vertical distance 'L' from the camera
lens to the road surface served as another crucial parameter of the
experimental setup.
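Under the usual side-view geometry, the road stretch covered by the camera follows directly from these quantities when the optical axis is perpendicular to the road: D = 2L·tan(α). A small worked sketch; treating L as the perpendicular camera-to-road distance and the numeric values themselves are illustrative assumptions, not measurements from the study.

import math

def fov_road_coverage(L_m, full_fov_deg):
    """Horizontal road distance D seen by a camera with full FOV 2*alpha,
    mounted at perpendicular distance L from the road: D = 2 * L * tan(alpha)."""
    alpha = math.radians(full_fov_deg / 2.0)
    return 2.0 * L_m * math.tan(alpha)

# Illustrative values: a camera 10 m from the road with a 60-degree full FOV
print(fov_road_coverage(10.0, 60.0))  # about 11.5 m of road in view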

Conclusion

In this study, the speeds of vehicles on urban roads are detected by using video cameras. Two
measurement techniques are employed to determine the speeds of vehicles. The first technique
employs simple detection of vehicles entering and exiting a rectangular test area in the camera FOV.
As the vehicle enters the test area, an entrance time stamp is recorded; when the vehicle exits the
test area, an exit time stamp is recorded. The time difference between them is used to calculate
the vehicle speed. In the second technique, time stamps are determined at each loop iteration of the
program, so the vehicle being tracked can have a different speed reading at each iteration. The time
differences between these time stamps and the initial time stamp are used in the speed calculations.
Each time stamp is treated as a discrete time, and the distance of the vehicle from this time stamp to
the initial time stamp is determined in pixels. Once the pixel distance is calibrated and converted
into real distance on the road, a discrete speed calculation is carried out at each time stamp across
the test area. Since the vehicle has a linear motion across the road, the average of these speeds
gives an average speed value for the vehicle. The speed measurements of the two techniques are
checked against a car speedometer; a Hyundai i20 car is employed to check the speed measurements
against the developed video system. Table 1 and Table 2 are given for both techniques, and absolute
speed differences are compared in these tables. It was found that the video system has a speed
detection accuracy of ±1.2 km/h up to 50 km/h. After 50 km/h the video system accuracy starts to
degrade and the speed error margin increases (see Figure 6). Both techniques show similar
performance and determine the speeds of vehicles approximately the same. The developed system
is useful for measuring low speeds accurately without expensive instruments; eventually, a smart
phone could even serve as the measurement platform.
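A minimal sketch of the first technique, entry/exit time stamps over a test area of known real length; the function name and the example length are our assumptions.

def test_area_speed_kmh(t_enter_s, t_exit_s, area_length_m):
    """Technique 1: average speed from the time a vehicle spends
    crossing a test area of known real length."""
    dt = t_exit_s - t_enter_s
    return (area_length_m / dt) * 3.6  # m/s converted to km/h

# Example: crossing a 10 m test area in 0.9 s gives 40 km/h
print(test_area_speed_kmh(0.0, 0.9, 10.0))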

A novel monovision-based system able to detect a vehicle ahead and measure the distance
and relative speed has been presented. The use of a single common camera makes the system
cheaper than stereovision systems and other technologies such as RADAR-based approaches.
Besides, monocular vision significantly reduces the computational complexity and the
processing time of stereovision. The distance measurement method proposed is based on the
vehicle's number plate whose dimensions and shape are standardised in each country. The
algorithm simplifies the complex traffic scene focusing only on a ROI of the road
corresponding to the safety area in front of our vehicle. The ROI reduces the possibility of
errors improving the system's reliability. The vehicle detection procedure successfully utilises
the shadow underneath the target vehicle and horizontal edges regardless of weather
conditions. An adaptive shadow segmentation threshold is proposed based on the
characteristic ROI image histogram. The number plate localisation algorithm proposed adapts
the top-hat operator to vehicles in motion over the range. In-vehicle tests carried out in real
urban traffic showed excellent robustness and reliability in vehicle and number plate
detection and very good accuracy in distance measurement.

Findings:

Accuracy of Speed Estimation: The study may present findings on the accuracy of vehicle speed estimation achieved through the implemented methodology. This could involve comparing the calculated speeds with ground truth data or manual measurements to assess the system's accuracy and reliability.

Effectiveness of Image Processing Techniques: Findings may highlight the effectiveness of image processing algorithms utilized in vehicle detection and speed estimation. This could include evaluations of different algorithms' performance in detecting and tracking vehicles accurately across various environmental conditions.

Impact of Camera Specifications: The study may explore the impact of camera specifications, such as frame rate, resolution, and focal length, on the accuracy and efficiency of speed detection. Findings could provide insights into optimal camera settings for effective speed estimation.

Comparison with Alternative Methods: Findings may involve comparisons between the proposed methodology and alternative methods or existing systems for vehicle speed detection. This could include assessments of detection accuracy, computational efficiency, and real-world applicability.

Evaluation of Field of View (FOV): Findings may discuss the implications of accurately characterizing the camera's field of view (FOV) on speed detection performance. This could involve analyzing how variations in FOV parameters affect the system's ability to detect and track vehicles accurately.

Optimization Strategies: The study may present findings on optimization strategies employed to enhance the speed detection system's performance. This could include insights into algorithmic optimizations, parameter tuning, and computational efficiency improvements.

Future Directions:

Advanced Image Processing Techniques: Explore the application of advanced image processing techniques, such as deep learning algorithms, convolutional neural networks (CNNs), and object detection models, to enhance the accuracy and efficiency of vehicle speed detection systems.

Multi-Sensor Fusion: Investigate the integration of multiple sensors, including cameras, radar, LiDAR, and GPS, to develop multi-modal vehicle speed detection systems. Fusion of sensor data could improve detection accuracy and robustness, especially in challenging environmental conditions.

Real-Time Speed Monitoring: Focus on the development of real-time vehicle speed monitoring systems that can provide instantaneous speed measurements and alerts to drivers or traffic management authorities. This could involve optimizing algorithms for rapid processing and implementing efficient communication protocols.

Adaptive Speed Detection Systems: Research adaptive speed detection systems that can dynamically adjust their parameters and configurations based on changing road conditions, traffic flow, and environmental factors. Adaptive systems could improve accuracy and responsiveness in diverse scenarios.

Integration with Intelligent Transportation Systems (ITS): Explore the integration of vehicle speed detection systems with broader ITS frameworks to enhance traffic management, congestion mitigation, and road safety. This could involve developing interoperable systems that exchange data with traffic control centers and connected vehicles.

Automated Enforcement Systems: Investigate the feasibility and effectiveness of automated enforcement systems for monitoring and enforcing speed limits on roadways. This could include the development of automated speed cameras or vehicle-mounted devices capable of issuing citations to speeding vehicles.

Validation and Field Testing: Conduct extensive validation and field testing of vehicle speed detection systems in real-world environments to assess their performance, reliability, and practical usability. Field studies could provide valuable insights into system effectiveness and identify areas for improvement.

Privacy and Ethical Considerations: Address privacy concerns and ethical considerations associated with vehicle speed detection technologies, particularly regarding data collection, storage, and use. Future research should explore privacy-preserving techniques and frameworks for responsible data management.
