
CN113538193B - Traffic accident handling method and system based on artificial intelligence and computer vision - Google Patents


Info

Publication number
CN113538193B
CN113538193B
Authority
CN
China
Prior art keywords
traffic accident
accident
target
image
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110735377.1A
Other languages
Chinese (zh)
Other versions
CN113538193A (en)
Inventor
黄红星
付少新
陈少红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yunlue Software Technology Co ltd
Original Assignee
Nanjing Yunlue Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yunlue Software Technology Co ltd
Priority to CN202110735377.1A
Publication of CN113538193A
Application granted
Publication of CN113538193B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40 Business processes related to the transportation industry

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Evolutionary Biology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a traffic accident handling method and system based on artificial intelligence and computer vision. The method acquires the identity information of an initiator who starts traffic accident handling; after the identity information is acquired, it determines the initial accident vehicles from acquired traffic accident images, obtains the target area of each traffic accident image, identifies the ground lane lines in each target area, determines the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicles and the target ground lane lines, and finally integrates the license plate number of each initial accident vehicle to obtain a traffic accident determination data packet. The traffic accident handling method provided by the invention is thus an automated handling method: after a traffic accident occurs, the responsibility determination results for the vehicles involved can be obtained directly by data processing without waiting for the traffic police, which improves traffic accident handling efficiency, reduces the degree of traffic disruption, further reduces the possibility of secondary traffic accidents, and improves traffic safety.

Description

Traffic accident handling method and system based on artificial intelligence and computer vision
Technical Field
The invention relates to a traffic accident handling method and system based on artificial intelligence and computer vision.
Background
When a traffic accident occurs, a traffic police officer usually needs to go to the scene to apportion traffic accident responsibility. However, a certain amount of time usually elapses between the occurrence of the accident and the arrival of the police at the scene; if the accident occurs during rush hour, traffic may be seriously blocked while waiting for the police, possibly causing secondary traffic accidents.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a traffic accident handling method and system based on artificial intelligence and computer vision.
The invention adopts the following technical scheme:
A traffic accident handling method based on artificial intelligence and computer vision, comprising:
acquiring identity information of an initiator who starts traffic accident handling;
after the identity information is acquired, acquiring at least two traffic accident images, each of which includes at least two vehicles;
processing the traffic accident images to obtain the initial accident vehicles in each traffic accident image;
performing consistency comparison on the features of the initial accident vehicles in each traffic accident image, and if the consistency comparison condition is met, obtaining the target area of each traffic accident image, the target area being related to the areas occupied by the initial accident vehicles in the traffic accident image;
performing ground lane line identification on the target area of each traffic accident image to obtain the target ground lane lines in each traffic accident image;
determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicles and the target ground lane lines in each traffic accident image;
obtaining the license plate number of each initial accident vehicle;
and integrating the license plate numbers of the initial accident vehicles, the identity information, and the determined traffic accident responsibilities to obtain a traffic accident determination data packet.
Optionally, the processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image specifically includes:
acquiring characteristic data of each vehicle, wherein the characteristic data comprise image data of an area occupied by the corresponding vehicle in the corresponding traffic accident image;
The feature data of every two vehicles in the same traffic accident image are respectively combined to obtain a plurality of feature set data;
Classifying each feature set data to obtain target feature set data, and determining two vehicles corresponding to the target feature set data as initial accident vehicles, wherein the target feature set data is feature set data belonging to a preset target class in each feature set data.
Optionally, the classifying the feature set data to obtain target feature set data includes:
converting each feature set data into a feature set matrix;
Each feature set matrix is passed through a preset convolutional neural network to obtain each full-connection layer matrix;
Based on the full-connection layer matrix and the preset parameter matrix, calculating probability values of each feature set matrix belonging to each preset category;
for any feature set matrix, acquiring the highest probability value, and taking a preset category corresponding to the highest probability value as the category of feature set data corresponding to the feature set matrix;
And acquiring feature set data belonging to a preset target category in the categories of the feature set data to obtain the target feature set data.
Optionally, the feature of the initial accident vehicle in each traffic accident image is a color feature;
the step of comparing the consistency of the features of the initial accident vehicles in the traffic accident images comprises the following steps:
identifying the colors of the initial accident vehicles in each traffic accident image, generating the color set corresponding to each traffic accident image, and comparing whether the corresponding color sets of the traffic accident images are consistent; if so, the consistency comparison condition is met.
Optionally, the identifying the ground lane line of the target area of each traffic accident image to obtain the target ground lane line in each traffic accident image includes:
acquiring a target area image of a target area of the traffic accident image;
identifying a target object in the target area image, which is different from the background of the target area image;
determining expressions of all the target objects according to the relative positions of the target objects in the target area image;
inputting the expression of each target object into a preset ground lane line identification database, and obtaining the expressions corresponding to ground lane lines to obtain the target ground lane lines; wherein the ground lane line identification database includes at least one expression corresponding to a ground lane line.
Optionally, the determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image includes:
According to the relative positions of two initial accident vehicles and a target ground lane line in the traffic accident image, calculating the corresponding line passing area and the relative angle of the two initial accident vehicles, wherein the line passing area is the area of the initial accident vehicle exceeding the target ground lane line in the known travelling direction, and the relative angle is the included angle between the central axis of the initial accident vehicle and the target ground lane line in the known travelling direction;
According to a preset responsibility category database, determining the traffic accident responsibility categories corresponding to the line passing areas and relative angles of the two initial accident vehicles in each traffic accident image; wherein the responsibility category database includes the correspondence among line passing area intervals, relative angle intervals, and traffic accident responsibility categories.
Optionally, the method for handling traffic accidents further includes the following steps after integrating the license plate numbers of the initial accident vehicles, the identity information and the determined traffic accident responsibilities to obtain traffic accident determination data packets:
Detecting a smart phone with Bluetooth started in a preset range, and performing Bluetooth pairing;
and after the Bluetooth pairing is completed, sending the traffic accident determination data packet to the smart phone.
A traffic accident handling system based on artificial intelligence and computer vision comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above traffic accident handling method based on artificial intelligence and computer vision when executing the computer program.
Firstly, the identity information of the initiator who starts traffic accident handling is acquired, and subsequent operations are performed only after the identity information has been acquired, which improves the reliability of data processing. After the identity information is acquired, at least two traffic accident images are acquired, each including at least two vehicles. The traffic accident images are processed to obtain the initial accident vehicles in each image, and the features of the initial accident vehicles in the traffic accident images are compared for consistency. If the consistency comparison condition is met, the target area of each traffic accident image is obtained, the target area being related to the areas occupied by the initial accident vehicles in the traffic accident image. Ground lane line identification is then performed on the target area of each traffic accident image to obtain the target ground lane lines, and the traffic accident responsibility of each initial accident vehicle is determined according to the initial accident vehicles and the target ground lane lines in each traffic accident image. Finally, the license plate numbers of the initial accident vehicles, the identity information, and the determined traffic accident responsibilities are integrated to obtain a traffic accident determination data packet. The traffic accident handling method provided by the invention is thus an automated handling method: after a traffic accident occurs, the responsibility determination results for the vehicles involved can be obtained directly by data processing without waiting for the traffic police, which improves traffic accident handling efficiency, reduces the degree of traffic disruption, further reduces the possibility of secondary traffic accidents, and improves traffic safety.
Drawings
Fig. 1 is a flow chart of a traffic accident handling method based on artificial intelligence and computer vision.
Detailed Description
The embodiment provides a traffic accident handling method based on artificial intelligence and computer vision, and a hardware execution subject of the traffic accident handling method can be an intelligent mobile terminal, such as an intelligent mobile phone. As shown in fig. 1, the traffic accident handling method includes:
step 1: acquiring identity information of a starter for starting traffic accident handling:
After a traffic accident occurs, traffic accident handling needs to be initiated. The hardware execution subject acquires the identity information of the initiator who starts traffic accident handling; the initiator can be any party to the traffic accident. As a specific embodiment, when the APP corresponding to the traffic accident handling method is opened, an identity information collection interface may be presented to collect the identity information of the initiator, such as fingerprint information or face image information. After the identity information of the initiator has been acquired, the subsequent traffic accident handling process can proceed.
Step 2: after the identity information is acquired, at least two traffic accident images are acquired, wherein the traffic accident images comprise at least two vehicles:
After the identity information of the initiator is acquired, at least two traffic accident images are acquired; the specific number of traffic accident images is set according to actual needs. The traffic accident images are shot by a camera.
Since a traffic accident is usually caused by two vehicles, in this embodiment each traffic accident image includes at least two vehicles. The image may contain more than the two vehicles involved because, if it is taken at a relatively long distance, other normal vehicles may appear in addition to the two principal vehicles in which the traffic accident occurred.
It should be appreciated that existing target detection algorithms may be employed to identify individual vehicles in each traffic accident image.
Step 3: processing the traffic accident image to obtain an initial accident vehicle in the traffic accident image:
and processing the traffic accident image to obtain initial accident vehicles in the traffic accident image, wherein it is understood that the number of the initial accident vehicles is two. As a specific embodiment, a specific process is given below:
(1) Acquire the feature data of each vehicle, where the feature data includes image data of the area occupied by the corresponding vehicle in the corresponding traffic accident image. The image data is therefore position data of the area occupied by the vehicle, and the feature data of each vehicle is determined by the position of the vehicle in the corresponding traffic accident image. As a specific embodiment, a bounding box enclosing the corresponding vehicle may be set for each vehicle; how the bounding box is set belongs to conventional technical means and is not repeated here. Since the area occupied by a vehicle does not exceed the range defined by its bounding box, the image data of the vehicle includes, for example, the coordinates of the center point of the bounding box and of the endpoints of each of its sides in the corresponding traffic accident image, and the area of the bounding box.
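The per-vehicle feature data described above can be sketched as follows. This is a hypothetical illustration: the field names and the axis-aligned box representation are assumptions, not taken from the patent.

```python
def vehicle_feature_data(x_min, y_min, x_max, y_max):
    """Build feature data for one vehicle from its bounding box:
    the box center, its corner points, and its area."""
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    corners = [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
    area = (x_max - x_min) * (y_max - y_min)
    return {"center": center, "corners": corners, "area": area}
```

Any richer position data (e.g. side midpoints) could be added to the same dictionary; the patent leaves the exact contents open.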
(2) Combine the feature data of every two vehicles in the same traffic accident image to obtain several feature set data, each formed by combining the feature data of the two corresponding vehicles. As a specific embodiment, if a traffic accident image includes 3 vehicles, there are 3 combinations in total: the first and second vehicles, the second and third vehicles, and the first and third vehicles, so three feature set data can be obtained.
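The pairwise combination step can be sketched with `itertools.combinations` (illustrative only; the patent does not prescribe an implementation):

```python
from itertools import combinations

def feature_set_data(vehicle_features):
    """Combine the feature data of every two vehicles in one image
    into feature set data (all unordered pairs)."""
    return list(combinations(vehicle_features, 2))

# 3 vehicles give 3 pairwise combinations, hence 3 feature set data.
pairs = feature_set_data(["vehicle1", "vehicle2", "vehicle3"])
```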
(3) Classifying the feature set data to obtain target feature set data, namely screening the feature set data to obtain required feature set data according to classification results. As a specific embodiment, a specific process is given below:
And converting the feature set data into a feature set matrix by adopting a preset data conversion algorithm. It should be understood that the feature set data includes various data, and thus, the correspondence relationship between various data and matrix data is involved in the conversion manner of the preset data into the matrix.
And (3) passing each feature set matrix through a preset convolutional neural network to obtain each full-connection layer matrix. The convolutional neural network is obtained by training a plurality of training data in advance, wherein the training data comprise a feature set training matrix and categories corresponding to the feature set training matrix. The convolutional neural network comprises a convolutional layer, a pooling layer and a fully-connected layer. And each feature set matrix sequentially passes through the convolution layer, the pooling layer and the full-connection layer to be calculated, so that each full-connection layer matrix is obtained.
And calculating probability values of each feature set matrix belonging to each preset category based on the full-connection layer matrix and the preset parameter matrix. As a specific embodiment, the following calculation formula is adopted:

σ(k|j) = e^(z_k · x_j) / Σ_{i=1}^{M} e^(z_k · x_i)

wherein σ(k|j) is the probability value that the feature set matrix k belongs to the preset category j, z_k is the full-connection layer matrix corresponding to the feature set matrix k, x_j is the preset parameter matrix corresponding to the preset category j, x_i is the preset parameter matrix corresponding to the preset category i, M is the total number of preset categories, and e is the natural constant.
Therefore, through the calculation formula, the probability value of each feature set matrix belonging to each preset category can be obtained, the probability value characterizes the probability of each feature set matrix belonging to each preset category, and the probability value is higher as the probability value is higher. And for any feature set matrix, acquiring the highest probability value from the obtained probability values, and taking the preset category corresponding to the highest probability value as the category of the feature set data corresponding to the feature set matrix.
In this embodiment, the feature set data are divided into two classes: feature set data in which both vehicles are accident vehicles, and feature set data in which the two vehicles are not both accident vehicles (one or neither is an accident vehicle). Correspondingly, there are two preset categories: "both vehicles are accident vehicles" and "the two vehicles are not both accident vehicles". Through the above processing, the probability values of each feature set matrix belonging to these two categories are obtained; the category corresponding to the larger of the two probability values is taken as the category of the corresponding feature set data.
Acquire the feature set data whose category is the preset target category, where the preset target category is "both vehicles are accident vehicles"; this yields the target feature set data. Accordingly, the two vehicles corresponding to the target feature set data are determined as the initial accident vehicles.
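A minimal numeric sketch of the softmax-style scoring described above, assuming the full-connection layer output and the parameter matrices are flattened to vectors and combined by a dot product (an assumption; the patent does not specify the product):

```python
import math

def softmax_scores(z_k, param_matrices):
    """Probability that feature-set vector z_k belongs to each preset
    category, following the softmax form described in the text."""
    logits = [sum(z * x for z, x in zip(z_k, x_j)) for x_j in param_matrices]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Two preset categories: "both are accident vehicles" vs "not both".
probs = softmax_scores([1.0, 2.0], [[2.0, 1.0], [0.5, 0.5]])
label = max(range(len(probs)), key=probs.__getitem__)
```

The category with the largest probability value is then taken as the class of the feature set data, as the embodiment states.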
Step 4: the method comprises the steps of carrying out consistency comparison on features of initial accident vehicles in each traffic accident image, and if consistency comparison conditions are met, obtaining target areas of each traffic accident image, wherein the target areas are related to areas occupied by the initial accident vehicles in the traffic accident images:
in order to avoid the identification errors of the initial accident vehicles in the single traffic accident images, after the initial accident vehicles in the traffic accident images are obtained, the characteristics of the initial accident vehicles in the traffic accident images are subjected to consistency comparison, and only if the consistency comparison condition is met, the two initial accident vehicles obtained through identification are determined to be real accident vehicles.
In this embodiment, the feature of the initial accident vehicles in each traffic accident image is a color feature, that is, the color of the vehicle. The consistency comparison of the features of the initial accident vehicles then proceeds as follows: identify the colors of the initial accident vehicles in each traffic accident image and generate the color set corresponding to that image, which contains the colors of the two initial accident vehicles; then compare whether the color sets of the traffic accident images are consistent, and if so, the consistency comparison condition is met. An existing vehicle color recognition algorithm may be used and is not described again.
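The color-set consistency check can be sketched as follows (an illustrative sketch; the color names and data shapes are assumptions):

```python
def color_sets_consistent(images_colors):
    """True if every traffic accident image yields the same pair of
    initial-accident-vehicle colors, ignoring order."""
    sets = [frozenset(colors) for colors in images_colors]
    return all(s == sets[0] for s in sets)

# Two images, same two vehicle colors in different order: consistent.
consistent = color_sets_consistent([("red", "white"), ("white", "red")])
```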
If the consistency comparison condition is met, the target area of each traffic accident image is acquired; the target area is related to the areas occupied by the initial accident vehicles in the traffic accident image. Since the two initial accident vehicles must be adjacent, the areas they occupy adjoin or overlap. In this embodiment, the union of the regions occupied by the two initial accident vehicles in the corresponding traffic accident image is taken as the target region, i.e., the target region is the region occupied by the two initial accident vehicles together. Each traffic accident image then corresponds to one target area.
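One plausible reading of the union step above is the smallest axis-aligned box enclosing both vehicles' bounding boxes; this is an assumption, since the patent does not fix the region representation. Boxes are (x_min, y_min, x_max, y_max):

```python
def target_region(box_a, box_b):
    """Smallest axis-aligned region enclosing the two initial accident
    vehicles' bounding boxes (one interpretation of their union region)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return (min(ax0, bx0), min(ay0, by0), max(ax1, bx1), max(ay1, by1))
```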
As another embodiment, the feature of the initial accident vehicle in each traffic accident image may be license plate number information, and the license plate number of the initial accident vehicle in the traffic accident image may be compared to obtain a consistency comparison result.
Step 5: carrying out ground lane line identification on the target area of each traffic accident image to obtain target ground lane lines in each traffic accident image:
In this embodiment, the ground lane line and the positions of the two initial accident vehicles are combined to determine the traffic accident responsibility division, so that the ground lane line identification is required for the target area of each traffic accident image to obtain the target ground lane line in each traffic accident image. As a specific embodiment, the following is given as a specific implementation procedure of this step:
(1) And acquiring a target area image of a target area of the traffic accident image. It should be appreciated that the target area image includes ground lane lines in addition to the two initial accident vehicles. In order to facilitate subsequent image processing, a feature image of the target area image can be acquired through a preset image feature acquisition algorithm.
(2) Identify the target objects in the target area image that differ from the image background. The various indication lines in a road (such as lane lines, zebra crossings, and arrow markings) are white or yellow, while the road body is dark gray, so the difference between the colors of the indication lines and the road body, i.e., the difference in pixel values, is large. Taking the various indication lines in the road as target objects and the road body as the image background, the target objects that differ from the background of the target area image can be identified. As a specific implementation, the pixels of the target objects fall within two pixel value ranges corresponding to white and yellow respectively; by obtaining the pixel value of every pixel of the target area image and comparing it against these two ranges, the target objects, i.e., the indication lines in the road, can be identified. It should be understood that the number of target objects in the target area image may be one or several.
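A minimal sketch of the pixel-value screening, assuming a single-channel image and illustrative grey-value ranges standing in for "white" and "yellow":

```python
def find_marking_pixels(image, ranges):
    """Collect (x, y) coordinates of pixels whose value falls in one of
    the given (lo, hi) ranges. `image` is a 2-D list of pixel values."""
    hits = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if any(lo <= value <= hi for lo, hi in ranges):
                hits.append((x, y))
    return hits

# Bright pixels (illustrative "white" range) on a dark road background.
img = [[30, 250],
       [240, 20]]
marks = find_marking_pixels(img, [(230, 255)])
```

A real implementation would threshold in a color space (e.g. with OpenCV's `cv2.inRange`), but the principle of comparing each pixel against fixed value ranges is the same.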
The identification process of the target object can improve the identification accuracy, and as other implementation manners, other existing target identification algorithms can be adopted to identify various indication lines in the road, namely, the target object which is different from the background of the target area image in the target area image.
(3) And determining the expression of each target object according to the relative position of the target object in the target area image.
As a specific embodiment, a two-dimensional coordinate system is constructed on the target area image: the bottom-left pixel of the image may be taken as the origin, and the lines along the image's length and width as the X axis and Y axis respectively. The coordinates of every pixel in the target area image, and thus of every pixel of each target object, can then be determined. The expression of each target object, i.e., its straight-line equation, is then fitted to obtain the expressions of the target objects. An existing fitting algorithm may be used, such as the RANSAC curve fitting algorithm. For any target object, its pixels in the target area image lie on both sides of the straight line corresponding to the fitted expression.
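The line fitting can be sketched with ordinary least squares; the embodiment suggests RANSAC, so this is a simplified stand-in for fitting y = a·x + b to a target object's pixel coordinates:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) pixel
    coordinates; returns the expression coefficients (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

RANSAC would wrap such a fit in repeated random sampling to reject outlier pixels; the closed-form fit above is the inner step.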
(4) Inputting the expressions of each target object into a preset ground lane line identification database, and obtaining the expressions corresponding to the ground lane lines to obtain target ground lane lines, wherein the ground lane line identification database comprises at least one expression corresponding to the ground lane lines.
Since the indication lines are of several kinds, such as ground lane lines, zebra crossings and arrow markings, the target ground lane lines must be screened out of the target objects, i.e. out of the identified indication lines.
Different kinds of indication lines have different shapes: ground lane lines, zebra crossings and arrow markings each look different, so each kind of target object has its own set of expressions, containing at least one expression for that kind. Since the shapes of the various kinds of indication lines are known, every expression in each set is also known. A ground lane line identification database can therefore be built from the various kinds of indication lines and their corresponding expressions; it includes at least one expression corresponding to a ground lane line, the number of expressions depending on the actual ground lane lines, and it should cover all currently available ground lane line expressions. It may, of course, also include expressions corresponding to the other kinds of indication lines. Inputting the expression of each target object into this database then yields the expressions corresponding to ground lane lines among the target objects, i.e. the target ground lane lines.
This process improves the accuracy of ground lane line identification. As another embodiment, an existing ground lane line recognition algorithm may be used to recognize the ground lane lines in the target area of each traffic accident image.
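The database lookup can be sketched as a tolerance match between each fitted expression and the stored lane-line expressions. The entries and tolerances below are hypothetical placeholders; a real database would be populated from the known shapes of the road markings, as described above.

```python
# Hypothetical database: each entry stores the (slope, intercept) of a known
# ground lane line expression plus matching tolerances. Values are illustrative.
LANE_LINE_DB = [
    {"slope": 0.0, "intercept": 120.0, "tol_slope": 0.1, "tol_intercept": 15.0},
    {"slope": 0.0, "intercept": 240.0, "tol_slope": 0.1, "tol_intercept": 15.0},
]

def match_lane_lines(expressions):
    """Return the fitted expressions that match a database entry."""
    matched = []
    for a, b in expressions:
        for entry in LANE_LINE_DB:
            if (abs(a - entry["slope"]) <= entry["tol_slope"]
                    and abs(b - entry["intercept"]) <= entry["tol_intercept"]):
                matched.append((a, b))
                break  # one match suffices to classify it as a lane line
    return matched

# Two fitted target objects: one near a database lane line, one (say, an
# arrow marking) far from every entry.
fitted = [(0.02, 118.0), (1.5, 30.0)]
print(match_lane_lines(fitted))  # [(0.02, 118.0)]
```

Only the expressions returned by the lookup are kept as target ground lane lines; the remaining target objects are discarded as other kinds of indication lines.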
Step 6: according to the initial accident vehicles and the target ground lane lines in each traffic accident image, determining traffic accident responsibility of each initial accident vehicle:
For any traffic accident image, the traffic accident responsibility of each initial accident vehicle can be determined from the two initial accident vehicles and the obtained target ground lane line in that image, i.e. from the positional relationship between the two vehicles and the lane line. For example, an initial accident vehicle that presses or crosses the line is assigned "full responsibility", and one that does not press the line is assigned "no responsibility".
As a specific embodiment, a specific implementation procedure of traffic accident responsibility identification is given below:
From the relative positions of the two initial accident vehicles and the target ground lane line in the traffic accident image, the line-crossing area and relative angle of each of the two vehicles are calculated. The line-crossing area is the area by which an initial accident vehicle exceeds the target ground lane line in the known travelling direction; the relative angle is the included angle between the vehicle's central axis and that lane line. It should be understood that the travelling direction is known in the traffic accident image. Since the area occupied by each initial accident vehicle in the image is known, the area by which each of the two vehicles exceeds the lane line in that direction can be calculated. The central axis of an initial accident vehicle is obtained by processing its image region: a straight-line equation is fitted by the RANSAC curve-fitting algorithm, the corresponding straight line in the traffic accident image is taken as the vehicle's central axis, and the included angle between this axis and the target ground lane line in the known travelling direction is then calculated to give the relative angle.
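The two quantities can be sketched as follows, under simplifying assumptions: the vehicle's occupied region is given as a boolean pixel mask, the lane line is y = a*x + b, and "exceeding the line" is taken as lying on one fixed side of it (the actual side depends on the known travelling direction).

```python
import numpy as np

def crossing_area(vehicle_mask, a, b):
    """Count vehicle pixels lying beyond the lane line y = a*x + b.

    vehicle_mask: H x W boolean array of the area the vehicle occupies.
    """
    ys, xs = np.nonzero(vehicle_mask)
    return int(np.sum(ys > a * xs + b))

def relative_angle(axis_slope, lane_slope):
    """Included angle in degrees between the vehicle's central axis
    (slope of its fitted straight line) and the lane line."""
    ang = abs(np.degrees(np.arctan(axis_slope) - np.arctan(lane_slope)))
    return min(ang, 180.0 - ang)  # included angle is at most 90 degrees

# Toy example: a 4x4 mask whose vehicle straddles the horizontal line y = 1.5.
mask = np.zeros((4, 4), dtype=bool)
mask[0:4, 1:3] = True               # 8 vehicle pixels across rows 0..3
area = crossing_area(mask, a=0.0, b=1.5)
print(area)                          # 4 pixels (rows 2 and 3) lie beyond the line
print(relative_angle(1.0, 0.0))      # 45.0
```

Pixel counts stand in for physical area here; converting to real-world units would require the camera calibration, which the sketch does not model.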
A responsibility category database is preset. It contains correspondences among line-crossing-area intervals, relative-angle intervals and traffic accident responsibility categories, i.e. a plurality of correspondences, each relating one line-crossing-area interval and one relative-angle interval to one responsibility category. The line-crossing area and relative angle obtained for each of the two initial accident vehicles are then input into this database to determine, in each traffic accident image, the traffic accident responsibility category corresponding to each of the two vehicles.
It should be appreciated that the responsibility categories in the database may be more finely grained, for example "primary responsibility" and "secondary responsibility" in addition to "full responsibility" and "no responsibility". The database then contains a line-crossing-area interval and relative-angle interval corresponding to each of "no responsibility", "secondary responsibility", "primary responsibility" and "full responsibility".
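The interval lookup can be sketched as a small rule table. The interval boundaries below are invented for illustration only; a real database would set them according to traffic regulations and the image scale.

```python
# Hypothetical responsibility category database: each rule maps a
# line-crossing-area interval (pixels) and a relative-angle interval
# (degrees) to a responsibility category. Boundaries are illustrative.
RESPONSIBILITY_DB = [
    {"area": (0, 0),        "angle": (0.0, 5.0),  "category": "no responsibility"},
    {"area": (1, 200),      "angle": (0.0, 15.0), "category": "secondary responsibility"},
    {"area": (201, 1000),   "angle": (0.0, 45.0), "category": "primary responsibility"},
    {"area": (1001, 10**9), "angle": (0.0, 90.0), "category": "full responsibility"},
]

def lookup_responsibility(area, angle):
    """Return the first category whose intervals contain (area, angle)."""
    for rule in RESPONSIBILITY_DB:
        lo_a, hi_a = rule["area"]
        lo_g, hi_g = rule["angle"]
        if lo_a <= area <= hi_a and lo_g <= angle <= hi_g:
            return rule["category"]
    return "undetermined"

print(lookup_responsibility(0, 2.0))      # no responsibility
print(lookup_responsibility(1500, 30.0))  # full responsibility
```

Applying the lookup to the (area, angle) pair of each of the two initial accident vehicles yields each vehicle's responsibility category for that image.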
Step 7: obtaining license plate numbers of all initial accident vehicles:
After the traffic accident responsibility category of each initial accident vehicle is obtained, the license plate number of each initial accident vehicle is obtained by performing license plate recognition on the traffic accident image. It should be appreciated that any currently known license plate recognition algorithm may be used.
Step 8: integrating the license plate numbers of the initial accident vehicles, the identity information and the determined traffic accident responsibilities to obtain traffic accident determination data packets:
Data integration, such as data compression, is performed on the license plate numbers of the initial accident vehicles obtained in step 7, the identity information obtained in step 1 and the traffic accident responsibilities obtained in step 6, yielding a traffic accident identification data packet. It should be appreciated that the packet includes: the license plate numbers of the two initial accident vehicles, the identity information of the initiator who started the traffic accident responsibility identification, and the traffic accident responsibilities of the two initial accident vehicles.
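One possible form of this integration-with-compression step is sketched below; the record layout, field names and plate numbers are all hypothetical, and JSON plus zlib merely stand in for whichever serialization and compression the system actually uses.

```python
import json
import zlib

def build_accident_packet(plates, initiator_identity, responsibilities):
    """Bundle the determination results and compress them into one packet."""
    record = {
        "license_plates": plates,
        "initiator": initiator_identity,
        "responsibilities": responsibilities,
    }
    return zlib.compress(json.dumps(record).encode("utf-8"))

def read_accident_packet(packet):
    """Inverse operation, e.g. on the receiving traffic police system."""
    return json.loads(zlib.decompress(packet).decode("utf-8"))

packet = build_accident_packet(
    plates=["AB123", "CD456"],                    # hypothetical plate numbers
    initiator_identity={"name": "initiator-01"},  # hypothetical identity record
    responsibilities={"AB123": "full responsibility",
                      "CD456": "no responsibility"},
)
print(read_accident_packet(packet)["responsibilities"]["AB123"])  # full responsibility
```

The resulting bytes are what would be uploaded to the traffic police system, stored locally, or sent over Bluetooth in the later steps.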
After the traffic accident identification data packet is obtained, the traffic accident identification data packet can be uploaded to a traffic police system or other servers or stored locally.
In this embodiment, after step 8, the traffic accident handling method further includes the following steps:
Step 9: detecting a smart phone with Bluetooth started in a preset range, and performing Bluetooth pairing:
A smartphone with Bluetooth enabled is detected within a preset range of the hardware execution body of the traffic accident handling method, and Bluetooth pairing with the smartphone is carried out. It should be understood that the smartphone may belong to another party involved in the traffic accident or to a traffic police officer.
Step 10: after the Bluetooth pairing is completed, the traffic accident identification data packet is sent to the intelligent mobile phone:
After the Bluetooth pairing is completed, a Bluetooth connection with the smartphone can be established, and the obtained traffic accident identification data packet is then sent to the smartphone.
This embodiment also provides a traffic accident handling system based on artificial intelligence and computer vision, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the traffic accident handling method based on artificial intelligence and computer vision provided by this embodiment are implemented.

Claims (6)

1. A traffic accident handling method based on artificial intelligence and computer vision, comprising:
acquiring identity information of an initiator who starts traffic accident handling;
after the identity information is acquired, acquiring at least two traffic accident images, wherein each traffic accident image comprises at least two vehicles;
processing the traffic accident image to obtain initial accident vehicles in the traffic accident image;
carrying out consistency comparison on features of the initial accident vehicles in each traffic accident image, and if a consistency comparison condition is met, obtaining a target area of each traffic accident image, wherein the target area is related to the area occupied by each initial accident vehicle in the traffic accident image;
carrying out ground lane line identification on the target area of each traffic accident image to obtain target ground lane lines in each traffic accident image;
determining traffic accident responsibility of each initial accident vehicle according to the initial accident vehicles and the target ground lane line in each traffic accident image;
obtaining license plate numbers of the initial accident vehicles;
performing data integration on the license plate numbers of the initial accident vehicles, the identity information and the determined traffic accident responsibilities to obtain a traffic accident determination data packet;
The processing of the traffic accident image to obtain an initial accident vehicle in the traffic accident image comprises the following specific steps:
acquiring characteristic data of each vehicle, wherein the characteristic data comprise image data of an area occupied by the corresponding vehicle in the corresponding traffic accident image;
The feature data of every two vehicles in the same traffic accident image are respectively combined to obtain a plurality of feature set data;
Classifying each feature set data to obtain target feature set data, and determining two vehicles corresponding to the target feature set data as initial accident vehicles, wherein the target feature set data is feature set data belonging to a preset target class in each feature set data;
the classifying the feature set data to obtain target feature set data includes:
converting each feature set data into a feature set matrix;
passing each feature set matrix through a preset convolutional neural network to obtain each full-connection layer matrix;
calculating, based on each full-connection layer matrix and a preset parameter matrix, the probability value of each feature set matrix belonging to each preset category;
for any feature set matrix, acquiring the highest probability value, and taking the preset category corresponding to the highest probability value as the category of the feature set data corresponding to the feature set matrix;
acquiring the feature set data belonging to the preset target category among the categories of the feature set data to obtain the target feature set data.
2. The traffic accident handling method based on artificial intelligence and computer vision according to claim 1, wherein the feature of the initial accident vehicle in each traffic accident image is a color feature;
the step of comparing the consistency of the features of the initial accident vehicles in the traffic accident images comprises the following steps:
and identifying the colors of the initial accident vehicles in the traffic accident images, generating color sets corresponding to the traffic accident images, comparing whether the corresponding color sets in the traffic accident images are consistent, and if so, indicating that the consistency comparison condition is met.
3. The traffic accident handling method based on artificial intelligence and computer vision according to claim 1, wherein the performing ground lane line recognition on the target area of each traffic accident image to obtain the target ground lane line in each traffic accident image comprises:
acquiring a target area image of a target area of the traffic accident image;
identifying a target object in the target area image, which is different from the background of the target area image;
determining expressions of all the target objects according to the relative positions of the target objects in the target area image;
Inputting the expression of each target object into a preset ground lane line identification database, and obtaining the expression corresponding to the ground lane line to obtain the target ground lane line; wherein the ground lane line identification database includes at least one expression corresponding to a ground lane line.
4. The traffic accident handling method based on artificial intelligence and computer vision according to claim 1, wherein the determining the traffic accident responsibility of each initial accident vehicle according to the initial accident vehicle and the target ground lane line in each traffic accident image comprises:
According to the relative positions of two initial accident vehicles and a target ground lane line in the traffic accident image, calculating the corresponding line passing area and the relative angle of the two initial accident vehicles, wherein the line passing area is the area of the initial accident vehicle exceeding the target ground lane line in the known travelling direction, and the relative angle is the included angle between the central axis of the initial accident vehicle and the target ground lane line in the known travelling direction;
According to a preset responsibility category database, determining traffic accident responsibility categories corresponding to the corresponding line passing areas and the corresponding angles of the two initial accident vehicles in each traffic accident image; the responsibility category database comprises corresponding relations among the area section of the passing line, the relative angle section and the responsibility category of the traffic accident.
5. The traffic accident handling method based on artificial intelligence and computer vision according to claim 1, wherein the data integration of license plate numbers of the initial accident vehicles, the identity information and the determined traffic accident responsibilities is performed, and after the traffic accident determination data packet is obtained, the traffic accident handling method further comprises the following steps:
Detecting a smart phone with Bluetooth started in a preset range, and performing Bluetooth pairing;
and after the Bluetooth pairing is completed, the traffic accident identification data packet is sent to the intelligent mobile phone.
6. A traffic accident handling system based on artificial intelligence and computer vision, comprising a memory and a processor, and a computer program stored on the memory and running on the processor, characterized in that the processor implements the traffic accident handling method based on artificial intelligence and computer vision according to any one of claims 1-5 when executing the computer program.
CN202110735377.1A 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision Active CN113538193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110735377.1A CN113538193B (en) 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision


Publications (2)

Publication Number Publication Date
CN113538193A CN113538193A (en) 2021-10-22
CN113538193B true CN113538193B (en) 2024-07-16

Family

ID=78097345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110735377.1A Active CN113538193B (en) 2021-06-30 2021-06-30 Traffic accident handling method and system based on artificial intelligence and computer vision

Country Status (1)

Country Link
CN (1) CN113538193B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116189114B (en) * 2023-04-21 2023-07-14 西华大学 Method and device for identifying collision trace of vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135418A (en) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 Traffic accident fix duty method, apparatus, equipment and storage medium based on picture
CN112487498A (en) * 2020-12-16 2021-03-12 京东数科海益信息科技有限公司 Traffic accident handling method, device and equipment based on block chain and storage medium
CN112784724A (en) * 2021-01-14 2021-05-11 上海眼控科技股份有限公司 Vehicle lane change detection method, device, equipment and storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2608996B2 (en) * 1991-03-13 1997-05-14 三菱電機株式会社 Vehicle running state storage device
KR100698946B1 (en) * 2005-09-08 2007-03-26 주식회사 피엘케이 테크놀로지 A device for recording the accident information of a vehicle
KR101599628B1 (en) * 2014-08-29 2016-03-04 정유철 Reporting System For Traffic Accident Image
CN104463935A (en) * 2014-11-11 2015-03-25 中国电子科技集团公司第二十九研究所 Lane rebuilding method and system used for traffic accident restoring
CN106157386A (en) * 2015-04-23 2016-11-23 中国电信股份有限公司 Vehicular video filming control method and device
CN106021548A (en) * 2016-05-27 2016-10-12 大连楼兰科技股份有限公司 Remote damage assessment method and system based on distributed artificial intelligent image recognition
CN108154696A (en) * 2017-12-25 2018-06-12 重庆冀繁科技发展有限公司 Car accident manages system and method
CN108399382A (en) * 2018-02-13 2018-08-14 阿里巴巴集团控股有限公司 Vehicle insurance image processing method and device
CN110942623B (en) * 2018-09-21 2022-07-26 斑马智行网络(香港)有限公司 Auxiliary traffic accident handling method and system
CN109671006B (en) * 2018-11-22 2021-03-02 斑马网络技术有限公司 Traffic accident handling method, device and storage medium
CN109743673B (en) * 2018-12-17 2020-11-13 江苏云巅电子科技有限公司 High-precision indoor positioning technology-based traffic accident tracing system and method for parking lot
CN111046212A (en) * 2019-12-04 2020-04-21 支付宝(杭州)信息技术有限公司 Traffic accident processing method and device and electronic equipment
CN111444808A (en) * 2020-03-20 2020-07-24 平安国际智慧城市科技股份有限公司 Image-based accident liability assignment method and device, computer equipment and storage medium
CN212220190U (en) * 2020-05-14 2020-12-25 李娜 Traffic accident traceability system
CN111681336A (en) * 2020-05-14 2020-09-18 李娜 Traffic accident traceability system


Also Published As

Publication number Publication date
CN113538193A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
KR102138082B1 (en) Method, system, device and readable storage medium to realize insurance claim fraud prevention based on multiple image consistency
CN106599792B (en) Method for detecting hand driving violation behavior
CN108537197B (en) Lane line detection early warning device and method based on deep learning
WO2021212659A1 (en) Video data processing method and apparatus, and computer device and storage medium
CN109711264B (en) Method and device for detecting occupation of bus lane
CN108268867B (en) License plate positioning method and device
RU2431190C2 (en) Facial prominence recognition method and device
US10445602B2 (en) Apparatus and method for recognizing traffic signs
CN103902970B (en) Automatic fingerprint Attitude estimation method and system
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
CN111401188B (en) Traffic police gesture recognition method based on human body key point characteristics
CN106529532A (en) License plate identification system based on integral feature channels and gray projection
CN108764096B (en) Pedestrian re-identification system and method
CN106529461A (en) Vehicle model identifying algorithm based on integral characteristic channel and SVM training device
CN111008956B (en) Beam bottom crack detection method, system, device and medium based on image processing
CN108304749A (en) The recognition methods of road speed line, device and vehicle
CN106503748A (en) A kind of based on S SIFT features and the vehicle targets of SVM training aids
CN111950499A (en) Method for detecting vehicle-mounted personnel statistical information
CN112597995B (en) License plate detection model training method, device, equipment and medium
CN112069988A (en) Gun-ball linkage-based driver safe driving behavior detection method
CN113538193B (en) Traffic accident handling method and system based on artificial intelligence and computer vision
CN109784171A (en) Car damage identification method for screening images, device, readable storage medium storing program for executing and server
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
CN107491714B (en) Intelligent robot and target object identification method and device thereof
CN115187549A (en) Image gray processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240617

Address after: Room 906, Building A, Qilin Science and Technology Innovation Park, No. 100 Tianjiao Road, Jiangning District, Nanjing City, Jiangsu Province, 210000

Applicant after: Nanjing Yunlue Software Technology Co.,Ltd.

Country or region after: China

Address before: 523000 Room 308, building 3, No. 2, R & D fifth road, Songshanhu Park, Dongguan City, Guangdong Province

Applicant before: Dongguan green light network technology Co.,Ltd.

Country or region before: China

GR01 Patent grant