
CN113052118A - Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera - Google Patents

Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera

Info

Publication number
CN113052118A
Authority
CN
China
Prior art keywords
traffic
detection
reconstruction
dome camera
scene change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110373598.9A
Other languages
Chinese (zh)
Inventor
顾青
朱广文
张建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Haofang Information Technology Co ltd
Original Assignee
Shanghai Haofang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Haofang Information Technology Co ltd
Priority to CN202110373598.9A
Publication of CN113052118A
Legal status: Withdrawn (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30236 Traffic on road, railway or crossing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for realizing scene change video analysis and detection based on a high-speed dome camera. The method comprises: performing multi-angle, high-precision detection and recognition of basic targets, performing unsupervised clustering learning on the detected targets and their driving tracks, and completing the spatial reconstruction of the traffic monitoring scene; analyzing the driving-track model and determining the lane directions to complete the reconstruction of the traffic incident detection rules; and detecting the targets appearing in the traffic scene, establishing traffic flow data containing the motion tracks of the various targets, and judging traffic incidents by analyzing the traffic flow data. The invention also relates to a corresponding system, device, processor and computer-readable storage medium. By adopting the method, the system, the device, the processor and the computer-readable storage medium, the pressing demands and technical problems in current traffic management are effectively addressed, the state of traffic management is improved, a key technical bottleneck in the industry is broken through, and the upgrading and rapid development of the intelligent transportation industry are greatly promoted.

Description

Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera
Technical Field
The invention relates to the traffic industry, in particular to intelligent analysis applications, and specifically to a method, a system, a device, a processor and a computer-readable storage medium for realizing scene change video analysis and detection based on a high-speed dome camera.
Background
Intelligent analysis of traffic incidents is now widely applied in the traffic industry, and intelligent transportation product manufacturers have released a variety of products. However, as more and more traffic cameras are deployed in a scene, the manual effort of scene modeling, the adaptive capability of the models, and even the lifecycle of project products are significantly affected.
(1) First, in the scene algorithms currently provided by intelligent transportation manufacturers, modeling configuration is basically performed manually: parameter information such as lane lines, lane spacing, lane directions and sensitivity is drawn or configured on the video image according to the actual conditions of the scene and certain criteria and experience. Consequently, the more cameras there are, the larger the configuration workload, which is time-consuming and labor-intensive;
(2) Second, because the configuration is done manually, once modeling is complete the configuration information must match the real-world scene. The scene cannot be changed at will, so modeling is limited to a fixed bullet-camera viewpoint. If the viewing angle or scene changes, manual correction and adjustment are needed again;
(3) Third, the speed dome camera, with its flexible viewing angle and freely adjustable focal length, is used more and more in current traffic scenes and is widely deployed across many industrial fields. Conventional scene modeling is therefore of limited use on a speed dome camera. The common practice today is to define a number of preset positions on the speed dome and model each of them separately; in effect this converts one speed dome into several fixed bullet cameras while adding even more work. Moreover, a temporarily user-defined viewing angle cannot be modeled quickly, so many shortcomings remain.
(4) Manual modeling also has inherent limitations: under the influence of weather, illumination and human intervention it cannot achieve adaptive self-learning and cannot cope with large-scale deployment. Once accuracy drops, false alarms multiply and the intelligent analysis product degenerates into something useless, so experienced engineers must keep making adjustments over a long period.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method, a system, a device, a processor and a computer readable storage medium for realizing scene change video analysis and detection based on a high-speed dome camera, which have the advantages of good robustness, strong adaptability and good autonomous modeling capability.
In order to achieve the above object, the method, system, apparatus, processor and computer readable storage medium for implementing scene change video analysis and detection based on high speed dome camera of the present invention are as follows:
the method for realizing scene change video analysis and detection based on the high-speed dome camera is mainly characterized by comprising the following steps of:
(1) carrying out multi-angle high-precision detection and identification on a basic target by using a target detection algorithm, carrying out unsupervised cluster learning according to a detected target and a driving track, completing spatial reconstruction of a traffic monitoring scene, and completing first-layer spatial relationship reconstruction;
(2) analyzing a traffic track model, determining the lane direction, and completing reconstruction of a traffic incident detection rule and reconstruction of a second-layer traffic rule by combining a spatial structure obtained by the first-layer reconstruction;
(3) the method comprises the steps of detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic events by analyzing the traffic flow data, and completing traffic event detection.
Preferably, the step (2) specifically comprises the following steps:
(2.1) constructing a lane line detection model through the characteristics of the structured road, detecting the edges of the lane lines, and greatly eliminating interference edge signals in the environment;
(2.2) detecting double-edge lane lines;
and (2.3) transforming the detected lane lines into the world coordinate system through inverse perspective mapping, eliminating interferents and false lane lines, and realizing accurate detection of the lane lines.
Preferably, the step (1) specifically comprises the following steps:
(1.1) realizing multi-angle high-precision detection and identification on basic targets under various scenes by using a target detection algorithm;
and (1.2) performing unsupervised clustering learning according to the detection target and the driving track to complete spatial reconstruction of the traffic monitoring scene.
Preferably, the step (3) specifically includes the following steps:
(3.1) detecting vehicles, pedestrians and other targets appearing in a traffic scene by utilizing deep learning and motion detection, and establishing traffic flow data containing motion tracks of various targets;
and (3.2) judging the traffic incident by analyzing the traffic flow data to finish the traffic incident detection.
Preferably, the large-scale areas detected and identified with high precision in step (1.1) are the sky, the road, and other areas.
The system for realizing scene change video analysis and detection based on the high-speed dome camera is mainly characterized by comprising the following components:
the spatial relationship reconstruction function module is used for carrying out multi-angle high-precision detection and identification on a basic target by utilizing a target detection algorithm, carrying out unsupervised cluster learning according to the detected target and the driving track, completing spatial reconstruction of a traffic monitoring scene and completing first-layer spatial relationship reconstruction;
the traffic rule reconstruction function module is used for analyzing the traffic track model, determining the lane direction, and completing the reconstruction of the traffic incident detection rule and the reconstruction of the second-layer traffic rule by combining the spatial structure obtained by the spatial relationship reconstruction function module;
and the traffic incident detection function module is used for detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic incidents by analyzing the traffic flow data and completing traffic incident detection.
The device for realizing scene change video analysis and detection based on the high-speed dome camera is mainly characterized by comprising the following components:
a processor configured to execute computer-executable instructions;
a memory storing one or more computer-executable instructions that, when executed by the processor, perform the steps of the above-described method for performing scene change video analytics detection based on a high-speed dome camera.
The processor for realizing scene change video analysis and detection based on the high-speed dome camera is mainly characterized in that the processor is configured to execute computer executable instructions, and the computer executable instructions are executed by the processor to realize the steps of the method for realizing scene change video analysis and detection based on the high-speed dome camera.
The computer-readable storage medium is primarily characterized by a computer program stored thereon, which is executable by a processor to perform the steps of the above-described method for performing scene change video analysis detection based on a high-speed dome camera.
The method, system, device, processor and computer-readable storage medium for realizing scene change video analysis and detection based on a high-speed dome camera effectively address the pressing demands and technical problems in current traffic management and improve its present state. They give the interaction between AI and equipment in the traffic system an entirely new form, effectively break through a technical bottleneck in the industry, greatly promote the upgrading and rapid development of the intelligent transportation industry, fill gaps in the domestic application of existing artificial intelligence technology in the intelligent transportation field, and achieve a capability upgrade for the industry.
Drawings
Fig. 1 is a flowchart of a method for implementing scene change video analysis and detection based on a high-speed dome camera according to the present invention.
Fig. 2 is a schematic diagram of an embodiment of picture processing of the method for implementing scene change video analysis and detection based on a high-speed dome camera according to the present invention.
Fig. 3 is a schematic diagram of an embodiment of picture processing after inverse perspective transformation in the method for realizing scene change video analysis and detection based on a high-speed dome camera according to the present invention.
Detailed Description
In order to more clearly describe the technical contents of the present invention, the following further description is given in conjunction with specific embodiments. The invention discloses a method for realizing scene change video analysis and detection based on a high-speed dome camera, which comprises the following steps:
(1) carrying out multi-angle high-precision detection and identification on a basic target by using a target detection algorithm, carrying out unsupervised cluster learning according to a detected target and a driving track, completing spatial reconstruction of a traffic monitoring scene, and completing first-layer spatial relationship reconstruction;
(1.1) realizing multi-angle high-precision detection and identification on basic targets under various scenes by using a target detection algorithm;
(1.2) performing unsupervised clustering learning according to the detection target and the driving track to complete spatial reconstruction of a traffic monitoring scene;
(2) analyzing a traffic track model, determining the lane direction, and completing reconstruction of a traffic incident detection rule and reconstruction of a second-layer traffic rule by combining a spatial structure obtained by the first-layer reconstruction;
(2.1) constructing a lane line detection model through the characteristics of the structured road, detecting the edges of the lane lines, and greatly eliminating interference edge signals in the environment;
(2.2) detecting double-edge lane lines;
(2.3) transforming the detected lane lines into the world coordinate system through inverse perspective mapping, eliminating interferents and false lane lines, and realizing accurate detection of the lane lines;
(3) detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic events by analyzing the traffic flow data, and completing traffic event detection;
(3.1) detecting vehicles, pedestrians and other targets appearing in a traffic scene by utilizing deep learning and motion detection, and establishing traffic flow data containing motion tracks of various targets;
and (3.2) judging the traffic incident by analyzing the traffic flow data to finish the traffic incident detection.
In a preferred embodiment of the present invention, the large-scale areas detected and identified with high precision in step (1.1) are the sky, the road, and other areas.
As a preferred embodiment of the present invention, the system for implementing scene change video analysis and detection based on a high-speed dome camera includes:
the spatial relationship reconstruction function module is used for carrying out multi-angle high-precision detection and identification on a basic target by utilizing a target detection algorithm, carrying out unsupervised cluster learning according to the detected target and the driving track, completing spatial reconstruction of a traffic monitoring scene and completing first-layer spatial relationship reconstruction;
the traffic rule reconstruction function module is used for analyzing the traffic track model, determining the lane direction, and completing the reconstruction of the traffic incident detection rule and the reconstruction of the second-layer traffic rule by combining the spatial structure obtained by the spatial relationship reconstruction function module;
and the traffic incident detection function module is used for detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic incidents by analyzing the traffic flow data and completing traffic incident detection.
As a preferred embodiment of the present invention, the apparatus for implementing scene change video analysis and detection based on a high-speed dome camera includes:
a processor configured to execute computer-executable instructions;
a memory storing one or more computer-executable instructions that, when executed by the processor, perform the steps of the above-described method for performing scene change video analytics detection based on a high-speed dome camera.
As a preferred embodiment of the present invention, the processor for implementing scene change video analysis detection based on a high speed dome camera is configured to execute computer-executable instructions, and when the computer-executable instructions are executed by the processor, the steps of the method for implementing scene change video analysis detection based on a high speed dome camera are implemented.
As a preferred embodiment of the present invention, the computer readable storage medium has stored thereon a computer program which is executable by a processor to implement the steps of the above-mentioned method for implementing scene change video analysis detection based on a high speed dome camera.
In the specific implementation of the present invention, drawing on experience accumulated from many parties over long-term service and aiming at the problems currently existing in the industry, the following scheme is proposed: automatically identify parameters such as lane lines, lane spacing and lane directions, improving autonomous modeling capability; provide adaptability across devices such as speed dome cameras, fixed bullet cameras and pan-tilt cameras; solve the problem of model invalidation caused by scene changes after initial modeling by rapidly reconstructing the scene, improving system robustness; and improve adaptability to environmental factors such as weather changes, day-night changes and artificial light sources.
Traditional traffic incident detection is basically implemented from a fixed bullet-camera viewpoint, so parameters such as lane lines, lane directions, and virtual and real lane lines must be set manually; once the camera is moved, all rules become invalid and detection accuracy drops sharply or fails entirely. The scene change video analysis and detection technology solves these problems through a three-layer reconstruction framework and innovatively provides scene change video analysis and detection suitable for dome cameras, bullet cameras, pan-tilt-zoom cameras and the like.
First, the system uses deep learning-based object detection algorithms such as YOLO and SSD to achieve multi-angle, high-precision detection and recognition of basic targets in various scenes, identifies three large-scale regions (sky, road, and other areas), and performs unsupervised clustering learning on the detected targets and their driving tracks to complete the spatial reconstruction of the traffic monitoring scene, namely the first-layer spatial relationship reconstruction.
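The patent gives no code for this clustering step; the following is a minimal illustrative sketch only, assuming Python with NumPy and scikit-learn, trajectories given as lists of image points from a tracker, and DBSCAN as one possible unsupervised clustering choice. The feature construction and all parameter values are assumptions, not the patented implementation.

```python
# Hypothetical sketch: cluster vehicle trajectories to recover the road-space layout.
# Assumes each trajectory is a list of (x, y) image points produced by a detector/tracker.
import numpy as np
from sklearn.cluster import DBSCAN

def trajectory_feature(track, n_samples=8):
    """Resample a trajectory to a fixed length and append its mean heading."""
    track = np.asarray(track, dtype=float)
    idx = np.linspace(0, len(track) - 1, n_samples).astype(int)
    pts = track[idx]                                   # (n_samples, 2) resampled points
    d = np.diff(track, axis=0)
    heading = np.arctan2(d[:, 1], d[:, 0]).mean()      # average direction of motion
    return np.concatenate([pts.ravel(), [heading]])

def cluster_tracks(tracks, eps=40.0, min_samples=5):
    """Group trajectories that share the same lane/region of the scene."""
    feats = np.stack([trajectory_feature(t) for t in tracks])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    return labels                                      # -1 marks outlier tracks

# Each resulting cluster approximates one movement pattern; the convex hull of its
# points gives a drivable region usable in the spatial reconstruction step.
```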
secondly, determining the lane direction by analyzing a driving track model, and finishing the reconstruction of a traffic incident detection rule by combining a space structure obtained by the reconstruction of the first layer, namely the reconstruction of a traffic rule of the second layer;
based on the characteristic that the structured highway has a flat road, the lane line detection algorithm firstly constructs a lane line detection model according to the characteristic of the structured road; secondly, Sobel is adoptedxThe method detects the interference edge information in the eliminating environment with the great lane line edge; then, adopting an improved Hough transformation method to detect the double-edge lane lines; and finally, the detected lane lines are transformed into a world coordinate system through inverse perspective by the algorithm, interference and false lane line 2 are eliminated through the characteristic that two sides of the lane lines are parallel to each other in the world coordinate system, and accurate detection of the lane lines is finally realized.
Finally, deep learning and motion detection are used to detect vehicles, pedestrians and other targets appearing in the traffic scene, traffic flow data containing the motion tracks of the various targets are established, and traffic incidents such as stopped vehicles, wrong-way driving, pedestrians, spilled objects and accidents are judged by analyzing the traffic flow data, completing traffic incident detection, namely the third-layer traffic incident reconstruction.
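As a rough illustration of how traffic events might be judged from such trajectory data (not the patent's actual rules), the sketch below checks two of the listed events, stopped vehicles and wrong-way driving; the thresholds and the lane_direction input are hypothetical.

```python
# Hypothetical sketch: judge simple traffic events from accumulated traffic-flow data.
# lane_direction is a reference heading (radians) learned for a lane; names are illustrative.
import numpy as np

def detect_events(track, lane_direction, stop_speed=1.0, angle_tol=np.pi / 2):
    """Return a list of event labels for one trajectory of (x, y, t) samples."""
    track = np.asarray(track, dtype=float)
    events = []

    # Parking / stopped vehicle: displacement per unit time stays below a threshold.
    dt = np.diff(track[:, 2])
    speed = np.linalg.norm(np.diff(track[:, :2], axis=0), axis=1) / np.maximum(dt, 1e-6)
    if np.median(speed) < stop_speed:
        events.append("parking")

    # Wrong-way driving: overall heading opposes the learned lane direction.
    disp = track[-1, :2] - track[0, :2]
    heading = np.arctan2(disp[1], disp[0])
    diff = np.abs(np.arctan2(np.sin(heading - lane_direction),
                             np.cos(heading - lane_direction)))
    if diff > angle_tol:
        events.append("wrong_way")

    return events
```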
Through the three-layer traffic incident reconstruction model, when the visual angle of the dome camera is changed, the system can automatically identify lanes in the current scene, draw lane lines and other road information, and finally realize detection and analysis of various traffic incidents in the scene.
An expressway is a structured road with standardized road markings and boundaries: the roadside boundary lines delimit the road area, and the lane lines divide the road area into different lanes. Together, the road boundary and the lane lines define the vehicle's driving area and provide a criterion for vision-based judgment of vehicle violations.
The actual road conditions are complex and changeable, but with prior knowledge the road model can be constrained, which greatly simplifies the problem. First, image processing is a data-heavy task and the front-end edge device must process a large amount of data in a short time, so reducing the data volume is an important way to improve system efficiency, and selecting a region of interest does this effectively. Second, real roads are not all flat and most have some gradient, but within a limited field of view the surface of a structured road can be treated as approximately flat, which simplifies the correspondence between the world coordinate system and the camera coordinate system. These two prior-knowledge constraints effectively reduce the computational load and improve device recognition efficiency.
1. Building spatial relationships
Analysis of the lane images collected by the front-end edge device shows that they contain data such as the houses and trees on both sides of the lane and the sky ahead, which carry essentially no lane information. Processing the collected images directly would increase the complexity of the detection algorithm, reduce lane detection efficiency, and let the irrelevant information interfere with accurate extraction of the lane lines. It is therefore necessary to extract a valid lane region of interest and filter out the interfering information before detecting the lane lines. On a structured expressway, lane line information is mainly distributed in the lower half of the image, so the region of interest is set to 0-H/n; this effectively reduces the data processing load by 50% and eliminates the interference of the sky ahead and the trees and houses on both sides of the road.
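A minimal sketch of the region-of-interest step, assuming OpenCV in Python; the crop fraction roi_top is an illustrative stand-in for the 0-H/n split mentioned above.

```python
# Hypothetical sketch: keep only the road-bearing part of the frame before lane detection.
import cv2

def extract_roi(frame, roi_top=0.5):
    """Crop the lower part of the frame, where lane markings appear on structured roads."""
    h = frame.shape[0]
    y0 = int(h * roi_top)          # discard sky, trees and buildings above this row
    return frame[y0:, :], y0       # return the crop plus its vertical offset

# Usage: roi, offset = extract_roi(cv2.imread("frame.jpg"))
```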
1.1 Inverse perspective transformation
The camera imaging process can be regarded as pinhole imaging, and image acquisition is a perspective projection transformation in which the world coordinate system W is mapped to the image coordinate system I. Under perspective transformation, lane lines of uniform width and parallel direction become gradually thinner in the distance and intersect at a vanishing point. To facilitate lane line detection and multi-lane segmentation, a perspective transformation of the acquired images is required.
Because the field of view of the acquired images is limited and the processed region of interest on the structured highway is limited, the road can be considered flat, so the perspective mapping can be simplified. Let (x, y) be a point in the image coordinate system I and (X, Y) the corresponding point in the world coordinate system W; the perspective mapping can then be expressed as:
$$P_I = H\,Q_W$$

where

$$P_I = [x, y, 1]^{T}, \qquad Q_W = [X, Y, 1]^{T}, \qquad H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}$$

H is the perspective transformation matrix and contains 8 unknown parameters (h_33 is normalized to 1). Solving for the parameters of the transformation matrix requires 4 pairs of corresponding points between the world coordinate system and the image coordinate system.
The inverse perspective transformation restores the real-world geometry of objects and establishes the mapping between the two-dimensional image coordinate system and the three-dimensional world coordinate system. In the objective world, all parallel lines intersect at a vanishing point after being projected onto the image plane. Therefore, by the definition of the vanishing point, straight lines passing through the vanishing point in the image are mutually parallel in the world coordinate system; accordingly, lane segmentation of the structured road can be completed using the vanishing point and the detected lane lines.
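The following sketch shows one way the perspective mapping P_I = H Q_W and its inverse could be computed with OpenCV; the four point correspondences are invented placeholders, and the helper name to_world is an assumption reused in a later sketch.

```python
# Hypothetical sketch of the perspective / inverse-perspective mapping P_I = H * Q_W.
import cv2
import numpy as np

# Four image points (pixels) and the matching ground-plane points (e.g. metres); placeholders only.
img_pts   = np.float32([[420, 700], [860, 700], [760, 420], [520, 420]])
world_pts = np.float32([[0, 0], [7.5, 0], [7.5, 40], [0, 40]])

H = cv2.getPerspectiveTransform(world_pts, img_pts)      # world -> image (8 parameters)
H_inv = np.linalg.inv(H)                                 # image -> world (inverse perspective)

def to_world(points_xy):
    """Map image pixel coordinates to the flat-road world coordinate system."""
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_inv).reshape(-1, 2)

# A full bird's-eye view of a frame can be produced the same way, e.g.:
# birdseye = cv2.warpPerspective(frame, H_inv, (out_w, out_h))
```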
2. Lane line detection and identification
A structured highway lane line has several characteristics: the road surface is flat, the left and right lane lines are approximately parallel, the color contrast between the road surface and the lane lines is high, and the curvature radius of curves is not less than 650 m.
Because structured-road lane lines have distinct, high-contrast features, their edge features can easily be extracted by edge-based detection methods. Even when the road is shadowed and the lane lines are damaged or poorly continuous, the lane lines can still be fitted well by inverse perspective transformation combined with Hough line detection. The lane line detection method based on edge features can therefore effectively overcome the influence of external conditions such as lane line defects and road shadows, and it is robust in such environments.
2.1 Preprocessing and denoising
The color images collected contain a large amount of visual information. Since the contrast between the lane lines and the road surface in structured-road images is high, complete lane lines can easily be extracted from the grayscale image; processing in grayscale therefore increases detection speed, effectively reduces the data volume, and improves lane line detection efficiency.
Current image denoising methods fall mainly into two categories: spatial domain and frequency domain. Frequency-domain denoising transforms the image into the frequency domain according to its frequency characteristics, filters out the noise frequencies with a constructed filter, and converts the denoised image back to the spatial domain. Spatial-domain denoising removes noise based on the relationship between a pixel and its neighboring pixels. Because the edge device imposes strict real-time requirements on the detection algorithm and the quality of the acquired images is stable, a spatial-domain denoising method based on median filtering is adopted: it suppresses impulse noise markedly and, compared with Gaussian-blur denoising, better preserves lane line edge information.
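A minimal preprocessing sketch along the lines described above, assuming OpenCV; the 5x5 median kernel is an assumed value not given in the text.

```python
# Hypothetical sketch: grayscale conversion followed by spatial-domain median filtering.
import cv2

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # drop colour, keep lane/road contrast
    return cv2.medianBlur(gray, 5)                   # suppress impulse noise, preserve edges
```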
2.2 Lane line detection
To address the problems of the classical Hough transform in lane line detection, two improvements are adopted:
1) The edge detection algorithm is changed to reduce non-lane-line edge points arising from trees, shadows, speed bumps and similar clutter;
2) A region-of-interest model for lane line detection is established to reduce the number of edge points scanned during the Hough transform and to increase line detection speed.
(1) Lane line edge extraction
At present, lane line edge detection commonly uses Canny-based edge extraction. The Canny method obtains relatively complete lane edges, but it also fully extracts the edges of interfering objects such as trees, shadows and speed bumps.
Analysis shows that Canny edge detection yields fairly complete lane line edges but also a great many interfering edges. Compared with the standard Sobel and Canny edge methods, the Sobel_x edge detection method markedly reduces the detection of interferents while barely affecting lane line edge detection, and most of the interferent edges it does detect are discontinuous.
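An illustrative Sobel_x edge-extraction sketch, assuming OpenCV; the normalization and threshold value are assumptions. Mostly vertical lane markings respond strongly to the horizontal gradient, while horizontal clutter such as shadows and speed bumps is attenuated.

```python
# Hypothetical sketch: Sobel_x (horizontal-gradient) edge extraction for lane markings.
import cv2
import numpy as np

def sobel_x_edges(gray, thresh=40):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)                  # d/dx only
    mag = np.uint8(255 * np.abs(gx) / (np.abs(gx).max() + 1e-6))     # normalize to 0..255
    _, edges = cv2.threshold(mag, thresh, 255, cv2.THRESH_BINARY)
    return edges
```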
(2) Improved lane line detection
The improved lane edge extraction method, on the one hand, reduces the non-lane edge information detected on lane lines and thus the interference with lane line detection; on the other hand, by reducing the number of non-lane-line edge points it improves the efficiency of the Hough transform.
In the lane line model, several straight lines may be detected in each lane line candidate region, so after Hough line detection a cluster of lines is obtained for each lane line. Since all parallel lines in the objective world intersect at the vanishing point after projection onto the image plane, the straight lines of true lane lines are mutually parallel in the world coordinate system. Based on this principle, the detected lane lines are transformed into the world coordinate system through the inverse perspective transformation matrix H; using the constraint that the two edges of a lane line are parallel in the world coordinate system, accurate lane line detection can be achieved while interferents and false lane lines are eliminated, completing accurate lane line localization.
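A hedged sketch of this step: probabilistic Hough line detection on the Sobel_x edge map, followed by a parallelism check in world coordinates, reusing the hypothetical to_world helper from the inverse-perspective sketch above; all thresholds are illustrative.

```python
# Hypothetical sketch: Hough line candidates filtered by parallelism after inverse perspective.
import cv2
import numpy as np

def detect_lane_lines(edges, to_world, angle_tol_deg=5.0):
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)
    if segs is None:
        return []

    candidates = []
    for x1, y1, x2, y2 in segs[:, 0]:
        (X1, Y1), (X2, Y2) = to_world([(x1, y1), (x2, y2)])          # map endpoints to world
        angle = np.degrees(np.arctan2(Y2 - Y1, X2 - X1)) % 180.0
        candidates.append(((x1, y1, x2, y2), angle))

    # Keep the dominant orientation: true lane lines are mutually parallel in world space.
    ref = np.median([a for _, a in candidates])
    return [seg for seg, a in candidates
            if min(abs(a - ref), 180 - abs(a - ref)) < angle_tol_deg]
```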
3. Optimization against illumination and climate effects
Although image-based weather state recognition is of great value, the problem has not been completely solved; nevertheless, the influence of illumination and climate on traffic recognition can be optimized. Five features of the image (sky, shadow, reflection, contrast and texture) are extracted and then classified using a voting mechanism; for matching images captured by the same sensor from different viewpoints, the Harris-SIFT algorithm is applied. Features such as the HSI color histogram are used to distinguish sunny and rainy weather states of the images in the system, and a method for detecting, tracking and classifying vehicles in traffic image sequences captured by a fixed single camera is realized. The weather state of an image is judged by matching a bag-of-words model with a spatial pyramid: SIFT feature descriptors are extracted from the image and clustered to form a dictionary, the dictionary is used to build statistical histograms, the histograms are hierarchically counted through a spatial pyramid matching model, and finally the features generated by the model are used for image training and testing.
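As one possible reading of the bag-of-words weather classifier described above (the spatial pyramid level is omitted for brevity), the sketch below clusters SIFT descriptors into a visual dictionary, builds word histograms and trains a linear SVM; the dictionary size and classifier choice are assumptions.

```python
# Hypothetical sketch: bag-of-words weather-state classification from SIFT descriptors.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def sift_descriptors(gray):
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_dictionary(train_grays, k=64):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack([sift_descriptors(g) for g in train_grays])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(gray, dictionary):
    """Represent one image as a normalized histogram over the visual dictionary."""
    desc = sift_descriptors(gray)
    words = dictionary.predict(desc) if len(desc) else np.array([], int)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-6)

def train_weather_classifier(train_grays, labels, dictionary):
    """Fit a linear SVM on the bag-of-words histograms (labels e.g. 'sunny'/'rainy')."""
    X = np.stack([bow_histogram(g, dictionary) for g in train_grays])
    return LinearSVC().fit(X, labels)
```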
For a specific implementation of this embodiment, reference may be made to the relevant description in the above embodiments, which is not described herein again.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by suitable instruction execution devices. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, and the corresponding program may be stored in a computer readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The method, system, device, processor and computer-readable storage medium for realizing scene change video analysis and detection based on a high-speed dome camera effectively address the pressing demands and technical problems in current traffic management and improve its present state. They give the interaction between AI and equipment in the traffic system an entirely new form, effectively break through a technical bottleneck in the industry, greatly promote the upgrading and rapid development of the intelligent transportation industry, fill gaps in the domestic application of existing artificial intelligence technology in the intelligent transportation field, and achieve a capability upgrade for the industry.
In this specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (9)

1. A method for realizing scene change video analysis and detection based on a high-speed dome camera is characterized by comprising the following steps:
(1) carrying out multi-angle high-precision detection and identification on a basic target by using a target detection algorithm, carrying out unsupervised cluster learning according to a detected target and a driving track, completing spatial reconstruction of a traffic monitoring scene, and completing first-layer spatial relationship reconstruction;
(2) analyzing a traffic track model, determining the lane direction, and completing reconstruction of a traffic incident detection rule and reconstruction of a second-layer traffic rule by combining a spatial structure obtained by the first-layer reconstruction;
(3) the method comprises the steps of detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic events by analyzing the traffic flow data, and completing traffic event detection.
2. The method for realizing scene change video analysis and detection based on the high-speed dome camera according to claim 1, wherein the step (2) specifically comprises the following steps:
(2.1) constructing a lane line detection model through the characteristics of the structured road, detecting the edges of the lane lines, and greatly eliminating interference edge signals in the environment;
(2.2) detecting double-edge lane lines;
and (2.3) transforming the detected lane lines into the world coordinate system through inverse perspective mapping, eliminating interferents and false lane lines, and realizing accurate detection of the lane lines.
3. The method for realizing scene change video analysis and detection based on the high-speed dome camera according to claim 1, wherein the step (1) specifically comprises the following steps:
(1.1) realizing multi-angle high-precision detection and identification on basic targets under various scenes by using a target detection algorithm;
and (1.2) performing unsupervised clustering learning according to the detection target and the driving track to complete spatial reconstruction of the traffic monitoring scene.
4. The method for realizing scene change video analysis and detection based on the high-speed dome camera according to claim 1, wherein the step (3) specifically comprises the following steps:
(3.1) detecting vehicles, pedestrians and other targets appearing in a traffic scene by utilizing deep learning and motion detection, and establishing traffic flow data containing motion tracks of various targets;
and (3.2) judging the traffic incident by analyzing the traffic flow data to finish the traffic incident detection.
5. The method according to claim 3, wherein the large-scale areas detected and identified with high precision in step (1.1) are the sky, the road, and other areas.
6. A system for realizing scene change video analysis and detection based on a high-speed dome camera is characterized by comprising:
the spatial relationship reconstruction function module is used for carrying out multi-angle high-precision detection and identification on a basic target by utilizing a target detection algorithm, carrying out unsupervised cluster learning according to the detected target and the driving track, completing spatial reconstruction of a traffic monitoring scene and completing first-layer spatial relationship reconstruction;
the traffic rule reconstruction function module is used for analyzing the traffic track model, determining the lane direction, and completing the reconstruction of the traffic incident detection rule and the reconstruction of the second-layer traffic rule by combining the spatial structure obtained by the spatial relationship reconstruction function module;
and the traffic incident detection function module is used for detecting targets appearing in a traffic scene, establishing traffic flow data containing various target motion tracks, judging traffic incidents by analyzing the traffic flow data and completing traffic incident detection.
7. An apparatus for implementing scene change video analysis and detection based on a high-speed dome camera, the apparatus comprising:
a processor configured to execute computer-executable instructions;
a memory storing one or more computer-executable instructions that, when executed by the processor, perform the steps of the method of performing scene change video analytics detection based on a high speed dome camera of any one of claims 1 to 5.
8. A processor for implementing scene change video analysis detection based on a high speed dome camera, wherein the processor is configured to execute computer executable instructions, which when executed by the processor, implement the steps of the method for implementing scene change video analysis detection based on a high speed dome camera according to any one of claims 1 to 5.
9. A computer-readable storage medium, having stored thereon a computer program executable by a processor to perform the steps of the method for high speed dome camera based scene change video analytics detection as claimed in any one of claims 1 to 5.
CN202110373598.9A 2021-04-07 2021-04-07 Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera Withdrawn CN113052118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110373598.9A CN113052118A (en) 2021-04-07 2021-04-07 Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110373598.9A CN113052118A (en) 2021-04-07 2021-04-07 Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera

Publications (1)

Publication Number Publication Date
CN113052118A true CN113052118A (en) 2021-06-29

Family

ID=76518818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110373598.9A Withdrawn CN113052118A (en) 2021-04-07 2021-04-07 Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera

Country Status (1)

Country Link
CN (1) CN113052118A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230254A (en) * 2017-08-31 2018-06-29 北京同方软件股份有限公司 A kind of full lane line automatic testing method of the high-speed transit of adaptive scene switching
CN108961758A (en) * 2018-08-03 2018-12-07 东华理工大学 A kind of crossing broadening lane detection method promoting decision tree based on gradient
US20200180612A1 (en) * 2018-12-10 2020-06-11 Mobileye Vision Technologies Ltd. Navigation in vehicle crossing scenarios
CN109584558A (en) * 2018-12-17 2019-04-05 长安大学 A kind of traffic flow statistics method towards Optimization Control for Urban Traffic Signals
CN110070642A (en) * 2019-03-22 2019-07-30 天津大学 A kind of traffic accident responsibility appraisal procedure and device based on deep learning
CN110378824A (en) * 2019-06-26 2019-10-25 公安部交通管理科学研究所 A kind of public security traffic control data brain and construction method
CN111368742A (en) * 2020-03-05 2020-07-03 江苏警官学院 Double-yellow traffic marking reconstruction identification method and system based on video analysis
CN111583693A (en) * 2020-05-07 2020-08-25 中国农业大学 Intelligent traffic cooperative operation system for urban road and intelligent vehicle control method
CN111667512A (en) * 2020-05-28 2020-09-15 浙江树人学院(浙江树人大学) Multi-target vehicle track prediction method based on improved Kalman filtering
CN111841012A (en) * 2020-06-23 2020-10-30 北京航空航天大学 Automatic driving simulation system and test resource library construction method thereof
CN112053556A (en) * 2020-08-17 2020-12-08 青岛海信网络科技股份有限公司 Traffic monitoring compound eye dynamic identification traffic accident self-evolution system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KONG F et al.: "Short-term traffic flow prediction in smart multimedia system for Internet of Vehicles based on deep belief network", Future Generation Computer Systems, vol. 93, 31 December 2019 (2019-12-31), pages 460-472, XP093107005, DOI: 10.1016/j.future.2018.10.052 *
钱基德 (Qian Jide) et al.: "Fast lane line detection algorithm based on a region-of-interest model" (基于感兴趣区域模型的车道线快速检测算法), Journal of University of Electronic Science and Technology of China (电子科技大学学报), vol. 47, no. 3, 30 May 2018 (2018-05-30), page 357 *
高冬冬 (Gao Dongdong): "Research on parking and wrong-way driving detection based on vehicle tracking trajectories" (基于车辆跟踪轨迹的停车和逆行检测研究), China Master's Theses Full-text Database, Engineering Science and Technology II (中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)), no. 2, 15 February 2016 (2016-02-15), pages 40-46 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114038197A (en) * 2021-11-24 2022-02-11 浙江大华技术股份有限公司 Scene state determination method and device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN111178236B (en) Parking space detection method based on deep learning
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
Hadi et al. Vehicle detection and tracking techniques: a concise review
CN112287860B (en) Training method and device of object recognition model, and object recognition method and system
WO2018023916A1 (en) Shadow removing method for color image and application
CN109446917B (en) Vanishing point detection method based on cascading Hough transform
Peng et al. Drone-based vacant parking space detection
CN115049700A (en) Target detection method and device
Timofte et al. Combining traffic sign detection with 3D tracking towards better driver assistance
Wang et al. An overview of 3d object detection
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
CN111488808A (en) Lane line detection method based on traffic violation image data
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN114049572A (en) Detection method for identifying small target
CN112287859A (en) Object recognition method, device and system, computer readable storage medium
CN112395962A (en) Data augmentation method and device, and object identification method and system
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
Dimitrievski et al. Semantically aware multilateral filter for depth upsampling in automotive lidar point clouds
FAN et al. Robust lane detection and tracking based on machine vision
CN116643291A (en) SLAM method for removing dynamic targets by combining vision and laser radar
CN118411507A (en) Semantic map construction method and system for scene with dynamic target
Xuan et al. Robust lane-mark extraction for autonomous driving under complex real conditions
CN113052118A (en) Method, system, device, processor and storage medium for realizing scene change video analysis and detection based on high-speed dome camera
CN117152949A (en) Traffic event identification method and system based on unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210629)