CN111476107B - Image processing method and device - Google Patents
- Publication number: CN111476107B (application CN202010190279.XA)
- Authority: CN (China)
- Legal status: Active (an assumption by Google Patents, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2193—Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
An embodiment of the invention discloses an image processing method comprising: determining a first image set, where the first image set comprises a plurality of first images captured of a target vehicle in the same scene during the same time period; determining one or more second image sets from the first image set, where each of the one or more second image sets comprises a plurality of second images and records one pending violation driving process of the target vehicle; and labeling the reference position of a traffic sign in at least one of the one or more second image sets and identifying the position change of the target vehicle in each second image set, so as to judge whether the pending violation corresponding to each second image set is in fact a violation. The embodiment improves the efficiency of preprocessing test data and allows recognition algorithms to be evaluated effectively.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to an image processing method and device.
Background
In current traffic management systems, a picture recognition algorithm is generally used to identify the pictures shot by cameras, so as to judge whether the driving behavior of the corresponding vehicle in the pictures violates the rules. Selecting a suitable recognition algorithm from the many available image recognition algorithms is therefore vital to recognizing and judging traffic violations accurately. When selecting an algorithm, the precision and accuracy of each candidate must be evaluated effectively against a unified algorithm-precision standard, so as to judge how well the algorithm matches the corresponding scene.
When evaluating a specific recognition algorithm, a large amount of test data (such as picture data) must first be cleaned, screened and classified, which takes time; the preprocessed data is then cropped and annotated with the relevant information. These manual operations are slow and error-prone, which leads to inaccurate labeling of the reference objects in the pictures.
How to improve the efficiency of preprocessing data before testing, so that image recognition algorithms can be evaluated effectively, is therefore a problem to be solved.
Disclosure of Invention
The embodiments of the invention provide an image processing method and device that improve the efficiency of preprocessing data before testing, so that picture recognition algorithms can be evaluated effectively.
In a first aspect, an embodiment of the present invention provides an image processing method, which may include:
determining a first image set, where the first image set comprises a plurality of first images captured of a target vehicle in the same scene during the same time period;
determining one or more second image sets from the first image set, where each of the one or more second image sets comprises a plurality of second images and records one pending violation driving process of the target vehicle;
labeling the reference position of a traffic sign in at least one of the one or more second image sets, and identifying the position change of the target vehicle in each of the one or more second image sets, so as to judge whether the pending violation corresponding to each second image set is in fact a violation.
By implementing this embodiment, a plurality of pictures with the same image size and image resolution are screened from the picture data. Pictures taken by the same camera (i.e. the same checkpoint, or "bayonet") in a particular scene have the same size, and at a particular moment nearly the same resolution, so these first pictures can be regarded as shot by the same camera. One or more groups of second pictures are then screened from the first pictures according to the shooting rule for second pictures (for example, the camera shoots a second picture before the violation occurs, one during the violation, and one after it ends), so that each group of second pictures completely records one violation driving process. One picture is selected from any group of second pictures, and the reference objects related to the violation standard, such as the traffic-light position, the stop-line position and the solid-white-line position, are labeled in it, automatically generating the relevant labeling information. The remaining pictures are then labeled in batches according to the already labeled picture: a packaged adaptive algorithm matches the labeling information to all pictures according to the relation between their resolutions, completing the batch labeling. Alternatively, all references contained in the pictures can be labeled at once according to labeling rules. Preprocessing the test data by program in this way greatly shortens the preparation and preprocessing time of the test data and improves test efficiency.
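The adaptive matching step described above can be sketched as simple coordinate scaling between picture resolutions. The function and field names below are illustrative assumptions; the patent does not disclose its packaged adaptive algorithm:

```python
def scale_annotations(annotations, src_res, dst_res):
    """Scale labeled reference positions (x, y) from a source picture
    resolution to a target resolution. Hypothetical helper, not the
    patent's algorithm; reference names are examples only."""
    sx = dst_res[0] / src_res[0]
    sy = dst_res[1] / src_res[1]
    return {name: (round(x * sx), round(y * sy))
            for name, (x, y) in annotations.items()}

# Label once on a 1920x1080 reference picture, then adapt the labels
# to the 1280x720 pictures from the same checkpoint.
labels = {"stop_line": (400, 900), "traffic_light": (960, 120)}
adapted = scale_annotations(labels, (1920, 1080), (1280, 720))
```

Under this sketch, one manually labeled picture is enough to annotate every other picture from the same checkpoint whose resolution is known.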
A script reads the labeling information in batches and feeds the test data to the algorithm under test in batches, which simplifies the manual steps for testers and avoids errors. Through this automatic labeling and statistics script, a standard test report is generated automatically, the test results are recorded, and an algorithm confusion matrix convenient for analysis is produced; the test results are classified as required, so that the algorithm team can analyze the tested picture results, improve algorithm precision and optimize product recognition accuracy.
In one possible implementation, each of the plurality of first images includes an identification indicating the first image; the determining the first set of images includes: and determining the first image set according to the identification of the first image.
In one possible implementation, the image size and the image resolution of each of the plurality of first images are the same.
In one possible implementation, labeling the reference position of the traffic sign in at least one of the one or more second image sets, and identifying the position change of the target vehicle in each of the one or more second image sets so as to judge whether the corresponding pending violation is in fact a violation, includes:
labeling a reference position of a traffic sign in a target second image set in the one or more second image sets;
and labeling, according to the reference position of the traffic sign in the target second image set and a position labeling algorithm, the reference positions of the traffic signs in the second image sets other than the target second image set.
In one possible implementation, identifying the position change of the target vehicle in each of the one or more second image sets so as to judge whether the corresponding pending violation is in fact a violation includes: identifying, through a traffic recognition algorithm, the position change of the target vehicle in each of the one or more second image sets, and obtaining a judgment result for the pending violation corresponding to each second image set, where the judgment result is either violation driving or non-violation driving.
In one possible implementation, the method further includes:
comparing the judgment result of the pending violation corresponding to each of the one or more second image sets with the reference result corresponding to that second image set; and
evaluating the accuracy of the traffic recognition algorithm.
In one possible implementation, the evaluating the accuracy of the traffic recognition algorithm includes:
evaluating the accuracy of the traffic recognition algorithm through precision, recall and the balanced F score.
In one possible implementation, a plurality of third pictures is acquired, where the image sizes and image resolutions of the third pictures may be the same or different, and the plurality of first pictures with the same image size and image resolution is determined from the plurality of third pictures.
In one possible implementation, the shooting rule for the second pictures is: shoot one or more second pictures before a violation driving process occurs; shoot one or more second pictures while it occurs; and shoot one or more second pictures after it ends.
In one possible implementation, the traffic sign includes a traffic signal light and/or a traffic marking; for example, the traffic signal lights may include red and green lights, and the traffic markings may include stop lines and solid white lines.
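The shooting rule above (one or more pictures before, during and after each violation) implies that a time-ordered picture stream can be cut into second image sets at each new "before" shot. The `phase` field and the function below are hypothetical illustrations of that grouping, not code from the patent:

```python
def group_violation_sets(shots):
    """Group a time-ordered stream of shots into second image sets.
    Each shot is a dict with an (assumed) 'phase' key of 'before',
    'during' or 'after'; a new set starts at each 'before' shot."""
    sets, current = [], []
    for shot in shots:
        if shot["phase"] == "before" and current:
            sets.append(current)   # previous violation event is complete
            current = []
        current.append(shot)
    if current:
        sets.append(current)
    return sets

# Two complete violation driving processes in one stream:
shots = [{"phase": p, "t": i} for i, p in enumerate(
    ["before", "during", "after", "before", "during", "after"])]
events = group_violation_sets(shots)
```

Each resulting group then records one pending violation driving process from start to finish, as required of a second image set.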
In a second aspect, an embodiment of the present invention provides an image processing apparatus, which may include:
a determining unit, configured to determine a first image set, where the first image set comprises a plurality of first images captured of a target vehicle in the same scene during the same time period;
a screening unit, configured to determine one or more second image sets from the first image set, where each of the one or more second image sets comprises a plurality of second images and records one pending violation driving process of the target vehicle;
a labeling unit, configured to label the reference position of a traffic sign in at least one of the one or more second image sets, and to identify the position change of the target vehicle in each of the one or more second image sets, so as to judge whether the pending violation corresponding to each second image set is in fact a violation.
In one possible implementation, each of the plurality of first images includes an identification indicating the first image; the determining unit is specifically configured to determine the first image set according to the identifier of the first image.
In one possible implementation, the image size and the image resolution of each of the plurality of first images are the same.
In a possible implementation manner, the labeling unit is specifically configured to:
labeling a reference position of a traffic sign in a target second image set in the one or more second image sets;
and labeling, according to the reference position of the traffic sign in the target second image set and a position labeling algorithm, the reference positions of the traffic signs in the second image sets other than the target second image set.
In a possible implementation, the labeling unit is specifically configured to: identify, through a traffic recognition algorithm, the position change of the target vehicle in each of the one or more second image sets, and obtain a judgment result for the pending violation corresponding to each second image set, where the judgment result is either violation driving or non-violation driving.
In a possible implementation, the apparatus further comprises an evaluation unit for:
compare the judgment result of the pending violation corresponding to each of the one or more second image sets with the reference result corresponding to that second image set; and
evaluate the accuracy of the traffic recognition algorithm.
In a possible implementation, the evaluation unit is specifically configured to:
evaluate the accuracy of the traffic recognition algorithm through precision, recall and the balanced F score.
In a third aspect, an embodiment of the present invention provides an image processing apparatus, including a processor and a memory, where the processor and the memory are connected to each other, and the memory is configured to store a computer program, where the computer program includes program instructions, and where the processor is configured to invoke the program instructions to perform the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to the first aspect.
In a fifth aspect, embodiments of the present invention provide a computer program comprising instructions which, when executed by a computer, cause the computer to perform part or all of the steps of any one of the image processing methods of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments will be briefly described below.
FIG. 1 is a schematic diagram of a system architecture for image processing according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a scene to which an image processing method according to an embodiment of the present invention is applied;
Fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural view of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide an image processing method and device that can rapidly and effectively preprocess test data and evaluate picture recognition algorithms.
The terms "comprising" and "having" and any variations thereof, as used in the description of embodiments of the invention, the claims and the drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order. The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention.
First, some terms in the embodiments of the present invention are explained for easy understanding by those skilled in the art.
(1) A public security checkpoint (the "bayonet") is short for a road-traffic public-security checkpoint monitoring system: a road-traffic scene monitoring system that photographs, records and processes all motor vehicles passing through checkpoint points at specific places on a road, such as toll stations and traffic or public-security check stations.
(2) The traditional image recognition flow has four steps: image acquisition, image preprocessing, feature extraction and image recognition. Recognition can be based on the main features of an image; every image has its own features, e.g. the letter A has a sharp point, P has a circle, and the center of Y has an acute angle.
(3) Precision (accuracy) is an indicator of the signal-to-noise ratio of a retrieval system, i.e. the percentage of the retrieved documents that are relevant. It is generally expressed as: precision = (relevant information retrieved / total information retrieved) × 100%.
(4) Recall (recall ratio) is the proportion of the relevant information retrieved from a database to the total amount of relevant information in it. Its absolute value can be estimated from the database's content and size.
(5) The F1 score is a statistical measure of the accuracy of a binary classification model that considers both the model's precision and its recall. It can be regarded as the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
(6) Traffic signals may be categorized into traffic lights, traffic signs, traffic markings, etc.
(7) A hypertext transfer protocol request (HTTP request) is a request message sent from a client to a server; its request line contains the request method for a resource, the identifier of the resource and the protocol used.
(8) Python is a cross-platform, object-oriented, dynamically typed computer programming language. Originally designed for writing automation scripts, it is increasingly used for independent, large-scale project development as versions are updated and new language features are added.
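The precision, recall and F1 definitions in terms (3)-(5) can be computed from raw counts as follows. This is the standard formulation, not code from the patent:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts; returns 0.0 where a denominator is zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# e.g. 8 violations correctly flagged, 2 false alarms, 2 missed:
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)  # precision = recall = 0.8, f1 ≈ 0.8
```

Since F1 is the harmonic mean of precision and recall, it penalizes an algorithm that trades one metric heavily against the other, which is why the evaluation uses all three.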
The following describes one system architecture on which embodiments of the invention are based; the image processing method provided herein can be applied to it. Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture for image processing according to an embodiment of the invention. As shown in fig. 1, the architecture includes terminals and a server. A terminal needs the ability to take pictures and to communicate (network): it photographs vehicles, pedestrians or target objects in a specific area or scene, and sends the pictures to the server over the network so that the server can process them further. A terminal in this embodiment may be a camera, mobile phone, tablet computer, notebook computer, palmtop computer, mobile internet device or other mobile terminal.
A terminal may be a device at the outermost periphery of a computer network, used for inputting information (e.g. image data). It may also be referred to as a system, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, mobile terminal, wireless communication device, user agent or user equipment (UE). For example, the terminal may be a cellular phone, cordless phone, smart watch, wearable device, tablet device, session initiation protocol (SIP) phone, wireless local loop (WLL) station, personal digital assistant (PDA), handheld device with wireless communication capability, computing device, in-vehicle communication module, smart meter, or other processing device with basic photographing and communication functions connected to a wireless modem. In the framework of fig. 1, a terminal may upload images to the server on a certain upload period or in real time. The number of terminals is not limited in this embodiment: terminal 1 (i.e. checkpoint 1), terminal 2 (checkpoint 2), through terminal N (checkpoint N) in the figure are exemplary and do not fix the number of terminals connected to a server or server group.
A server or server group is a device that provides computing services; since it must respond to and process service requests, it should generally be able to bear the service and guarantee it. A server contains a processor, hard disk, memory, system bus and so on, similar to a general-purpose computer architecture, but because it must provide highly reliable services it has higher demands on processing capacity, stability, reliability, security, scalability and manageability. Depending on the service provided in a network environment, servers are divided into file servers, database servers, application servers, WEB servers and the like. The server shown in the figure performs algorithm recognition on the picture data according to the target algorithm. Specifically, it collects the image data of each checkpoint, then stores the pictures of different checkpoints under the corresponding checkpoint folders by a packaged method, according to the checkpoint name of each picture and the picture size (determined by the image size and image resolution); crops the pictures automatically according to how the pictures of each checkpoint are arranged (one camera may shoot several pictures, giving an image data stream containing multiple pictures); selects one picture as the labeling picture from which labeling information is extracted; automatically generates the relevant labeling information from the references related to the violation standard in that picture, such as the traffic-light, stop-line and solid-white-line positions; and adaptively matches the labeling information to all pictures, through a packaged adaptive algorithm, according to the relation between picture resolutions, completing the batch labeling.
The labeled files are then read folder by folder (each folder corresponding to the picture set shot by one camera), all pictures in each folder are tested in batches with scripts, and the required results are counted.
Finally, after the test ends, the test results are automatically matched with the expected results, and the parameters required by the evaluation formula are counted, giving the degree of match between the target algorithm and the application scene.
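The matching of test results against expected results can be sketched as a tally of the confusion-matrix counts that the precision/recall/F1 formulas consume. The binary verdicts and key names here are illustrative assumptions, not the patent's report format:

```python
from collections import Counter

def confusion_counts(predicted, expected):
    """Tally TP/FP/FN/TN for a binary 'violation' verdict by matching
    each algorithm output against the expected (reference) result."""
    c = Counter()
    for pred, exp in zip(predicted, expected):
        if pred and exp:
            c["tp"] += 1        # violation correctly recognized
        elif pred and not exp:
            c["fp"] += 1        # false alarm
        elif not pred and exp:
            c["fn"] += 1        # missed violation
        else:
            c["tn"] += 1        # correctly cleared
    return c

verdicts  = [True, True, False, False]   # algorithm output per second image set
reference = [True, False, True, False]   # expected (labeled) results
counts = confusion_counts(verdicts, reference)
```

These four counts are exactly the entries of the 2x2 confusion matrix the text says is generated for the algorithm team's analysis.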
It will be appreciated that the illustration of fig. 1 is merely an exemplary implementation of an embodiment of the present invention. The system architecture in the embodiments of the present invention may include, but is not limited to, the above system architecture.
To facilitate understanding of the embodiments of the application, an example scene to which the image processing method applies is a vehicle passing a traffic light in violation:
Referring to fig. 2, fig. 2 is a schematic diagram of an application scenario of the image processing method according to an embodiment of the invention; the scenario includes a terminal (a camera, in the example of fig. 2) and a server, connected wirelessly, e.g. through a network. As shown in fig. 2, when the traffic light in front of the zebra crossing indicates that pedestrians may cross, pedestrians cross the zebra crossing within a preset time period; during this time a vehicle must stop a certain distance from the zebra crossing and is prohibited from passing the light. Suppose the vehicle runs the red light, i.e. it drives onto the zebra crossing although the signal prohibits it from moving on; then a camera placed at the roadside, or a snapshot device mounted on the traffic-light rail, records one or more pictures of the violation. The camera may also take one or more pictures of the scene after the violation has ended and the vehicle has left it; pictures taken after a violation driving maneuver can record the current state of the scene, including any damage to the scene or to pedestrians caused by the vehicle, which facilitates subsequent accident grading. Optionally, the camera automatically takes one or more pictures whenever a vehicle is within its shooting range; further optionally, it photographs the various angles or scenes within its range on a fixed shooting period.
It can be appreciated that the application scenario in fig. 2 is just a few exemplary implementations of the embodiment of the present invention, and the application scenario in the embodiment of the present invention includes, but is not limited to, the above application scenario.
The technical problems set forth in the embodiments of the present invention are specifically analyzed and solved in the following in conjunction with the system architecture and the embodiments of the image processing method provided in the embodiments of the present invention.
Referring to fig. 3, fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the invention; the method can be applied to an image processing system (including the architecture above), and in particular to traffic-violation recognition scenes. Taking a server as the executing entity, with reference to fig. 3, the method may include the following steps S301 to S305, of which steps S304 and S305 are optional.
Step S301: a first set of images is determined.
Specifically, after receiving image data sent by a plurality of cameras (bayonets), the server first classifies the image data and determines the image data captured within a preset time period in a given scene. The first image set comprises a plurality of first images, and the plurality of first images are captured images of a target vehicle in the same time period in the same scene. For example, the image data is first parsed, and each image is saved into the folder of the corresponding bayonet according to its image identifier (for example, the bayonet number of the image or the name of the camera shooting a certain scene; the embodiment of the present invention is described taking a bayonet as an example). Then, the pictures in each bayonet folder are sorted; for example, the pictures in a folder are classified according to their resolution and a preset classification rule. Pictures with different resolutions have different positions corresponding to the labeled standard information. For example, pictures taken in the daytime and pictures taken at night have different resolutions, so the positions of objects such as traffic lights in pictures taken by the same camera may deviate to a certain extent. To facilitate the judgment of the violation recognition algorithm, the resolution of the pictures is preprocessed in advance. For example, the pictures of the same bayonet (i.e., multiple images captured by one camera) may be stored in two types of folders, such as daytime captured images and nighttime captured images.
Optionally, the two steps of saving the pictures into the corresponding folders and sorting the pictures within the folders can be completed synchronously. It is understood that a bayonet may refer to a camera at a road traffic security checkpoint; for example, all motor vehicles passing through a specific location on a road, such as a toll station or a traffic or security check station, are photographed and recorded. Each camera has its own camera number, and the type of picture each camera captures is fixed, e.g., the picture format, image resolution, picture pixels, etc. The picture type may also cover the camera's shooting scene. For example, if camera A is installed at an expressway entrance, the pictures shot by camera A relate only to vehicles entering and exiting the expressway; for another example, if camera B is installed at an intersection, the pictures shot by B record the movement of motor vehicles, non-motor vehicles, pedestrians and the like on that road section in different time periods, or capture illegal driving.
Optionally, the picture shot by each camera carries the serial number information of that camera, so as to indicate which camera took it.
Optionally, the first images in the first image set may have the same image size and image resolution; in the same scene, different cameras may take pictures from different angles (keeping the image size and the image resolution consistent makes it convenient for the server to subsequently use these features as screening criteria for a series of pictures taken in the scene).
Each of the plurality of first images includes an identification indicating that first image, and determining the first image set includes: determining the first image set according to the identifications of the first images. Specifically, the identification of a first image may include the number of the camera, the name of the shooting scene, and the name of the violation type corresponding to the camera.
In one possible implementation manner, a plurality of third pictures are acquired, where the image sizes and image resolutions of the third pictures may be the same or different; the plurality of first pictures with the same image size and image resolution are then determined from the plurality of third pictures. The third pictures may include images captured by all cameras; that is, before the first image set is determined, the captured images of every camera are acquired.
Optionally, the pictures are classified according to the shooting areas of the different cameras (bayonets); or the plurality of pictures are divided into N classes according to picture area and picture resolution, where each of the N classes corresponds to a preset picture area and picture resolution; each class includes multiple groups of pictures; each group of pictures comprises a plurality of pictures; and each group of pictures corresponds to one illegal action.
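The screening of same-size, same-resolution first pictures from the third pictures can be sketched as follows. This is only an illustration: the majority-resolution rule used here is an assumption for the sketch, not the patent's preset classification rule:

```python
from collections import Counter

def select_first_images(third_pictures):
    """From (name, (width, height)) pairs covering all captured (third)
    pictures, keep the pictures sharing the most common resolution, as a
    stand-in for the same-size/same-resolution screening step."""
    counts = Counter(resolution for _, resolution in third_pictures)
    target, _ = counts.most_common(1)[0]
    return [name for name, resolution in third_pictures if resolution == target]
```

For example, two 1920x1080 pictures and one 1280x720 picture would yield a first image set containing only the two 1920x1080 pictures.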
Step S302: one or more second image sets are determined from the first image set.
Specifically, the server screens out from the first image set the pictures related to vehicles suspected of driving in violation, and takes all pictures related to each suspected violation as one second image set, thereby determining one or more second image sets. That is, the first image set includes one or more images of the pending illegal driving behavior of one or more target vehicles. Optionally, a bayonet imaging device initially recognizes and captures only one type of pending driving violation.
Optionally, one or more second image sets are determined from a subset of the first image set according to the shooting rule of the pictures. For example, shooting is triggered when it is preliminarily recognized that vehicle A has performed an illegal action (such as running a red light): a picture is taken when vehicle A enters the field of view while the red light is lit, and another is taken when vehicle A continues to advance through the red light. The embodiment of the invention does not limit the number of images shot. In a possible implementation manner, the shooting rule of the second pictures is that one or more second pictures are taken before a single illegal driving process occurs, one or more second pictures are taken while the illegal driving process occurs, and one or more second pictures are taken after the illegal driving process has occurred.
For example, the picture data stream shot by each bayonet over a period of time is acquired; the bayonet attaches a corresponding identifier to the shot picture data and, in particular, distinguishes picture data related to illegal actions after shooting. For example, when no illegal action is detected, the identifier of the shot picture data is A; when an illegal behavior is being shot, the obtained picture data is identified as B; and when shooting is performed after the illegal act has ended, the obtained picture data is identified as C. Optionally, there is at least one picture identified as A, B or C.
For another example, when there is one picture of each of the three types, the picture arrangement of the bayonet is one row and three columns; when there are two pictures of each of the three types, the picture arrangement of the bayonet is two rows and three columns.
It will be appreciated that embodiments of the present invention do not limit the number of identifiers, such as A, B and C. The whole illegal activity may also be subdivided further (e.g., into pre-violation, during-violation and post-violation sub-stages). In the subdivided case, the picture arrangement obtained by the bayonet can be M rows and N columns, i.e., the whole process of the illegal behavior is divided into N stages, with M pictures in each stage (M and N being integers larger than 0). The picture data stream is cut according to the picture data identifiers corresponding to the picture arrangement of each bayonet (for example, one row and three columns, three pictures in total) to obtain multiple groups of pictures (a group of pictures being the picture set of one violation), where each group of pictures comprises at least one picture related to the vehicle. How the pictures are cut depends mainly on the arrangement of the pictures shot by the corresponding bayonet, and is not repeated here.
Alternatively, when a group of pictures (i.e., an image set) contains one picture, that group (i.e., the one picture taken at a stage) is identified and cut from the data stream. Alternatively, when a group of pictures contains a plurality of pictures, the group is cut first, and then the individual pictures are cut from the group. Further alternatively, the pictures are classified according to resolution, the picture data of the same resolution class is cut, and the pictures of that resolution are sequentially filled into the picture rows and columns of the bayonet. For example, at an intersection, the bayonet is an intersection camera dedicated to capturing vehicles running red lights. In order to recognize that the behavior of a vehicle is running a red light, the picture arrangement input to the algorithm is a group of pictures in one row and three columns (for example, a set of related pictures of an illegal behavior taken in the daytime at a light intensity of 1000 lx or the like). According to the one-row-three-column arrangement, three pictures of the same resolution are filled in sequence: for example, picture A shows the vehicle behind the stop line, picture B shows the vehicle on the stop line, and picture C shows the vehicle beyond the stop line. A subsequent algorithm makes its judgment based on these three pictures. It can be understood that, from pictures A and B alone, it cannot be accurately determined that the vehicle ran the red light.
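The stage-identifier cutting described above can be sketched as follows, assuming each frame in the stream carries one of the stage identifiers (A, B, C) and a complete group holds M pictures per stage; this is an illustration of the cutting step, not the patented procedure:

```python
def cut_groups(stream, rows=1, stages=("A", "B", "C")):
    """Cut a picture data stream into violation groups.

    ``stream`` yields (stage_id, picture) pairs; a complete group holds
    ``rows`` pictures per stage (M rows x N columns). Each emitted group
    is a list of N columns, each column a list of M pictures.
    """
    group = {s: [] for s in stages}
    for stage_id, picture in stream:
        if stage_id not in group:
            continue                      # ignore frames outside the scheme
        group[stage_id].append(picture)
        if all(len(v) >= rows for v in group.values()):
            yield [group[s][:rows] for s in stages]
            group = {s: [] for s in stages}
```

For a one-row, three-column bayonet, each yielded group is one [A], [B], [C] triple ready to be fed to the recognition algorithm.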
Step S303: marking the reference position of the traffic sign in at least one second image set in the one or more second image sets, and respectively identifying the position change of the target vehicle in the one or more second image sets to judge whether the pending illegal running corresponding to the one or more second image sets is illegal.
Specifically, the server labels the traffic signs in the second images included in each second image set, for example labeling signal lights, solid white lines, turn marks, and the like. Optionally, under different violation judgment scenarios, different reference positions of traffic signs are set for different illegal driving behaviors. For example, in a red-light-running recognition scenario, traffic lights, zebra crossings and similar content are among the important traffic signs that must be labeled.
In one possible implementation manner, labeling the reference position of the traffic sign in at least one second image set of the one or more second image sets, and respectively identifying the position change of the target vehicle in the one or more second image sets to determine whether the pending illegal driving corresponding to each of the one or more second image sets is a violation, includes: labeling the reference position of the traffic sign in a target second image set of the one or more second image sets; and labeling the reference positions of the traffic signs in the second image sets other than the target second image set according to the reference position of the traffic sign in the target second image set and a position labeling algorithm. For example, the first picture of a certain bayonet (i.e., a selected picture to be manually labeled) is manually labeled with the "violation standard information", that is, the "violation labeling information" in the picture, such as the traffic light position, the stop line position and the solid white line position. These positions are among the reference positions used by the algorithm to determine vehicle violations. For another example, the traffic recognition algorithm determines whether the behavior of the vehicle in an input image group is a violation with reference to the labeled positions, the traffic rules of the scene, and preset judgment rules. Further, one picture is selected from the three pictures of the one-row-three-column group as the labeling picture, and the labeling information in that picture is extracted after its labeling is completed. It can be appreciated that the selected labeling picture should cover the basic feature values of its picture class (such as a group of red-light-running pictures).
After one picture in a group is labeled, the other pictures in the same group can be labeled automatically. Specifically, for the pictures of a certain resolution at a certain bayonet, the labeling information on the labeling picture is matched onto all pictures of the same resolution by an adaptive algorithm, thereby completing the batch labeling operation. Optionally, the labeling information can be added in batches to the other pictures of the same resolution at the same bayonet through the adaptive algorithm, using the picture arrangement format, the single labeling picture containing the labeling information, and the corresponding picture resolution parameters.

After the feature extraction of one group of pictures of a bayonet is completed, other groups of pictures of that bayonet transmitted subsequently can undergo the same picture preprocessing operations such as classification, cutting and labeling, thereby completing the batch labeling operation. The labeling information of the labeling picture and the picture resolution are the parameters of the adaptive algorithm.
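A minimal sketch of the batch-labeling idea follows. It assumes labels are plain pixel rectangles and that, as stated above, pictures of the same bayonet and resolution share the same sign positions; the names are illustrative, and this is not the patent's adaptive algorithm itself:

```python
from dataclasses import dataclass

@dataclass
class Label:
    name: str        # e.g. "traffic_light", "stop_line", "white_solid_line"
    box: tuple       # (x, y, w, h) in pixels of the reference position

def propagate_labels(template_labels, template_resolution, pictures):
    """Copy the labels of one annotated picture onto every picture of the
    same resolution; pictures of other resolutions get no labels and need
    their own labeled template."""
    labeled = {}
    for name, resolution in pictures:
        if resolution == template_resolution:
            labeled[name] = list(template_labels)   # same reference positions
        else:
            labeled[name] = []
    return labeled
```

A nighttime picture set, having a different resolution, would be left empty here and labeled from its own template, mirroring the per-resolution folders described earlier.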
In one possible implementation manner, respectively identifying the position change of the target vehicle in the one or more second image sets to determine whether the pending illegal driving corresponding to each of the one or more second image sets is a violation includes: respectively identifying the position change of the target vehicle in the one or more second image sets through a traffic recognition algorithm, and obtaining the judgment results of the pending illegal driving corresponding to each of the one or more second image sets, where a judgment result is either illegal driving or non-illegal driving.
For example, in each folder, the labeled files (i.e., the labeled pictures) are read and placed in an HTTP request. Then, by sending the request, the batch-generated labeling information and the pictures under test are sent to the algorithm to be evaluated. A python script is used to automatically test all pictures of all folders and count the results. Specifically, using the python script (which invokes the illegal-behavior recognition algorithm to be evaluated), all pictures in a certain folder are tested in batches. During testing, it is generally necessary to monitor the interface of the target algorithm and analyze the incoming pictures, the returned results and the expected results to determine the accuracy of the algorithm. It will be appreciated that an image recognition algorithm generally provides an image input interface to the service user and returns the corresponding result through that interface, so it is this input interface and its results that must be monitored.
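The batch-testing script can be sketched as follows. The endpoint URL and the "illegal" response field are assumptions for illustration, not the patent's actual interface; separating the HTTP call from the tallying makes the tally testable against any recognizer:

```python
import json
from urllib import request

def recognize_via_http(endpoint: str, payload: bytes) -> bool:
    """Send one picture group to the algorithm's HTTP interface and read
    its verdict. The endpoint and the "illegal" field are assumed names."""
    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return bool(json.load(resp)["illegal"])

def batch_test(groups, expected, recognize):
    """Run every (group_id, payload) pair through ``recognize`` (e.g. the
    HTTP call above) and tally its verdicts against the manually judged
    expected results."""
    tally = {"match": 0, "mismatch": 0}
    for group_id, payload in groups:
        verdict = recognize(payload)
        tally["match" if verdict == expected[group_id] else "mismatch"] += 1
    return tally
```

In use, `batch_test(groups, expected, lambda p: recognize_via_http(url, p))` tests one folder's groups against the algorithm under evaluation.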
Optionally, according to the bayonet name corresponding to a picture and the picture size, the pictures of different bayonets are stored under the corresponding bayonet folders by a packaged method; according to the different picture arrangements at each bayonet, the pictures are cut automatically (one camera can shoot several pictures), and one picture is selected from them as the labeling picture for extracting the labeling information. The extracted labeling information can then be used to label images of the same type.
Step S304: and comparing the judging result of the undetermined illegal running corresponding to the one or more second image sets with the reference result corresponding to the one or more second image sets.
Specifically, the server compares the results given by the traffic recognition algorithm for the pending illegal driving corresponding to each second image set with the real judgment results. A real judgment result may be a conclusion obtained by manual judgment, or by the recognition algorithm with the highest accuracy using the same method or calculation. For example, the test results (i.e., predicted results) are matched against the expected results (i.e., the corresponding real cases in the table below), where an expected result is a manual analysis and determination, for the data (i.e., pictures) passed into the algorithm interface, of whether the behavior involved in the pictures is a violation. The expected results may also be obtained directly from a third-party database (e.g., a road traffic monitoring system).
Step S305: and evaluating the accuracy of the traffic recognition algorithm.
Specifically, the server compares traffic recognition algorithms by one or more algorithm evaluation metrics. The images are processed according to the preceding steps, but a different algorithm under test is substituted for each recognition run; each algorithm that completes the recognition test yields a result report used to evaluate how well that algorithm matches the current recognition scenario.
In one possible implementation, evaluating the accuracy of the traffic recognition algorithm includes: evaluating the accuracy of the traffic recognition algorithm through the precision, the recall and the balanced F score. For example, the parameters required by the following formulas are counted, thereby obtaining the accuracy of the traffic recognition algorithm. Referring to table 1, table 1 is a comparison table of the test results and the real results of the traffic recognition algorithm:
TABLE 1

                             Real case: violation     Real case: non-violation
Test result: violation       TP (true positive)       FP (false positive)
Test result: non-violation   FN (false negative)      TN (true negative)
Then, the Precision, i.e., among the samples predicted by the algorithm to be positive, the proportion of true positive samples, is:

P = TP / (TP + FP)

where P represents the precision, TP is the number of true positives in the test results, and FP is the number of false positives in the test results (i.e., the results given by the traffic recognition algorithm for the images) — cases that are actually non-violations but that the algorithm under test judged to be violations. A positive example indicates a judgment of violation, while a negative example indicates a judgment of non-violation.
The Recall, i.e., over all actually positive samples, the proportion predicted by the algorithm to be positive, is:

R = TP / (TP + FN)

where FN is the case in which the algorithm under test judges the result to be non-violating while the real situation is a violation, and TN is the case in which the algorithm under test judges the result to be non-violating and the real situation is indeed a non-violation.
The balanced F score, also known as the F1-Score, is defined as the harmonic mean of the precision and the recall:

F1 = 2 × P × R / (P + R)
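The three metrics above can be computed directly from the tallied confusion-matrix counts; a minimal sketch:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute precision P = TP/(TP+FP), recall R = TP/(TP+FN) and the
    balanced F score F1 = 2*P*R/(P+R) from the confusion-matrix counts,
    returning 0.0 for any ratio whose denominator is zero."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, 8 true positives, 2 false positives and 2 false negatives give P = 0.8, R = 0.8 and F1 = 0.8.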
The packaged distribution script is used to label the test results on the pictures and to classify and manage the pictures corresponding to different test results; for example, pictures of illegal actions are placed in one folder and pictures of non-illegal actions in another. Optionally, the pictures can be subdivided further: the pictures under the same folder are subdivided according to the type of illegal action and stored accordingly, to facilitate subsequent viewing.
Optionally, the relevant records are simultaneously converted into a test report, recording the algorithm's scores and the causes of failures.
Alternatively, the test results may include the precision, recall and balanced F score, as well as some other results, such as the average, maximum and minimum time taken by the algorithm to process a single picture.
By implementing the embodiment of the invention, the evaluation efficiency of the algorithm is improved, mainly by preprocessing the image data to be tested and analyzing the algorithm's results. The received picture data is cut into pictures in a fixed arrangement according to the picture arrangement of the bayonet. One picture is labeled, and the remaining pictures of the same resolution are labeled in batches according to the labeled picture (pictures of other resolutions are handled in the same way). The labeled pictures are input into the algorithm under test to obtain its results, and the degree to which the algorithm matches the scenario is evaluated from indexes such as the recall and the precision.
The foregoing details the method according to the embodiments of the present invention, and the following provides relevant apparatuses according to the embodiments of the present invention.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus 40 may include a determining unit 401, a screening unit 402, a labeling unit 403, and an evaluating unit 404, of which the evaluating unit 404 is optional.
A determining unit 401, configured to determine a first image set, where the first image set includes a plurality of first images, and the plurality of first images are captured images for a target vehicle in the same time period in the same scene;
A screening unit 402, configured to determine one or more second image sets from the first image sets, where each second image set of the one or more second image sets includes a plurality of second images, and each second image set records a pending violation driving procedure of the target vehicle;
the labeling unit 403 is configured to label a reference position of a traffic sign in at least one second image set of the one or more second image sets, and identify a change in a position of the target vehicle in the one or more second image sets, respectively, so as to determine whether the pending violation running corresponding to the one or more second image sets is violated.
In one possible implementation, each of the plurality of first images includes an identification indicating the first image; the determining unit 401 is specifically configured to determine the first image set according to the identifier of the first image.
In one possible implementation, the image size and the image resolution of each of the plurality of first images are the same.
In a possible implementation manner, the labeling unit 403 is specifically configured to:
Labeling a reference position of a traffic sign in a target second image set in the one or more second image sets; and labeling the reference positions of the traffic marks in the second image sets except the target second image set in the one or more second image sets according to the reference positions of the traffic marks in the target second image set and a position labeling algorithm.
In a possible implementation manner, the labeling unit 403 is specifically configured to: respectively identifying the position change of the target vehicle in the one or more second image sets through a traffic identification algorithm, and obtaining the judgment result of the undetermined illegal running corresponding to the one or more second image sets respectively; the judging result comprises illegal driving and non-illegal driving.
In a possible implementation, the apparatus further comprises an evaluation unit 404 for:
comparing the judging result of the undetermined illegal running corresponding to the one or more second image sets with the reference result corresponding to the one or more second image sets respectively; and evaluating the accuracy of the traffic recognition algorithm.
In a possible implementation manner, the evaluation unit 404 is specifically configured to:
And evaluating the accuracy of the traffic recognition algorithm through the precision, recall and balance F score.
It should be noted that, for the functions of the functional units of the image processing apparatus 40 described in this apparatus embodiment of the present invention, reference may be made to the related description of the image processing method in the method embodiment described in fig. 3, which is not repeated here.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the image processing apparatus 50 may be implemented with the structure in fig. 5 and may include at least one storage unit 501, at least one processing unit 502, and at least one communication unit 503. In addition, the device may include common components such as an antenna and a power supply, which are not described in detail herein.
The storage unit 501 may comprise one or more memories and may be used to store programs and various data and to enable high-speed, automatic access to programs or data during the operation of the apparatus 50. A physical device having two stable states, denoted "0" and "1" respectively, may be employed to store the information. The storage unit 501 may be, but is not limited to, a read-only memory (Read-Only Memory, ROM) or other type of static storage device that can store static information and instructions, a random access memory (Random Access Memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk storage (which may include compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be standalone and coupled to the processor via a bus, or may be integrated with the processor.
The processing unit 502 may also be referred to as a processor, a processing board, a processing module, a processing device, etc. The processing unit may be a central processing unit (central processing unit, CPU), a network processor (network processor, NP), a combination of a CPU and an NP, a microprocessor, an application-specific integrated circuit (application-specific integrated circuit, ASIC), or one or more integrated circuits for controlling the execution of the above program.
The communication unit 503 may be called a transceiver, or the like, and may be used to communicate with other devices or a communication network, and may include a unit for performing wireless, wired, or other communication methods.
When the image processing apparatus 50 is the server shown in fig. 1 or fig. 2, the processing unit 502 is configured to call the data of the storage unit 501 to perform the following operations: determining a first image set, wherein the first image set comprises a plurality of first images, and the plurality of first images are shot images aiming at a target vehicle in the same time period under the same scene; determining one or more second image sets from the first image sets, wherein each second image set in the one or more second image sets comprises a plurality of second images, and each second image set records a pending illegal driving process of the target vehicle; marking the reference position of the traffic sign in at least one second image set in the one or more second image sets, and respectively identifying the position change of the target vehicle in the one or more second image sets to judge whether the pending illegal running corresponding to the one or more second image sets is illegal.
The embodiment of the invention also provides a computer storage medium, wherein the computer storage medium can store a program, and the program can include part or all of the steps of any one of the method embodiments when being executed.
The embodiments of the present invention also provide a computer program comprising instructions which, when executed by a computer, cause the computer to perform part or all of the steps of any one of the image processing methods.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
In the embodiment of the present invention, the units described as separate units may or may not be physically separated, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional component in each embodiment of the present invention may be integrated in one component, or each component may exist alone physically, or two or more components may be integrated in one component. The above-described integrated components may be implemented in hardware or in software functional units.
The integrated components, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Although the embodiments of the present invention have been described in detail in the foregoing description, the scope of the embodiments is not limited thereto; various equivalent modifications and substitutions that can be made by those skilled in the art within the scope of the embodiments of the present invention are intended to be covered by the present invention. Therefore, the protection scope of the embodiments of the present invention shall be subject to the protection scope of the claims.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present invention. Although the embodiments of the present invention have been described herein with reference to various examples, other variations of the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention.
Claims (9)
1. An image processing method, characterized by being applied to evaluation of a traffic recognition algorithm, comprising:
determining a first image set, wherein the first image set comprises a plurality of first images, and the plurality of first images are images taken of a target vehicle in the same time period under the same scene;
determining one or more second image sets from the first image set, wherein each of the one or more second image sets comprises a plurality of second images, and each second image set records a pending illegal driving process of the target vehicle;
marking a reference position of a traffic sign in at least one of the one or more second image sets, and respectively identifying a position change of the target vehicle in the one or more second image sets to determine whether the pending illegal driving corresponding to each of the one or more second image sets is illegal, wherein marking the reference position of the traffic sign in at least one of the one or more second image sets comprises: marking the reference position of the traffic sign in a target second image set among the one or more second image sets; and marking the reference positions of the traffic signs in the second image sets other than the target second image set among the one or more second image sets according to the reference position of the traffic sign in the target second image set and a position labeling algorithm, wherein the reference position comprises a traffic light position, a stop line position and a white solid line position.
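As a rough illustration of the judgment step in claim 1, the sketch below checks whether a target vehicle crossed a labeled stop-line position while the light was red. All function names, the coordinate convention, and the light-state labels are hypothetical assumptions for illustration, not details taken from the patent:

```python
# Hypothetical sketch of the violation check in claim 1: given the labeled
# stop-line position (a y coordinate in image space, y growing downward)
# and the target vehicle's bounding box in each frame of a second image
# set, decide whether the vehicle moved from behind the stop line to past
# it during a red phase.

def crossed_on_red(frames, stop_line_y):
    """frames: time-ordered list of (vehicle_box, light_state) tuples.
    vehicle_box is (x1, y1, x2, y2); light_state is 'red' or 'green'.
    Returns True if the vehicle's front edge crosses stop_line_y on red."""
    was_behind = False
    for (x1, y1, x2, y2), light in frames:
        front_y = y2  # bottom edge approximates the vehicle's front
        if front_y < stop_line_y:            # still behind the stop line
            was_behind = True
        elif was_behind and light == 'red':  # crossed the line while red
            return True
    return False
```

A real traffic recognition algorithm would of course also track the vehicle across frames and handle the white-solid-line position; this only shows the shape of the per-sequence decision.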
2. The method of claim 1, wherein each of the plurality of first images includes an identification indicating the first image; the determining the first set of images includes:
determining the first image set according to the identifications of the first images.
3. The method of claim 1, wherein the image size and image resolution of each of the plurality of first images are the same.
4. The method of claim 1, wherein the respectively identifying the position change of the target vehicle in the one or more second image sets to determine whether the pending illegal driving corresponding to each of the one or more second image sets is illegal comprises:
respectively identifying the position change of the target vehicle in the one or more second image sets through a traffic recognition algorithm, and obtaining a judgment result of the pending illegal driving corresponding to each of the one or more second image sets, wherein the judgment result comprises illegal driving and non-illegal driving.
5. The method according to claim 4, further comprising:
comparing the judgment results of the pending illegal driving corresponding to the one or more second image sets with reference results corresponding to the one or more second image sets respectively; and
evaluating the accuracy of the traffic recognition algorithm.
6. The method of claim 5, wherein said evaluating the accuracy of the traffic recognition algorithm comprises:
evaluating the accuracy of the traffic recognition algorithm by precision, recall and a balanced F-score.
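The evaluation in claims 5 and 6 can be sketched as follows: compare the algorithm's per-sequence judgment results against the reference results, then compute precision, recall, and the balanced F-score (F1). The function name and the label strings are illustrative assumptions; the two labels mirror the outcomes named in claim 4:

```python
# Illustrative sketch of the accuracy evaluation in claims 5-6.
# judgments: labels produced by the traffic recognition algorithm;
# references: ground-truth labels; both use 'illegal' / 'not_illegal'.

def evaluate(judgments, references):
    tp = sum(1 for j, r in zip(judgments, references)
             if j == 'illegal' and r == 'illegal')      # true positives
    fp = sum(1 for j, r in zip(judgments, references)
             if j == 'illegal' and r == 'not_illegal')  # false positives
    fn = sum(1 for j, r in zip(judgments, references)
             if j == 'not_illegal' and r == 'illegal')  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Balanced F-score: harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The balanced F-score here is F1, i.e. the F-beta score with beta = 1, weighting precision and recall equally.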
7. An image processing apparatus for performing the method of any one of claims 1-6, comprising:
a determining unit, configured to determine a first image set, wherein the first image set comprises a plurality of first images, and the plurality of first images are images taken of a target vehicle in the same time period under the same scene;
a screening unit, configured to determine one or more second image sets from the first image set, wherein each of the one or more second image sets comprises a plurality of second images, and each second image set records a pending illegal driving process of the target vehicle;
a marking unit, configured to mark the reference position of the traffic sign in at least one of the one or more second image sets, and to respectively identify the position change of the target vehicle in the one or more second image sets so as to determine whether the pending illegal driving corresponding to each of the one or more second image sets is illegal.
8. An image processing apparatus, characterized by comprising a storage section, a communication section and a processing section connected to one another, wherein the storage section is configured to store data processing code, and the communication section is configured to exchange information with an external apparatus; the processing section is configured to invoke the program code to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010190279.XA CN111476107B (en) | 2020-03-18 | 2020-03-18 | Image processing method and device |
PCT/CN2020/104673 WO2021184628A1 (en) | 2020-03-18 | 2020-07-25 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010190279.XA CN111476107B (en) | 2020-03-18 | 2020-03-18 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111476107A CN111476107A (en) | 2020-07-31 |
CN111476107B true CN111476107B (en) | 2024-11-08 |
Family
ID=71747982
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010190279.XA Active CN111476107B (en) | 2020-03-18 | 2020-03-18 | Image processing method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111476107B (en) |
WO (1) | WO2021184628A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112163538B (en) * | 2020-09-30 | 2023-10-24 | 武汉中科通达高新技术股份有限公司 | Illegal data identification method and device and electronic equipment |
CN113012439B (en) * | 2021-03-29 | 2022-06-21 | 北京百度网讯科技有限公司 | Vehicle detection method, device, equipment and storage medium |
CN113610688B (en) * | 2021-06-25 | 2024-03-12 | 格讯科技(深圳)有限公司 | Based on big food for data analysis Security supervision method and storage medium |
CN114120080B (en) * | 2021-12-02 | 2024-06-14 | 公安部交通管理科学研究所 | Method for identifying vehicle illegal behaviors violating forbidden marks |
CN118247744A (en) * | 2024-04-26 | 2024-06-25 | 北京积加科技有限公司 | Vehicle-related information transmission method, device, equipment and computer readable medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109949579A (en) * | 2018-12-31 | 2019-06-28 | 上海眼控科技股份有限公司 | Automatic auditing method for red-light-running violations based on deep learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003341804A (en) * | 2002-05-29 | 2003-12-03 | Toshiba Corp | Waste transportation monitoring system, cover and container |
CN106571038A (en) * | 2015-10-12 | 2017-04-19 | 原熙 | Method for fully automatically monitoring road |
CN108986474A (en) * | 2018-08-01 | 2018-12-11 | 平安科技(深圳)有限公司 | Traffic accident liability determination method, apparatus, computer device and computer storage medium |
CN109409191A (en) * | 2018-08-24 | 2019-03-01 | 广东智媒云图科技股份有限公司 | Zebra-crossing vehicle yielding detection method and system based on machine learning |
CN109858393A (en) * | 2019-01-11 | 2019-06-07 | 平安科技(深圳)有限公司 | Illegal vehicle recognition method, system, computer device and storage medium |
CN110069982A (en) * | 2019-03-08 | 2019-07-30 | 江苏大学 | Automatic identification method for vehicles and pedestrians |
- 2020
- 2020-03-18 CN CN202010190279.XA patent/CN111476107B/en active Active
- 2020-07-25 WO PCT/CN2020/104673 patent/WO2021184628A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109949579A (en) * | 2018-12-31 | 2019-06-28 | 上海眼控科技股份有限公司 | Automatic auditing method for red-light-running violations based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
WO2021184628A1 (en) | 2021-09-23 |
CN111476107A (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111476107B (en) | Image processing method and device | |
CN107886722A (en) | Driving information handling method and system, terminal and computer-readable recording medium | |
CN105448103B (en) | Vehicle fake-license detection method and system | |
CN104732205A (en) | System for checking expressway toll evasion | |
CN108053653B (en) | Vehicle behavior prediction method and device based on LSTM | |
CN110723432A (en) | Garbage classification method and augmented reality equipment | |
CN110458126B (en) | Pantograph state monitoring method and device | |
CN111369801B (en) | Vehicle identification method, device, equipment and storage medium | |
CN109903172A | Claim settlement information extraction method and device, and electronic equipment | |
CN106919610A (en) | Car networking data processing method, system and server | |
CN110674887A (en) | End-to-end road congestion detection algorithm based on video classification | |
US11948373B2 (en) | Automatic license plate recognition | |
CN110956822B (en) | Fake-licensed vehicle identification method and device, electronic equipment and readable storage medium | |
CN112631896A (en) | Equipment performance testing method and device, storage medium and electronic equipment | |
CN110781195B (en) | System, method and device for updating point of interest information | |
CN103913150A (en) | Consistency detection method for electron components of intelligent ammeter | |
CN113901946A (en) | Abnormal behavior detection method and device, electronic equipment and storage medium | |
CN111967450B (en) | Sample acquisition method, training method, device and system for automatic driving model | |
CN108898196A (en) | Logistics inspection monitoring method, device and type patrol terminal | |
CN110503057A (en) | A kind of information processing method, device, detection node equipment and storage medium | |
CN110826473A (en) | Neural network-based automatic insulator image identification method | |
CN111915895B (en) | A monitoring method and system based on the combination of location information and video verification | |
Łubkowski et al. | Assessment of quality of identification of data in systems of automatic licence plate recognition | |
CN114202919A (en) | Method, device and system for identifying shielding of electronic license plate of non-motor vehicle | |
AU2021240277A1 (en) | Methods and apparatuses for classifying game props and training neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |