
CN110705405B - Target labeling method and device - Google Patents

Target labeling method and device

Info

Publication number
CN110705405B
CN110705405B
Authority
CN
China
Prior art keywords
frame
image frame
labeling
image
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910891996.2A
Other languages
Chinese (zh)
Other versions
CN110705405A (en)
Inventor
蒋晨
张伟
程远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910891996.2A priority Critical patent/CN110705405B/en
Publication of CN110705405A publication Critical patent/CN110705405A/en
Priority to PCT/CN2020/093958 priority patent/WO2021051885A1/en
Application granted granted Critical
Publication of CN110705405B publication Critical patent/CN110705405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

According to one embodiment, a current key frame is obtained, the current key frame being one of a plurality of key frames determined from the image frames of a video stream. The current key frame is then processed with a pre-trained labeling model to obtain a first labeling result for the current key frame, where the labeling model labels, by means of a target frame, a region of a picture that contains a predetermined target. Target labeling is then performed on the non-key frames that follow the current key frame in the video stream based on the first labeling result. In this way, the effectiveness of target labeling can be improved.

Description

Target labeling method and device
Technical Field
One or more embodiments of the present specification relate to the field of computer technology, and more particularly, to a method and apparatus for target labeling by a computer.
Background
In a traditional car insurance vehicle inspection scenario, the vehicle is inspected by professional surveyors of the insurance company. For example, at the time of insurance application it is necessary to check whether the vehicle is damaged, and in a claim settlement scenario the insurance company needs to dispatch professional loss-assessment personnel to the accident site for on-site survey and loss assessment. Because manual survey and loss assessment are required, insurance companies must invest considerable labor cost as well as the cost of training professional expertise. From the perspective of ordinary users, the application and claim settlement processes involve long waits for on-site inspection by human surveyors, resulting in a poor experience.
To address this industry pain point of high labor cost, it has been proposed to apply artificial intelligence and machine learning to vehicle damage detection, using computer vision and image recognition techniques to automatically identify the vehicle damage reflected in on-site images captured by ordinary users. This can greatly reduce labor cost and improve the user experience.
However, conventional automatic identification techniques usually label each picture in isolation. Against this background, it is desirable to provide a general-purpose target labeling method (not limited to labeling vehicle damage) that fully exploits the association between consecutive image frames and improves the effectiveness of target labeling.
Disclosure of Invention
One or more embodiments of the present specification describe a method and apparatus for target annotation that can improve the accuracy of damage identification.
According to a first aspect, there is provided a method for target annotation based on a video stream, the method comprising: acquiring a current key frame, wherein the current key frame is one of a plurality of key frames determined from each image frame of the video stream; performing target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result aiming at the current key frame, wherein the labeling model is used for labeling a region containing a preset target from a picture through a target frame; and performing target labeling on non-key frames after the current key frame in the video stream based on the labeling result aiming at the current key frame.
In one embodiment, the initial plurality of key frames is extracted by any one of:
selecting a plurality of image frames from the video stream as key frames according to a preset time interval;
and inputting the video stream into a pre-trained frame extraction model, and determining a plurality of key frames according to an output result of the frame extraction model.
In one embodiment, the video stream is a vehicle video, the target is vehicle damage, and the annotation model is trained by: acquiring a plurality of vehicle pictures, wherein each vehicle picture corresponds to a sample marking result, and each sample marking result comprises at least one damage frame in the case that the vehicle picture contains vehicle damage, each damage frame being a minimum rectangular frame surrounding a continuous damage area; and training the labeling model based on at least the plurality of vehicle pictures.
In one embodiment, in the video stream, adjacent image frames are respectively recorded as a first image frame and a second image frame, where for the current key frame, the initial first image frame is the current key frame and the initial second image frame is the frame next to the current key frame; the target labeling of the non-key frames after the current key frame in the video stream based on the labeling result for the current key frame comprises: after the first image frame is labeled, detecting whether the second image frame is a key frame; detecting a similarity of the second image frame to the first image frame in a case where the second image frame is not a key frame; if the similarity between the second image frame and the first image frame is greater than a preset similarity threshold, mapping the labeling result corresponding to the first image frame to the second image frame so as to obtain a labeling result corresponding to the second image frame; and respectively updating the first image frame and the second image frame with the second image frame and the frame next to the second image frame, and performing target labeling on the updated second image frame based on the labeling result of the updated first image frame.
In one embodiment, the second image frame is determined to be a key frame if the similarity of the second image frame to the first image frame is less than the similarity threshold.
In one embodiment, determining the similarity of the second image frame to the first image frame comprises: determining a reference region in the first image frame based on the labeling result of the first image frame; respectively processing the reference area in the first image frame and the second image frame by utilizing a predetermined convolutional neural network, and respectively obtaining a first convolution result and a second convolution result; taking the first convolution result as a convolution kernel, performing convolution processing on the second convolution result to obtain a third convolution result, wherein in a numerical array corresponding to the third convolution result, each numerical value respectively describes each similarity of a corresponding area of the second image frame and a reference area of the first image frame; and determining the similarity of the second image frame and the first image frame based on the maximum numerical value in the numerical array corresponding to the third convolution result.
In one embodiment, in a case that the similarity between the second image frame and the first image frame is greater than a preset similarity threshold, the mapping of the labeling result corresponding to the first image frame to the second image frame to obtain a second labeling result corresponding to the second image frame includes: labeling, according to the labeling result of the first image frame, the image area of the second image frame corresponding to the maximum value.
In one embodiment, the determining the reference region in the first image frame based on the annotation result of the first image frame comprises: determining an initial reference area as an area surrounded by a target frame under the condition that the first labeling result contains the target frame; and under the condition that the first labeling result does not contain a target frame, determining an initial reference area as an area at a specified position in the current key frame.
In one embodiment, the current key frame further corresponds to a confidence flag, and the target labeling of a non-key frame following the current key frame in the video stream based on the labeling result for the current key frame includes: and determining the confidence identifications of the non-key frames after the current key frame and before the next key frame, wherein the confidence identifications are consistent with the confidence identifications corresponding to the labeling result of the current key frame.
In one embodiment, the confidence identifiers include a high-confidence identifier and a low-confidence identifier, where the high-confidence identifier corresponds to the case where the output result of the annotation model for the corresponding key frame contains a target frame and the reference region indicates a predetermined target with high confidence, and the low-confidence identifier corresponds to the case where the output result of the annotation model for the corresponding key frame does not contain a target frame and the reference region does not indicate a predetermined target; the method further comprises: adding the image frames corresponding to the high-confidence identifier into a target labeling set.
According to a second aspect, there is provided an apparatus for target annotation based on a video stream, the apparatus comprising:
an acquisition unit configured to acquire a current key frame, the current key frame being one of a plurality of key frames determined from respective image frames of the video stream;
the first labeling unit is configured to perform target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result for the current key frame, wherein the labeling model is used for labeling a region containing a preset target from a picture through a target frame;
and the second labeling unit is configured to perform target labeling on non-key frames after the current key frame in the video stream based on the labeling result for the current key frame.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and wherein the processor, when executing the executable code, implements the method of the first aspect.
According to the method and the device for target labeling provided by the embodiment of the specification, in the process of target labeling, only the key frames in the video stream are processed through the labeling model, and for the non-key frames, labeling is performed through the labeling results of the key frames, so that the data processing amount is greatly reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 illustrates a schematic diagram of an implementation scenario of an embodiment disclosed herein;
FIG. 2 illustrates a flow diagram of a method of target annotation, in accordance with one embodiment;
FIG. 3 is a flow chart illustrating a method for determining image frame similarity in one embodiment;
FIG. 4 is a flowchart illustrating target annotation based on video stream according to a specific example;
FIG. 5 shows a schematic block diagram of an apparatus for target annotation, according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
For convenience of explanation, a specific application scenario of the embodiments of the present specification, shown in fig. 1, is described. Fig. 1 shows a vehicle inspection scenario in which vehicle damage, such as the damage itself, the damage type, and the damaged material, is labeled as the target. The vehicle inspection scenario may be any scenario in which the damage condition of a vehicle needs to be checked, for example, determining at the time of insurance application that the vehicle is undamaged, or determining the damage condition of the vehicle when a vehicle insurance claim is settled.
In this implementation scenario, a user may capture live video of the vehicle through a terminal capable of capturing on-site information, such as a smart phone, a camera, or a sensor. The live video may include one or more video streams, where a video stream is a segment of video. The live video may be sent to a manual inspection platform, which determines the inspection purpose, sends a corresponding annotation request to the computing platform, and forwards the relevant live video to the computing platform. It should be noted that an annotation request may be sent per video stream, or, taking a case as the unit, one annotation request may be sent for the one or more video streams belonging to a case. The computing platform performs target labeling on the video stream according to the annotation request, using the target labeling method described in this specification. In this implementation scenario, the target annotation may be fed back to the manual inspection platform as a pre-annotation result, so as to provide a reference for human decision-making. The pre-annotation result may indicate whether the vehicle is damaged and, if so, the damaged part, the damage category, and the like. The pre-annotation result may take the form of text, or of image frames containing the vehicle damage.
The manual inspection platform and the computing platform shown in fig. 1 may be integrated or provided separately. When provided separately, the computing platform may act as a server that serves multiple manual inspection platforms. The implementation scenario in fig. 1 is only an example; in some implementations no manual inspection platform is provided, the terminal sends the video stream directly to the computing platform, and the computing platform feeds the annotation result, or the vehicle inspection result generated from it, back to the terminal.
Specifically, in the target annotation method under the framework of the embodiments of this specification, a plurality of key frames are determined from the video stream and processed in time order by a pre-trained annotation model. Each key frame is processed to obtain an annotation result; the non-key frames between the current key frame and the next key frame are then annotated based on that result, after which the next key frame is processed. When processing the non-key frames that follow the current key frame, the annotation result of the current key frame is used as a reference, which reduces the amount of data processing. Optionally, when processing non-key frames, image frames that satisfy a condition may be selected from them, according to the actual situation, and added to the key frames. A newly selected key frame is processed by the annotation model to obtain its annotation result, and subsequent image frames are processed based on that result.
The method of object labeling is described in detail below.
FIG. 2 illustrates a flow diagram of a method of target annotation according to one embodiment. The method may be executed by any system, device, apparatus, platform, or server with computing and processing capabilities, such as the computing platform shown in fig. 1. The labeled targets may be any objects in the relevant scene, such as various objects (e.g., a kitten) or image elements with certain characteristics (e.g., oval leaves). In a vehicle inspection scenario, the target to be labeled may be a vehicle part, vehicle damage, and the like.
As shown in fig. 2, the method for target labeling may include the following steps: step 201, acquiring a current key frame, wherein the current key frame is one of a plurality of key frames determined from a video stream; step 202, performing target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result aiming at the current key frame, wherein the labeling model is used for labeling a region containing a target from a picture through a target frame; step 203, based on the labeling result for the current key frame, performing target labeling on the non-key frame after the current key frame in the video stream.
First, in step 201, a current key frame is obtained, the current key frame being one of a plurality of key frames determined from respective image frames of a video stream. Keyframes are typically image frames that may reflect changing characteristics of the video. By extracting key frames from the video stream and reflecting the change characteristics of the video stream by the processing result of the key frames, the data processing amount can be effectively reduced.
Before a video stream is processed, a plurality of key frames can be extracted in advance. These key frames may be used as initial key frames. The key frame extraction of the video stream can be performed in various reasonable manners.
In one embodiment, image frames may be selected from a video stream as key frames at predetermined time intervals. For example, a 30 second video stream may have 60 image frames extracted as key frames at 0.5 second intervals.
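As an illustration of this interval-based selection, the following minimal Python sketch (assuming OpenCV is available; the function name and the 0.5-second default are illustrative and not part of the method) samples one key frame per fixed time interval:

```python
import cv2

def sample_key_frames(video_path: str, interval_s: float = 0.5):
    """Select one key frame every `interval_s` seconds of video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0      # fall back to a typical frame rate
    step = max(int(round(fps * interval_s)), 1)  # frames between two key frames
    key_frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            key_frames.append((idx, frame))      # keep frame index and pixels
        idx += 1
    cap.release()
    return key_frames
```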
In another embodiment, the frame-extracting model may be trained in advance, the video stream is input into the frame-extracting model, and each key frame of the video stream is determined according to the output result of the frame-extracting model. The frame extraction model is, as the name implies, a model for extracting key frames from a plurality of image frames of a video stream.
In an alternative implementation, the frame-extraction model may be trained as follows: a plurality of video streams are acquired as training samples; image features (such as color features and component features) are extracted from each image frame of each video stream, and manually labeled sample key frames are provided for the corresponding video stream. For each training sample, the image features of the corresponding image frames are fed in sequence into a selected model, such as a recurrent neural network (RNN) or a long short-term memory network (LSTM), and the model parameters are adjusted by comparing the model output with the sample key frames, thereby training the frame-extraction model. In this case, the video stream obtained in step 201 may further include the preprocessing result of extracting image features from each image frame, which is not described in detail here.
In another alternative implementation, the frame-extraction model may also be trained as follows: a plurality of video streams are acquired as training samples, each video stream corresponding to a number of image frames and manually labeled sample key frames; the image frames of each training sample are fed in sequence into a selected model, such as an RNN or LSTM, which mines the features of the image frames automatically and outputs a key-frame extraction result; the model parameters are then adjusted by comparing the model output with the sample key frames, thereby training the frame-extraction model.
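For illustration only, a minimal PyTorch sketch of such a frame-extraction model is given below; the architecture, feature dimension, and placeholder labels are assumptions chosen for brevity, and the real per-frame features and model are whatever the implementer selects:

```python
import torch
import torch.nn as nn

class KeyFrameScorer(nn.Module):
    """Scores each frame of a video; high scores are treated as key frames."""
    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frame_feats):           # (batch, num_frames, feat_dim)
        h, _ = self.lstm(frame_feats)
        return self.head(h).squeeze(-1)       # (batch, num_frames) key-frame logits

# One training step: compare logits with 0/1 sample key-frame labels.
model = KeyFrameScorer()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

feats = torch.randn(2, 60, 256)                      # placeholder per-frame features
labels = torch.zeros(2, 60); labels[:, ::10] = 1.0   # placeholder key-frame labels
optimizer.zero_grad()
loss = loss_fn(model(feats), labels)
loss.backward(); optimizer.step()
```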
In more embodiments, the key frames may be extracted in more effective manners, which is not described herein.
For each key frame in the video stream to be processed, the processing may be performed sequentially through the target annotation process shown in fig. 2. In this step 201, the key frame acquired by the current process is the current key frame, and the current key frame may be any key frame in the video stream to be processed.
Next, in step 202, a pre-trained labeling model is used to perform target labeling on the current key frame, so as to obtain a labeling result for the current key frame. The labeling model is used for labeling an area containing a preset target from the picture through the target frame. In a vehicle inspection scenario, the predetermined target may be a vehicle component, a vehicle damage, or the like.
The labeling result of the labeling model can be in a picture form or a character form. For example, in the picture format, the marked target is circled by the target frame on the basis of the original picture. The target bounding box may be a minimum bounding box of a predetermined shape, such as a minimum rectangular box, a minimum circular box, or the like, that surrounds the continuous target region. The textual form is, for example, a target feature that is tagged by a textual description. For example, in the case of a vehicle damage target, the annotation result in text form may be: damaged parts + degree of damage, such as bumper scratches; damaged material + degree of damage, such as left front window shattering; and so on.
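Purely as an illustration, an annotation result of the kind described above might be represented in code as follows; the class and field names are assumptions, not part of the method:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TargetBox:
    # minimum rectangle enclosing a continuous target (e.g., damage) region
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    label: str = "damage"

@dataclass
class AnnotationResult:
    frame_index: int
    boxes: List[TargetBox] = field(default_factory=list)  # empty list => no target found
    text: Optional[str] = None   # e.g., "bumper scratch", "left front window shattered"
```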
According to one embodiment, in the case where the video stream is a vehicle video and the target is a vehicle impairment, the annotation model may be trained by:
acquiring a plurality of vehicle pictures, wherein each vehicle picture corresponds to a sample labeling result; when a vehicle picture contains vehicle damage, its sample labeling result contains at least one damage frame on the original picture (one per damage), the damage frame being the minimum rectangular frame (in other embodiments, a circular frame or the like) surrounding a continuous damage area; otherwise, the labeling result is empty, indicates no damage, or is the original picture itself;
then, based on at least the plurality of vehicle pictures with the sample labeling result, a labeling model is trained.
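The specification does not prescribe a particular detector architecture; as one hedged example, the labeling model could be trained as a two-class detector (background vs. damage) on the (vehicle picture, damage frame) pairs, for instance with torchvision's Faster R-CNN (the hyperparameters and placeholder data below are illustrative):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two-class detector: background vs. damage.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One training step on a single (vehicle picture, sample labeling result) pair.
image = torch.rand(3, 480, 640)                        # placeholder vehicle picture
target = {                                             # minimum rectangles around damage
    "boxes": torch.tensor([[100., 120., 220., 200.]]),
    "labels": torch.tensor([1]),
}
losses = model([image], [target])                      # dict of detection losses
loss = sum(losses.values())
optimizer.zero_grad(); loss.backward(); optimizer.step()
```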
Thus, in this step 202, since a key frame is itself a picture, the current key frame is input into the annotation model, and the output of the annotation model may serve as the annotation result of the current key frame. When the current key frame does not contain the predetermined target, the annotation result for the current key frame may be empty, a textual indication such as "intact", or the original image itself.
Further, in step 203, target labeling is performed on non-key frames following the current key frame in the video stream based on the labeling result for the current key frame. Here, the non-key frame is an image frame that is not determined as a key frame. The non-key frame after the current key frame may be an image frame after the current key frame and before the next key frame. In the embodiment of the specification, the target labeling is performed on the key frame through the labeling model, and the target labeling is performed on the non-key frame by referring to the labeling result of the key frame, so that the data processing amount is reduced.
It is understood that the image frames of a video stream are usually captured continuously at a certain frequency (e.g., 24 frames per second), so the pictures of adjacent image frames tend to have a certain similarity, often sharing multiple similar regions. Conversely, if adjacent image frames are not very similar, the picture may have changed abruptly and the features of the image frames have changed; in that case the image frame adjacent to a key frame can itself be used as a key frame to reflect the feature change of the video stream. Based on this, in the embodiments of this specification, target labeling of non-key frames may be performed based on the similarity between image frames.
In one embodiment, each image frame after the current key frame and before the next key frame may be sequentially compared with the current key frame to determine their similarity. If the similarity is greater than a preset threshold, the corresponding image frame is labeled with the labeling result of the current key frame. If the similarity is less than the preset threshold, the corresponding image frame is taken as a key frame. In time order, the newly determined key frame is the next key frame after the current key frame in the current pass, so it can be acquired next as the current key frame and the target labeling process shown in fig. 2 executed for it. Non-key frames following the newly determined key frame are then labeled with reference to its labeling result.
In another embodiment, the current key frame and the non-key frame after the current key frame and before the next key frame may be compared between adjacent image frames to determine the similarity between the adjacent image frames. If the similarity of the adjacent image frames is higher, the marking result of the previous image frame is used for marking the next image frame, otherwise, the next image frame is used as a newly determined key frame, and the target marking process shown in fig. 2 is executed.
Specifically, two adjacent image frames may be referred to as a first image frame and a second image frame, respectively, and then for a current key frame, the current key frame is an initial first image frame, and a frame next to the current key frame is an initial second image frame. After the first image frame is labeled, whether the second image frame is a key frame is detected. In the case where the second image frame is a key frame, the flow shown in fig. 2 is executed with the second image frame as the current key frame. In the case where the second image frame is not a key frame, the similarity of the second image frame to the first image frame is detected.
If the similarity between the second image frame and the first image frame is smaller than the preset similarity threshold, the second image frame may be used as the current key frame, and the updated current key frame is processed by using the flow shown in fig. 2.
And if the similarity between the second image frame and the first image frame is greater than a preset similarity threshold, mapping the labeling result of the first image frame to the second image frame so as to obtain a labeling result corresponding to the second image frame. On the other hand, the first image frame and the second image frame are updated by the second image frame and the frame next to the second image frame, respectively, that is, the second image frame is used as a new first image frame, and the frame next to the second image frame is used as a new second image frame.
The above process is repeated until one of the following occurs:
the second image frame is the last frame of the video stream, and there is no next image frame (i.e., the step of updating the second image frame cannot be continued); or,
the updated second image frame is detected to be a key frame during the process, in which case it is taken as the current key frame and subsequent processing continues.
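The adjacent-frame propagation described above can be sketched as follows; `annotate` and `similarity` stand in for the labeling model and the similarity computation, and the threshold value is an arbitrary placeholder:

```python
def propagate_annotations(frames, key_frame_flags, annotate, similarity, threshold=0.7):
    """Label a video stream frame by frame, assuming frames[0] is a key frame.
    `annotate(frame)` runs the labeling model; `similarity(a, result_a, b)` compares
    two adjacent frames (names and signatures are illustrative assumptions)."""
    results = [None] * len(frames)
    i = 0
    while i < len(frames):
        results[i] = annotate(frames[i])          # current key frame: run the model
        first, j = i, i + 1
        while j < len(frames) and not key_frame_flags[j]:
            if similarity(frames[first], results[first], frames[j]) >= threshold:
                results[j] = results[first]       # map the previous frame's result over
                first, j = j, j + 1               # slide the (first, second) frame pair
            else:
                key_frame_flags[j] = True         # abrupt change: promote to key frame
                break
        i = j   # next key frame (pre-extracted or newly promoted), or end of stream
    return results
```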
In other embodiments, the target labeling may also be performed on the non-key frame after the current key frame by using the labeling result of the current key frame in other manners, for example, mapping the labeling result of the current key frame to other image frames by a method such as optical flow (optical flow), which is not described herein again.
The similarity of two image frames may be determined, for example, by comparing shapes described by feature points. It can be understood that when a later image frame is labeled using the labeling result of an earlier image frame, the main purpose is to use the earlier labeling result as a reference in the target labeling of the later frame. Therefore, in an alternative embodiment, to reduce the amount of data processing, only a reference region may be taken from the first image frame and compared with the second image frame to determine the similarity.
Taking the aforementioned adjacent first image frame and second image frame as an example, the reference region in the first image frame may first be determined based on the labeling result of the first image frame. Optionally, for the current key frame: when the labeling result for the current key frame contains a target frame, the initial reference region is determined to be the region enclosed by the target frame; when it does not contain a target frame, the initial reference region is determined to be a region at a specified position in the current key frame. The region at the specified position may be a pre-specified region containing a predetermined number of pixels, such as a 9 × 9 pixel region at the center of the first image frame or a 9 × 9 pixel region in its upper left corner. For subsequent image frames, the reference region of the first image frame is the region marked out in its corresponding labeling result.
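A minimal sketch of this reference-region selection, assuming the labeling result is a list of rectangular boxes and choosing the 9 × 9 center patch as the specified position, might look like this:

```python
import numpy as np

def reference_region(frame: np.ndarray, boxes, patch: int = 9):
    """Pick the reference region of a frame from its labeling result.
    Falls back to a small centered patch when no target frame was produced
    (the centered patch is just one of the specified-position options)."""
    if boxes:                                   # labeling result contains a target frame
        x_min, y_min, x_max, y_max = boxes[0]
        return frame[y_min:y_max, x_min:x_max]
    h, w = frame.shape[:2]                      # no target frame: use a specified position
    cy, cx = h // 2, w // 2
    return frame[cy - patch // 2: cy + patch // 2 + 1,
                 cx - patch // 2: cx + patch // 2 + 1]
```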
The similarity between the reference region of the first image frame and the corresponding regions of the second image frame may be determined by a method such as pixel value comparison, or may be determined by a similarity model, which is not limited herein.
Referring to fig. 3, the determination of the similarity between the reference region of the first image frame and the corresponding regions of the second image frame is described taking a similarity model as an example. Assume that the reference region determined from the labeling result of the first image frame is reference region z, and the second image frame is image frame x. On the one hand, a predetermined convolutional neural network processes the reference region z (e.g., corresponding to a 127 × 127 × 3 pixel array) to obtain a first convolution result (e.g., a 6 × 6 × 128 feature array); on the other hand, the same convolutional neural network processes the image frame x (e.g., corresponding to a 255 × 255 × 3 pixel array) to obtain a second convolution result (e.g., a 22 × 22 × 128 feature array). Further, taking the first convolution result as a convolution kernel, the second convolution result is convolved to obtain a third convolution result (e.g., a 17 × 17 × 1 numerical array). As can be appreciated, when an array is convolved with a convolution kernel, the more similar the array is to the kernel, the larger the resulting value. Therefore, in the numerical array corresponding to the third convolution result, each value describes the similarity between the corresponding sub-array of the second convolution result and the first convolution result. The second convolution result is obtained by processing the second image frame, and each sub-array of the second convolution result corresponds to a region of the second image frame; meanwhile, each value of the third convolution result corresponds to a sub-array of the second convolution result. The third convolution result can therefore be viewed as a distribution of similarities between the corresponding regions of the second image frame and the reference region of the first image frame: the larger a value in the numerical array of the third convolution result, the more similar the corresponding region of the second image frame is to the reference region of the first image frame. Because the second image frame is to be labeled based on the labeling result of the first image frame, what matters is whether the second image frame contains a region corresponding to the reference region of the first image frame; in general, if such a region exists, it is the region of the second image frame most similar to the reference region of the first image frame. In this way, the similarity between the second image frame and the first image frame can be determined from the maximum value in the numerical array corresponding to the third convolution result, that maximum value corresponding to the region of the second image frame most similar to the reference region of the first image frame. The similarity between the second image frame and the first image frame may be the maximum value itself, or the fractional value corresponding to the maximum after the values in the numerical array of the third convolution result are normalized.
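The cross-correlation similarity described above can be sketched in PyTorch as follows; the embedding network is a stand-in (the text only requires "a predetermined convolutional neural network"), so the exact feature sizes differ from the 6 × 6 × 128 / 22 × 22 × 128 example numbers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small stand-in embedding network; any predetermined CNN could be used here.
embed = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
)

z = torch.rand(1, 3, 127, 127)      # reference region of the first image frame
x = torch.rand(1, 3, 255, 255)      # second image frame

z_feat = embed(z)                   # first convolution result, shape (1, 128, hz, wz)
x_feat = embed(x)                   # second convolution result, shape (1, 128, hx, wx)

# Use the first result as the convolution kernel over the second result.
score_map = F.conv2d(x_feat, z_feat)             # third convolution result, (1, 1, H, W)
similarity = score_map.max().item()              # largest value = most similar region
best = (score_map == score_map.max()).nonzero()  # location of that region in the map
```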
A similarity threshold for the same region appearing in both image frames may be preset, such that if the similarity determined by the above process is greater than the similarity threshold, there is a region in the second image frame corresponding to the reference region in the first image frame, e.g., both contain the left headlight. Conversely, if the similarity determined by the above process is less than the similarity threshold, the second image frame does not include a region corresponding to the reference region of the first image frame.
In the case where no region corresponding to the reference region of the first image frame is included in the second image frame, there may be a sudden change in picture between the second image frame and the first image frame, and important information may be missed if the second image frame is not labeled. The second image frame may therefore be added to the key frames of the video stream and, in time order, acquired as the current key frame for target labeling in the next pass.
It will be appreciated that, according to the above description, every image frame may correspond to a reference region, but the actual meaning of these reference regions differs. For example, in a vehicle inspection scenario, if the labeling result of the labeling model for the current key frame contains a target frame, this generally indicates that some part or material of the vehicle has a high-confidence damage, and this result may be provided for manual reference or may affect a decision; the reference region in that case indicates a predetermined target with high confidence. A reference region obtained by the specified-position method may also be enclosed by a frame, but the enclosed region only provides a reference for labeling subsequent image frames and does not indicate a predetermined target (in the vehicle inspection scenario, no damage is present). Therefore, the labeling result of the current key frame may also correspond to a confidence identifier. When the output of the labeling model contains a target frame and the reference region indicates a predetermined target with high confidence, the confidence identifier of the current key frame is a high-confidence identifier. In the vehicle inspection scenario shown in fig. 1, a high-confidence identifier represents a high likelihood of vehicle damage, and the corresponding image frame may be output for reference by the manual inspection platform. When the output of the labeling model for the current key frame does not contain a target frame and a specified-position region is taken as the reference region, the confidence identifier of the current key frame may be a low-confidence identifier; in the vehicle inspection scenario, this corresponds to vehicle damage with low confidence, or a confidence of 0.
As will be readily understood by those skilled in the art, since non-key frames following a key frame are labeled with reference to the labeling result of that key frame, the confidence identifier of a subsequent non-key frame's labeling result can be kept consistent with the confidence identifier of the current key frame, and the confidence identifier can be consulted at the time of the final decision. The target labeling process shown in fig. 2 may therefore further include adding the image frames corresponding to high-confidence identifiers to a target labeling set, which is output for manual inspection or for intelligent decision-making by a computer.
For a clearer understanding of the technical concept of the embodiments of this specification, refer to fig. 4. In a specific implementation, as shown in fig. 4, key frames are first extracted from the received video stream. One of the key frames is then acquired, in time order, as the current key frame. The current key frame is processed by the labeling model to obtain its labeling result, and it is determined whether the result contains a target frame. If so, the region inside the target frame is taken as the reference region, a high-confidence identifier is set for the current key frame (e.g., a confidence flag set to 1), and the current key frame is added to the pre-labeling result set. Otherwise, a low-confidence identifier is set for the current key frame (e.g., a confidence flag set to 0). The subsequent image frames are then labeled based on the labeling result of the current key frame.
First, it is determined whether the next image frame is a key frame. If so, that image frame is acquired as the current key frame and the process continues from the top. Otherwise, the current key frame is taken as the current frame, and the similarity between the current frame and the next frame is detected. If the similarity is less than the preset similarity threshold, the next frame is added to the key frames, acquired as the current key frame, and the process continues from the top. Otherwise, the similarity is greater than the preset similarity threshold; the next frame is labeled with the labeling result of the current frame and inherits the confidence identifier of the current frame. Whether the confidence identifier of the next frame is a high-confidence identifier is then checked: if it is, the next frame is added to the pre-labeling result set. In either case, the current frame and the next frame are updated with the next frame and the frame after it, and the process continues until a key frame is detected or the video stream ends.
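Combining the pieces above, the fig. 4 flow with confidence identifiers might be sketched as below; it reuses the illustrative `AnnotationResult` structure from earlier, and all helper names, signatures, and the threshold value are assumptions:

```python
def annotate_stream(frames, key_frame_flags, annotate, similarity, threshold=0.7):
    """Sketch of the fig. 4 flow; builds the pre-labeling set from high-confidence frames.
    `annotate(frame)` returns an AnnotationResult-like object with a `.boxes` field."""
    pre_labeling_set, results, flags = [], [None] * len(frames), [0] * len(frames)
    i = 0
    while i < len(frames):
        results[i] = annotate(frames[i])                 # current key frame
        flags[i] = 1 if results[i].boxes else 0          # 1 = high confidence
        if flags[i]:
            pre_labeling_set.append(i)
        cur, j = i, i + 1
        while j < len(frames) and not key_frame_flags[j]:
            if similarity(frames[cur], results[cur], frames[j]) < threshold:
                key_frame_flags[j] = True                # promote to key frame
                break
            results[j], flags[j] = results[cur], flags[cur]   # inherit result and flag
            if flags[j]:
                pre_labeling_set.append(j)
            cur, j = j, j + 1
        i = j
    return results, flags, pre_labeling_set
```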
From the above description it can be understood that, under the technical idea of this specification, the target labeling flow shown in fig. 2 is a flow in which some steps may be bypassed, rather than one that must be executed in full for every key frame. For example, when the frame immediately after the current key frame is itself a key frame, there are no non-key frames between them, and step 203, performing target labeling on the non-key frames after the current key frame based on the labeling result for the current key frame, does not need to be executed.
Reviewing the above process: during target labeling, only the key frames of the video stream are processed by the labeling model, while non-key frames are labeled from the results of the key frames, which greatly reduces the amount of data processing. Furthermore, in the non-key-frame labeling process, labeling results can be transferred to image frames with high similarity, while image frames with low similarity are promoted to key frames and re-labeled by the labeling model, yielding more accurate labeling results. In this way, more effective target annotation can be provided.
According to an embodiment of another aspect, an apparatus for target labeling is also provided. FIG. 5 shows a schematic block diagram of an apparatus for target annotation, according to one embodiment. As shown in fig. 5, the apparatus 500 for target labeling includes: an acquisition unit 51 configured to acquire a current key frame, which is one of a plurality of key frames determined from respective image frames of the video stream; a first labeling unit 52, configured to perform target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result for the current key frame, where the labeling model is used to label an area containing a predetermined target from the picture through a target frame; the second labeling unit 53 is configured to perform target labeling on a non-key frame subsequent to the current key frame in the video stream based on the labeling result for the current key frame.
According to one embodiment, the apparatus 500 further comprises an extraction unit (not shown) configured to extract the initial plurality of key frames by any one of:
selecting a plurality of image frames from a video stream as key frames according to a preset time interval;
and inputting the video stream into a pre-trained frame extraction model, and determining a plurality of key frames according to the output result of the frame extraction model.
In one implementation, the video stream is a vehicle video, the target is vehicle damage, and the apparatus 500 may further include a training unit (not shown) configured to train the annotation model by:
acquiring a plurality of vehicle pictures, wherein each vehicle picture corresponds to each sample marking result, and each sample marking result comprises at least one damage frame under the condition that the vehicle pictures comprise vehicle damage, and each damage frame is a minimum rectangular frame surrounding a continuous damage area;
and training a labeling model at least based on a plurality of vehicle pictures.
According to one possible design, in the video stream, for convenience of description, adjacent image frames are respectively recorded as a first image frame and a second image frame, where for the current key frame, the initial first image frame is the current key frame and the initial second image frame is the frame next to the current key frame;
the second labeling unit 53 is further configured to:
after the first image frame is marked, detecting whether a second image frame is a key frame;
in the case that the second image frame is not a key frame, detecting the similarity of the second image frame and the first image frame;
if the similarity between the second image frame and the first image frame is larger than a preset similarity threshold, mapping the labeling result corresponding to the first image frame to the second image frame so as to obtain a labeling result corresponding to the second image frame;
and respectively updating the first image frame and the second image frame by using the second image frame and the next frame of the second image frame, and carrying out target labeling on the updated second image frame based on the labeling result of the updated first image frame.
And if the similarity between the second image frame and the first image frame is less than a preset similarity threshold value, determining the second image frame as a key frame.
In a further embodiment, the second labeling unit 53 is further configured to determine the similarity of the second image frame to the first image frame by:
determining a reference area in the first image frame based on the labeling result of the first image frame;
respectively processing a reference area and a second image frame in the first image frame by using a predetermined convolutional neural network, and respectively obtaining a first convolution result and a second convolution result;
taking the first convolution result as a convolution kernel, performing convolution processing on the second convolution result to obtain a third convolution result, wherein each numerical value in a numerical value array corresponding to the third convolution result respectively describes each similarity of a corresponding area of the second image frame and a reference area of the first image frame;
and determining the similarity of the second image frame and the first image frame based on the maximum value in the value array corresponding to the third convolution result.
In one embodiment, in the case that the similarity between the second image frame and the first image frame is greater than a preset similarity threshold, the second labeling unit 53 is further configured to:
and according to the labeling result of the first image frame, labeling the image area of the second image frame corresponding to the maximum number.
In one embodiment, the second labeling unit 53 is further configured to:
determining an initial reference area as an area surrounded by the target frame under the condition that the first labeling result contains the target frame;
and under the condition that the first labeling result does not contain the target frame, determining the initial reference area as the area at the appointed position in the current key frame.
In a further embodiment, the current key frame further corresponds to a confidence flag, and the second labeling unit is further configured to:
and determining the confidence identifications of the non-key frames after the current key frame and before the next key frame, wherein the confidence identifications are consistent with the confidence identifications corresponding to the labeling result of the current key frame.
The confidence identifiers include a high-confidence identifier and a low-confidence identifier. The high-confidence identifier corresponds to the case where the output result of the labeling model for the corresponding key frame contains a target frame and the reference region indicates a predetermined target with high confidence; the low-confidence identifier corresponds to the case where the output result for the corresponding key frame does not contain a target frame and the reference region does not indicate a predetermined target.
At this time, the apparatus 500 may further include an annotation result determination unit (not shown) configured to:
and adding the image frames corresponding to the high-confidence-degree marks into the labeling result set.
It should be noted that the apparatus 500 shown in fig. 5 is an apparatus embodiment corresponding to the method embodiment shown in fig. 2, and the corresponding description in the method embodiment shown in fig. 2 is also applicable to the apparatus 500, and is not repeated herein.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 2.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor, when executing the executable code, implementing the method described in connection with fig. 2.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of this specification may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments are intended to explain the technical idea, technical solutions and advantages of the present specification in further detail, and it should be understood that the above-mentioned embodiments are merely specific embodiments of the technical idea of the present specification, and are not intended to limit the scope of the technical idea of the present specification, and any modification, equivalent replacement, improvement, etc. made on the basis of the technical solutions of the embodiments of the present specification should be included in the scope of the technical idea of the present specification.

Claims (22)

1. A method for target annotation based on a video stream, the method comprising:
acquiring a current key frame, wherein the current key frame is one of a plurality of key frames determined from each image frame of the video stream;
performing target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result aiming at the current key frame, wherein the labeling model is used for labeling a region containing a preset target from a picture through a target frame;
and performing target labeling on non-key frames after the current key frame in the video stream based on a labeling result for the current key frame, wherein in the process of performing target labeling on each non-key frame after the current key frame, similarity comparison is further performed on each image frame after the current key frame and before a next key frame and the current key frame in sequence, or similarity comparison is performed on adjacent image frames of each non-key frame after the current key frame and before the next key frame, so that whether each non-key frame is added into the plurality of key frames as a new key frame is detected based on the comparison result.
2. The method of claim 1, wherein the initial plurality of key frames are extracted by any one of:
selecting a plurality of image frames from the video stream as key frames according to a preset time interval;
and inputting the video stream into a pre-trained frame extraction model, and determining a plurality of key frames according to an output result of the frame extraction model.
3. The method of claim 1, wherein the video stream is a vehicle video, the target is a vehicle impairment, and the annotation model is trained by:
acquiring a plurality of vehicle pictures, wherein each vehicle picture corresponds to each sample marking result, and each sample marking result comprises at least one damage frame under the condition that the vehicle pictures comprise vehicle damage, and each damage frame is a minimum rectangular frame surrounding a continuous damage area;
training the labeling model based on at least the plurality of vehicle pictures.
4. The method according to any one of claims 1-3, wherein in the video stream, adjacent image frames are respectively marked as a first image frame and a second image frame, and for the current key frame, an initial first image frame is the current key frame, and an initial second image frame is a next frame of the current key frame;
the target labeling of the non-key frame after the current key frame in the video stream based on the labeling result for the current key frame comprises:
after the first image frame is marked, detecting whether the second image frame is a key frame;
detecting a similarity of the second image frame to the first image frame in a case where the second image frame is not a key frame;
if the similarity between the second image frame and the first image frame is larger than a preset similarity threshold, mapping the labeling result corresponding to the first image frame to the second image frame so as to obtain a labeling result corresponding to the second image frame;
and respectively updating the first image frame and the second image frame by using the second image frame and the next frame of the second image frame, and carrying out target labeling on the updated second image frame based on the labeling result of the updated first image frame.
5. The method of claim 4, wherein the second image frame is determined to be a keyframe if the second image frame is less similar to the first image frame than the similarity threshold.
6. The method of claim 4, wherein determining the similarity of the second image frame to the first image frame comprises:
determining a reference region in the first image frame based on the labeling result of the first image frame;
respectively processing the reference area in the first image frame and the second image frame by utilizing a predetermined convolutional neural network, and respectively obtaining a first convolution result and a second convolution result;
taking the first convolution result as a convolution kernel, performing convolution processing on the second convolution result to obtain a third convolution result, wherein in a numerical array corresponding to the third convolution result, each numerical value respectively describes each similarity of a corresponding area of the second image frame and a reference area of the first image frame;
and determining the similarity of the second image frame and the first image frame based on the maximum numerical value in the numerical array corresponding to the third convolution result.
7. The method of claim 6, wherein, in a case where the similarity between the second image frame and the first image frame is greater than the preset similarity threshold, the mapping of the labeling result corresponding to the first image frame to the second image frame to obtain the labeling result corresponding to the second image frame comprises:
and according to the labeling result of the first image frame, labeling the image region of the second image frame corresponding to the maximum numerical value.
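Continuing the correlation sketch given after claim 6, the peak position can be mapped back to pixel coordinates so that the target frame of the first image frame is re-drawn at the matching location of the second image frame. The `stride` parameter (an assumed downsampling factor of the feature maps) and the box format are illustrative assumptions.

    def map_target_frame(best_pos, ref_box, stride=8):
        # `ref_box` is (x, y, w, h) of the target frame in the first image frame;
        # `best_pos` is the (row, col) of the maximum value in the correlation map.
        by, bx = best_pos
        x, y = bx * stride, by * stride                  # back to pixel coordinates
        _, _, w, h = ref_box
        return (x, y, w, h)                              # target frame on the second image frame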
8. The method of claim 6, wherein the determining a reference region in the first image frame based on the annotation result for the first image frame comprises:
under the condition that the labeling result of the first image frame contains a target frame, determining an initial reference region as the region surrounded by the target frame;
and under the condition that the labeling result of the first image frame does not contain a target frame, determining an initial reference region as a region at a specified position in the current key frame.
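The reference-region rule of claim 8 can be summarized with the small helper below; the dictionary layout and the central default region are assumptions standing in for the "region at a specified position".

    def choose_reference_region(labeling_result, frame_shape):
        if labeling_result.get("boxes"):                 # labeling result contains a target frame
            return labeling_result["boxes"][0]           # region surrounded by the target frame
        h, w = frame_shape[:2]
        return (w // 4, h // 4, w // 2, h // 2)          # assumed region at a specified position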
9. The method of claim 8, wherein the current key frame further corresponds to a confidence identifier, and the target labeling of non-key frames after the current key frame in the video stream based on the labeling result for the current key frame comprises:
and determining the confidence identifiers of the non-key frames after the current key frame and before the next key frame to be consistent with the confidence identifier corresponding to the labeling result of the current key frame.
10. The method of claim 9, wherein the confidence identifiers comprise a high-confidence identifier and a low-confidence identifier, the high-confidence identifier corresponding to a case where the output of the labeling model for the corresponding key frame contains a target frame, in which case the reference region provides the predetermined target with high confidence, and the low-confidence identifier corresponding to a case where the output does not contain a target frame;
the method further comprises the following steps:
and adding the image frames corresponding to the high-confidence identifier into a target labeling set.
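One possible reading of the confidence handling in claims 9 and 10, with the identifier values and data layout invented for the example: the key frame receives a high- or low-confidence identifier depending on whether its model output contains a target frame, the following non-key frames inherit that identifier, and the high-confidence frames are collected into the target labeling set.

    HIGH, LOW = "high", "low"                            # assumed identifier values

    def propagate_confidence(key_frame_labeling, frame_indices_between):
        flag = HIGH if key_frame_labeling.get("boxes") else LOW   # identifier of the key frame
        identifiers = {i: flag for i in frame_indices_between}    # inherited by the non-key frames
        target_labeling_set = [i for i, v in identifiers.items() if v == HIGH]
        return identifiers, target_labeling_set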
11. An apparatus for target annotation based on a video stream, the apparatus comprising:
an acquisition unit configured to acquire a current key frame, the current key frame being one of a plurality of key frames determined from respective image frames of the video stream;
a first labeling unit configured to perform target labeling on the current key frame by using a pre-trained labeling model to obtain a labeling result for the current key frame, wherein the labeling model is used for labeling a region containing a predetermined target from a picture through a target frame;
and a second labeling unit configured to perform target labeling on the non-key frames after the current key frame in the video stream based on the labeling result for the current key frame, wherein, in the process of performing target labeling on each non-key frame after the current key frame, each image frame after the current key frame and before the next key frame is further compared in sequence with the current key frame for similarity, or each non-key frame after the current key frame and before the next key frame is compared for similarity with its adjacent image frame, so as to determine, based on the comparison result, whether each non-key frame is to be added to the plurality of key frames as a new key frame.
12. The apparatus according to claim 11, wherein the apparatus further comprises an extraction unit configured to extract an initial plurality of key frames by any one of:
selecting a plurality of image frames from the video stream as key frames according to a preset time interval;
and inputting the video stream into a pre-trained frame extraction model, and determining a plurality of key frames according to an output result of the frame extraction model.
13. The apparatus of claim 11, wherein the video stream is a vehicle video, the target is vehicle damage, and the apparatus further comprises a training unit configured to train the labeling model by:
acquiring a plurality of vehicle pictures, wherein each vehicle picture corresponds to a sample labeling result, each sample labeling result comprises at least one damage frame when the corresponding vehicle picture contains vehicle damage, and each damage frame is the minimum rectangular frame surrounding a continuous damage area;
training the labeling model based on at least the plurality of vehicle pictures.
14. The apparatus according to any one of claims 11-13, wherein adjacent image frames in the video stream are denoted as a first image frame and a second image frame, respectively, and, for the current key frame, the initial first image frame is the current key frame and the initial second image frame is the frame following the current key frame;
the second labeling unit is further configured to:
after the first image frame is marked, detecting whether the second image frame is a key frame;
detecting a similarity of the second image frame to the first image frame in a case where the second image frame is not a key frame;
if the similarity between the second image frame and the first image frame is larger than a preset similarity threshold, mapping the labeling result corresponding to the first image frame to the second image frame so as to obtain a labeling result corresponding to the second image frame;
and respectively updating the first image frame and the second image frame by using the second image frame and the next frame of the second image frame, and carrying out target labeling on the updated second image frame based on the labeling result of the updated first image frame.
15. The apparatus of claim 14, wherein the second image frame is determined to be a key frame if the similarity between the second image frame and the first image frame is less than the similarity threshold.
16. The apparatus of claim 14, wherein the second labeling unit is further configured to determine the similarity of the second image frame to the first image frame by:
determining a reference region in the first image frame based on the labeling result of the first image frame;
processing the reference region in the first image frame and the second image frame respectively with a predetermined convolutional neural network to obtain a first convolution result and a second convolution result, respectively;
performing convolution processing on the second convolution result with the first convolution result as a convolution kernel to obtain a third convolution result, wherein each value in the numerical array corresponding to the third convolution result describes the similarity between a corresponding region of the second image frame and the reference region of the first image frame;
and determining the similarity of the second image frame and the first image frame based on the maximum numerical value in the numerical array corresponding to the third convolution result.
17. The apparatus of claim 16, wherein, in a case where the similarity between the second image frame and the first image frame is greater than the preset similarity threshold, the second labeling unit is further configured to:
and according to the labeling result of the first image frame, labeling the image region of the second image frame corresponding to the maximum numerical value.
18. The apparatus of claim 16, wherein the second labeling unit is further configured to:
under the condition that the labeling result of the first image frame contains a target frame, determining an initial reference region as the region surrounded by the target frame;
and under the condition that the labeling result of the first image frame does not contain a target frame, determining an initial reference region as a region at a specified position in the current key frame.
19. The apparatus of claim 18, wherein the current key frame further corresponds to a confidence identifier, and the second labeling unit is further configured to:
and determining the confidence identifiers of the non-key frames after the current key frame and before the next key frame to be consistent with the confidence identifier corresponding to the labeling result of the current key frame.
20. The apparatus of claim 19, wherein the confidence identifiers comprise a high-confidence identifier and a low-confidence identifier, the high-confidence identifier corresponding to a case where the output of the labeling model for the corresponding key frame contains a target frame, in which case the reference region provides the predetermined target with high confidence, and the low-confidence identifier corresponding to a case where the output does not contain a target frame;
the apparatus further comprises an annotation result determination unit configured to:
and adding the image frames corresponding to the high-confidence identifier into the labeling result set.
21. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-10.
22. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, performs the method of any of claims 1-10.
CN201910891996.2A 2019-09-20 2019-09-20 Target labeling method and device Active CN110705405B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910891996.2A CN110705405B (en) 2019-09-20 2019-09-20 Target labeling method and device
PCT/CN2020/093958 WO2021051885A1 (en) 2019-09-20 2020-06-02 Target labeling method and apparatus

Publications (2)

Publication Number Publication Date
CN110705405A (en) 2020-01-17
CN110705405B (en) 2021-04-20

Family

ID=69196186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910891996.2A Active CN110705405B (en) 2019-09-20 2019-09-20 Target labeling method and device

Country Status (2)

Country Link
CN (1) CN110705405B (en)
WO (1) WO2021051885A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705405B (en) * 2019-09-20 2021-04-20 创新先进技术有限公司 Target labeling method and device
CN111918016A (en) * 2020-07-24 2020-11-10 武汉烽火众智数字技术有限责任公司 Efficient real-time picture marking method in video call
CN112053323A (en) * 2020-07-31 2020-12-08 上海图森未来人工智能科技有限公司 Single-lens multi-frame image data object tracking and labeling method and device and storage medium
CN112533060B (en) * 2020-11-24 2023-03-21 浙江大华技术股份有限公司 Video processing method and device
CN113343857B (en) * 2021-06-09 2023-04-18 浙江大华技术股份有限公司 Labeling method, labeling device, storage medium and electronic device
CN115482426A (en) * 2021-06-16 2022-12-16 华为云计算技术有限公司 Video annotation method, device, computing equipment and computer-readable storage medium
CN113378958A (en) * 2021-06-24 2021-09-10 北京百度网讯科技有限公司 Automatic labeling method, device, equipment, storage medium and computer program product
CN113506610B (en) * 2021-07-08 2024-09-13 联仁健康医疗大数据科技股份有限公司 Labeling specification generation method and device, electronic equipment and storage medium
CN113657173B (en) * 2021-07-20 2024-05-24 北京搜狗科技发展有限公司 Data processing method and device for data processing
CN113609316A (en) * 2021-07-27 2021-11-05 支付宝(杭州)信息技术有限公司 Method and device for detecting similarity of media contents
CN113792600B (en) * 2021-08-10 2023-07-18 武汉光庭信息技术股份有限公司 Video frame extraction method and system based on deep learning
CN113657307A (en) * 2021-08-20 2021-11-16 北京市商汤科技开发有限公司 Data labeling method and device, computer equipment and storage medium
CN113660469A (en) * 2021-08-20 2021-11-16 北京市商汤科技开发有限公司 Data labeling method and device, computer equipment and storage medium
CN115640422B (en) * 2022-11-29 2023-12-22 深圳有影传媒有限公司 Network media video data analysis and supervision system
CN116189063B (en) * 2023-04-24 2023-07-18 青岛润邦泽业信息技术有限公司 Key frame optimization method and device for intelligent video monitoring

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9471852B1 (en) * 2015-11-11 2016-10-18 International Business Machines Corporation User-configurable settings for content obfuscation
CN106375870B (en) * 2016-08-31 2019-09-17 北京旷视科技有限公司 Video labeling method and device
CN106385640B (en) * 2016-08-31 2020-02-11 北京旷视科技有限公司 Video annotation method and device
CN106682595A (en) * 2016-12-14 2017-05-17 南方科技大学 Image content labeling method and device
CN107610091A (en) * 2017-07-31 2018-01-19 阿里巴巴集团控股有限公司 Vehicle insurance image processing method, device, server and system
US20190130583A1 (en) * 2017-10-30 2019-05-02 Qualcomm Incorporated Still and slow object tracking in a hybrid video analytics system
CN109684956A (en) * 2018-12-14 2019-04-26 深源恒际科技有限公司 A kind of vehicle damage detection method and system based on deep neural network
CN110033011A (en) * 2018-12-14 2019-07-19 阿里巴巴集团控股有限公司 Traffic accident Accident Handling Method and device, electronic equipment
CN110705405B (en) * 2019-09-20 2021-04-20 创新先进技术有限公司 Target labeling method and device

Also Published As

Publication number Publication date
CN110705405A (en) 2020-01-17
WO2021051885A1 (en) 2021-03-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200925

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200925

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant