
CN113963175A - Image processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113963175A
Authority
CN
China
Prior art keywords
image processing
network
image frame
model
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111265558.9A
Other languages
Chinese (zh)
Inventor
许鲁珉
关英妲
金晟
刘文韬
钱晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111265558.9A
Publication of CN113963175A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, a computer device, and a storage medium, wherein the method comprises: acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame; determining an image frame type of the current image frame based on the video clip to which the current image frame belongs, wherein the image frame type is a non-key frame or a key frame; determining, based on the image frame type, search ranges of a plurality of search dimensions matched with the current image frame, and searching a pre-trained hyper-network, based on the search ranges, for an image processing model that satisfies a computing resource constraint for the current image frame, wherein the search dimensions comprise model depth parameters and convolutional layer parameters; and inputting the current image frame and the image processing result of the previous image frame into the image processing model for processing to obtain an image processing result of the current image frame.

Description

Image processing method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of artificial intelligence technology and intelligent terminal devices, more and more applications on intelligent terminal devices deploy neural network models for image processing, such as pose estimation network models, face recognition models, and image segmentation network models. In conventional image processing methods using a neural network model, each image is processed independently by the neural network model, and when there are multiple images, the neural network models used to process them all have the same structure. This processing mode does not allocate computing resources reasonably among the images, so the processing precision of the neural network model cannot meet practical application requirements.
Disclosure of Invention
The embodiment of the disclosure at least provides an image processing method, an image processing device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame; acquiring an image processing model for processing a current image frame, and determining a fusion position and a fusion mode in the image processing model; in the process of extracting the image characteristics of the current image frame by the image processing model, fusing the image characteristics output by the network module positioned in front of the fusion position in the image processing model and the image processing result of the previous image frame according to the fusion mode to obtain a fusion result; and inputting the fusion result to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
In the embodiment of the disclosure, when a current image frame is processed, information in a previous image frame can be transferred to the current image frame by combining an image processing result of the previous image frame, so that the image processing result of the previous image frame is used as guide information of the current image frame, and the current image frame is processed by the guide information, so that feature information in the current image frame can be better grasped, and an accurate image processing result is obtained. For each current image frame to be processed, a corresponding image processing model is set for the current image frame, and the fusion position and the fusion mode of the image processing result of the previous image frame are set in the image processing model, so that the reasonable distribution of computing resources for the data to be computed in the image frame can be realized, the processing precision of the image processing model is improved, and the image processing result with higher accuracy is obtained.
In an optional embodiment, the obtaining an image processing model for processing a current image frame includes: determining an image processing model for processing the current image frame based on a hyper-network; the super network includes a plurality of network modules having a connection relationship.
In an alternative embodiment, determining an image processing model for processing the current image frame based on a hyper-network comprises: determining a plurality of continuous non-key frames containing the current image frame in a video clip to which the current image frame belongs; searching at least one subnetwork model group meeting a first computing resource constraint condition in a pre-trained first hyper-network, wherein each subnetwork model group comprises a first subnetwork model searched for each non-key frame; determining an image processing model for processing the current image frame based on the at least one sub-network model group.
In the above embodiment, by searching for at least one sub-network model group satisfying the first computing resource constraint condition in the first super-network and then determining the image processing model corresponding to each non-key frame according to the at least one sub-network model group, it is possible to realize automatic allocation of computing resources among a plurality of consecutive non-key frames when the image processing models corresponding to the plurality of consecutive non-key frames satisfy the overall computing resource constraint (i.e., the first computing resource constraint condition), thereby realizing global optimization of the image processing method and improving the processing accuracy of the image processing method.
In an alternative embodiment, the determining an image processing model for processing the current image frame based on the at least one sub-network model group includes: obtaining a target test sample; testing each sub-network model group through the target test sample to obtain at least one test result; and selecting a target sub-network model group corresponding to a target test result from the at least one test result, and determining the image processing model according to the target sub-network model group, wherein the target test result is a test result meeting a first test condition in the at least one test result.
In the above embodiment, by selecting the sub-network model with the optimal processing precision for the plurality of consecutive non-key frames in the at least one sub-network model group by the target test sample, the precision of image processing can be improved, so as to obtain an image processing result with higher accuracy, for example, an attitude estimation result.
In an alternative embodiment, the searching for at least one subnetwork model group meeting a first computing resource constraint condition in a pre-trained first hyper-network comprises: determining a target search range of each non-key frame, wherein each target search range comprises search ranges of a plurality of first search dimensions among preset search dimensions, and the first search dimensions include: a spatial search dimension for indicating an image processing model structure, and a temporal search dimension for indicating the fusion position and the fusion manner; and searching, in the pre-trained first hyper-network, for subnetwork models meeting the first computing resource constraint condition based on the target search range of each non-key frame, and determining the subnetwork model group according to the searched subnetwork models.
In the above embodiment, the same hyper-network may be used for searching for different non-key frames, and a group of spatial structures and sub-network models with different fusion positions and fusion modes are obtained through searching. By the processing mode, the reasonable distribution of computing resources among a plurality of continuous non-key frames can be realized, so that the precision of image processing is improved, and an image processing result with higher accuracy is obtained.
In an alternative embodiment, the first hyper-network to be trained is trained by: acquiring a first training sample set, wherein the first training sample set comprises a plurality of first training samples, and each first training sample comprises a plurality of non-key frames; extracting a plurality of groups of sub-networks to be trained from a first super-network to be trained; the number of the sub-networks in each group of sub-networks to be trained is the same as that of the non-key frames in each first training sample, and one sub-network to be trained correspondingly processes one non-key frame in the training sample; and training each group of sub-networks to be trained through the first training sample set, and obtaining the first super-network after training.
In the embodiment, the training method is used for training the first super network to be trained, so that sub-network models with different structures and variable lengths can be obtained through training; therefore, different constraint conditions can be met to meet different application scenes.
In an optional embodiment, in the case that the previous image frame is a key frame, the image processing model corresponding to the previous image frame is determined by: determining search ranges of a plurality of second search dimensions matched with the previous image frame among preset search dimensions, wherein the plurality of second search dimensions include a spatial search dimension for indicating the image processing model structure corresponding to the previous image frame; searching a pre-trained second super network based on the search range of each second search dimension to obtain at least one second sub-network model meeting a second computing resource constraint condition, wherein the second computing resource constraint condition is used to characterize the complexity of processing the key frame; and determining an image processing model corresponding to the previous image frame based on the at least one second sub-network model.
In the above embodiment, by setting the second search dimension and searching the corresponding sub-network model for the key frame in the second super network as the image processing model according to the second search dimension, the image processing model meeting the requirement can be automatically determined in the second super network.
In an optional embodiment, the determining, based on the at least one second sub-network model, an image processing model corresponding to the previous image frame includes: processing the preset test set through each second sub-network model to obtain a plurality of test results; wherein the test result is used for characterizing the prediction accuracy of the corresponding second sub-network model; and determining a second sub-network model corresponding to a target test result in the plurality of test results as an image processing model corresponding to the previous image frame, wherein the target test result is a test result meeting a second test condition in the plurality of test results.
In the above embodiment, the second sub-network model with the optimal processing accuracy can be obtained by testing the screened at least one second sub-network model meeting the second constraint condition through the preset test set, so that the image processing model meeting the practical application with a high real-time requirement can be selected while the processing accuracy of the image processing model is ensured.
In an optional embodiment, the method further comprises: updating the search range corresponding to each second search dimension in the case that the target test result is not determined in the test results of the at least one second sub-network model; and searching the second super network according to the updated search range until a target test result is determined in the test results of at least one second sub-network model meeting the second computing resource constraint condition, in which case the image processing model corresponding to the previous image frame is determined based on the second sub-network model corresponding to that target test result.
In the above embodiment, the search range of the image processing model in the second super network can be narrowed down by updating the search range of each second search dimension, so that a corresponding lightweight sub-network model can be quickly searched from the second super network and used as the image processing model corresponding to the previous image frame.
In an alternative embodiment, the second super-network to be trained is trained by: acquiring a second training sample set; the second training sample set comprises a plurality of second training samples; extracting a plurality of sub-network models for each second training sample in the second super-network to be trained; training the plurality of extracted sub-network models based on a plurality of second training samples in the second training sample set, and obtaining the second super-network after training.
In the above embodiment, by randomly extracting at least one sub-network model and training each extracted sub-network model, it is possible to obtain sub-network models satisfying different search conditions through one training process of the super-network, so that the super-network can adapt to a wider application scenario, thereby reducing the overhead of network structure search.
In an optional embodiment, the preset search dimensions include a spatial search dimension and a temporal search dimension. The spatial search dimension includes at least one of: model structure parameters, convolutional layer parameters, and attention module parameters; the temporal search dimension includes: fusion parameters. The model structure parameters are used to characterize the number of network modules required by the image processing model to be searched in the super network. The convolutional layer parameters are used to characterize at least one of: the number of feature channels output by the network modules required by the image processing model to be searched in the super network, the convolution kernel size of the convolutional layers required by the image processing model to be searched in the super network, and/or the number of groups of the convolutional layers required by the image processing model to be searched in the super network. The attention module parameter is used to indicate whether a preset attention module is used in each network module. The fusion parameter is used to indicate the fusion position and the fusion mode for fusing the image processing result of the previous image frame into the image processing model.
In the above embodiment, by setting the model structure parameter, the convolutional layer parameter, the attention module parameter, and the fusion parameter, the search space of the neural network model can be expanded in the model search space, so that the lightweight neural network model with the processing accuracy meeting the requirement is searched, and the optimal image processing result is obtained.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: the first acquisition unit is used for acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame; a first obtaining unit, configured to obtain an image processing model for processing the current image frame; the determining unit is used for determining a fusion position and a fusion mode in the image processing model; the fusion unit is used for fusing the image characteristics output by the network module positioned in front of the fusion position in the image processing model and the image processing result of the previous image frame according to the fusion mode in the process of extracting the image characteristics of the current image frame by the image processing model to obtain a fusion result; and the image processing unit is used for inputting the fusion result to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the first aspect or any one of the possible implementation manners of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings incorporated in and forming a part of the specification illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, as those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 shows a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic illustration showing an alternative model search based on a model depth parameter according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration showing an alternative model search based on a model width parameter according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration showing an alternative model search based on a convolution kernel size parameter according to an embodiment of the present disclosure;
FIG. 5 is a schematic illustration showing an alternative model search based on a convolutional layer grouping number parameter according to an embodiment of the present disclosure;
FIG. 6 is a schematic illustration showing an alternative model search based on attention module parameters provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart illustrating another image processing method provided by an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 9 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
According to research, in existing image processing methods based on a neural network model, each image is processed independently by the neural network model, and when there are multiple images, the neural network models used to process each image have the same structure. This processing mode does not allocate computing resources reasonably among the images, so the processing precision of the neural network model cannot meet practical application requirements.
Based on the above research, the present disclosure provides an image processing method. In the embodiment of the disclosure, when a current image frame is processed, information in a previous image frame can be transferred to the current image frame by combining with an image processing result of the previous image frame, so that the image processing result of the previous image frame is used as guide information of the current image frame, and the current image frame is processed by the guide information, so that feature information in the current frame can be better captured, and an accurate image processing result is obtained.
In the embodiment of the disclosure, for each current image frame to be processed, by setting a corresponding image processing model for the current image frame, and setting a fusion position and a fusion mode of an image processing result of a previous image frame in the image processing model, it is possible to reasonably allocate computing resources for data to be computed in the image frame, thereby improving the processing precision of the image processing model and obtaining an image processing result with higher accuracy.
To facilitate understanding of the present embodiment, first, an image processing method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the image processing method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the image processing method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an image processing method provided in an embodiment of the present disclosure is shown, where the method includes steps S101 to S107, where:
s101: and acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame.
S103: and acquiring an image processing model for processing the current image frame, and determining a fusion position and a fusion mode in the image processing model.
In embodiments of the present disclosure, an image processing model that meets the accuracy requirements may be determined for the current image frame. Here, the fusion position may be understood as a position where the image processing result of the previous image frame is fused into the searched image processing model, and may also be understood as a timing where the image processing result of the previous image frame and the image feature of the current image frame are fused in the image processing model.
The fusion mode may be understood as a specific fusion algorithm for fusing the image processing result of the previous image frame and the image feature of the current image frame.
In an alternative embodiment, the fusion mode comprises any one of the following: an addition operation (Add), a multiplication operation (Mul), or a concatenation operation (Cat).
It should be noted here that the structures of the image processing models corresponding to different image frames may be the same or different; in the image processing models corresponding to different image frames, the fusion positions and the fusion modes for data fusion may be the same or different, which is not specifically limited in this disclosure.
S105: and in the process of extracting the image characteristics of the current image frame by the image processing model, fusing the image characteristics output by the network module positioned in front of the fusion position in the image processing model and the image processing result of the previous image frame according to the fusion mode to obtain a fusion result.
Here, the image processing model searched for the current image frame includes a plurality of network modules having a connection relationship, and for example, the following types of network modules may be included: convolutional layers, pooling layers, normalization layers, and the like.
After the image processing model is acquired, a plurality of network modules included in the image processing model extract image features of a current image frame. And when the features are extracted to the network module before the fusion position, fusing the image features output by the network module and the image processing result of the previous image frame to obtain a fusion result.
For example, the image features output by the network module and the image processing result of the previous image frame may be fused according to any one of the addition (Add), multiplication (Mul), or concatenation (Cat) operations to obtain a fusion result.
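To make the three fusion modes concrete, a minimal PyTorch-style sketch is given below; the function name and the assumption that the previous frame's result has already been projected to the shape of the current features are illustrative choices, not details fixed by the disclosure.

```python
import torch

def fuse(features, prev_result, mode="add"):
    """Fuse current-frame features with the previous frame's processing result.

    features: output of the network module before the fusion position, (N, C, H, W).
    prev_result: image processing result of the previous image frame; for "add"
    and "mul" it is assumed to have been projected to the same shape as `features`.
    """
    if mode == "add":                      # addition operation (Add)
        return features + prev_result
    if mode == "mul":                      # multiplication operation (Mul)
        return features * prev_result
    if mode == "cat":                      # concatenation operation (Cat)
        return torch.cat([features, prev_result], dim=1)
    raise ValueError(f"unknown fusion mode: {mode}")
```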
S107: and inputting the fusion result to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
In the case that the image processing model is a pose estimation model, the image processing result is used to represent a pose estimation result of the target object included in the current image frame, wherein the pose estimation result can be understood as feature information of a limb key point of the target object.
In the case that the image processing model is a face detection model, the image processing result is used to represent a face detection result of a target face contained in the current image frame, where the face detection result may be understood as information of face key points of the target face, and the face key points may be key points used to represent five sense organs.
In the embodiment of the disclosure, when a current image frame is processed, information in a previous image frame can be transferred to the current image frame by combining with an image processing result of the previous image frame, so that the image processing result of the previous image frame is used as guide information of the current image frame, and the current image frame is processed by the guide information, so that feature information in the current image frame can be better grasped, and an accurate image processing result is obtained. For each current image frame to be processed, a corresponding image processing model is set for the current image frame, and the fusion position and the fusion mode of the image processing result of the previous image frame are set in the image processing model, so that the reasonable distribution of computing resources for the data to be computed in the image frame can be realized, the processing precision of the image processing model is improved, and the image processing result with higher accuracy is obtained.
For the above step S101:

The current image frame and its previous image frame may be image frames in a video clip. For the video clip, the types of the image frames are set in advance, and specifically include: key frames and non-key frames.
In setting the key frame and the non-key frame, the setting may be performed according to the content included in the image frame. For example, when the image content of the current image frame is changed (for example, a target object included in the image frame is changed) compared to the previous image frame, the current image frame may be determined as the key frame. In the case that the current image frame is a key frame, the image frame in the video segment that is located after the current image frame and before the next key frame is a non-key frame.
For example, the following image frames may be included in a video clip: key frame a1, non-key frame B1, non-key frame B2, key frame a2, non-key frame B3, non-key frame B4, non-key frame B5, key frames A3, ….
In the embodiment of the present disclosure, the manner of determining the image processing model corresponding to the key frame is different from that of the non-key frame, and will be described in the following embodiments.
Here, the key frame a1, the non-key frame B1, the non-key frame B2 may be taken as a set of image frames, and then a corresponding image processing model may be determined for each image in the set of image frames; it is also possible to take the key frame a2, the non-key frame B3, the non-key frame B4, the non-key frame B5 as another set of image frames and then determine a corresponding image processing model for each image in the set of image frames.
For step S103: acquiring an image processing model for processing a current image frame, comprising the steps of:
determining an image processing model for processing the current image frame based on a hyper-network; the super network includes a plurality of network modules having a connection relationship.
Here, the super network may be a preset neural network model including a plurality of network modules; a super network can be understood as a large neural network containing a plurality of network modules (blocks).
In the embodiment of the disclosure, a subnetwork model meeting the requirement can be searched for the current image frame in a super network to serve as an image processing model; the search process may include the following two ways.
The first method is as follows: on the premise of not changing the structure of a preset network module in the super network, a network module meeting the requirement is searched for the current image frame in the super network, and then a sub-network model is determined as an image processing model according to the connection relation of the searched network module in the super network.
The second method comprises the following steps: a network model meeting the requirements is searched in the super network by pruning (clipping) the structure of the preset network modules in the super network. The structure of the network modules is pruned according to the search ranges of the plurality of search dimensions, and a sub-network model is determined as the image processing model according to the connection relation, in the super network, of the searched network modules.

In addition, a network module meeting the requirements can also be searched by combining the first and second methods described above.

By searching an image processing model meeting the requirements for the current image frame through the above two methods, the search range of the network modules can be further expanded on the basis of the network structure of the existing super network, so that a sub-network model with better precision is searched within the expanded search range and used as the image processing model.
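As a rough illustration of how a candidate sub-network can be drawn from a super network under a set of search dimensions, the following sketch samples one value per dimension; the dimension names and candidate values are assumptions made for the example, not values given in the disclosure.

```python
import random

# A hypothetical search range covering several spatial search dimensions.
SEARCH_RANGE = {
    "depth":       [2, 3, 4],      # model depth: number of network modules kept
    "width":       [16, 24, 32],   # number of output feature channels per module
    "kernel_size": [3, 5],         # convolution kernel size
    "groups":      [1, 2],         # number of convolutional layer groups
}

def sample_sub_network_config(search_range):
    """Randomly pick one value per search dimension to define a candidate sub-network."""
    return {dim: random.choice(values) for dim, values in search_range.items()}

config = sample_sub_network_config(SEARCH_RANGE)
# e.g. {'depth': 3, 'width': 24, 'kernel_size': 3, 'groups': 2}
```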
In an alternative embodiment, determining an image processing model for processing the current image frame based on a hyper-network comprises the following processes:
s11, determining a plurality of continuous non-key frames including the current image frame in the video segment to which the current image frame belongs;
s12, searching at least one subnetwork model group meeting a first computing resource constraint condition in a pre-trained first super network, wherein each subnetwork model group comprises a first subnetwork model searched for each non-key frame;
s13, determining an image processing model for processing the current image frame based on the at least one sub-network model group.
For example, the following image frames are included in a video segment: key frame a1, non-key frame B1, non-key frame B2, key frame a2, non-key frame B3, non-key frame B4, non-key frame B5, key frames A3, ….
If the current image frame is non-key frame B4, then the consecutive non-key frames comprising the current image frame are: non-key frame B3, non-key frame B4, non-key frame B5. Similarly, if the current image frame is a non-key frame B1, then the consecutive non-key frames comprising the current image frame are non-key frame B1 and non-key frame B2.
After the plurality of consecutive non-key frames is determined, a first subnetwork model can be searched in a pre-trained first super-network for each non-key frame.
It should be noted that the first sub-network model is searched for each non-key frame by the search method described in the above-described first and/or second method, which is not described herein again.
It should be noted that the model structures of the first sub-network models searched for each non-key frame may be the same or different, and this disclosure does not specifically limit this.
By searching different first sub-network models for each non-key frame, the reasonable distribution of computing resources among a plurality of continuous non-key frames can be realized, so that the global optimization of the image processing method is realized, and the processing accuracy of the image processing method is improved.
For a plurality of consecutive non-key frames, a plurality of first sub-network models searched for the plurality of consecutive non-key frames are required to satisfy a first computational resource constraint. At this time, a plurality of first sub-network models satisfying the first computing resource constraint condition constitute one sub-network model group.
Here, the statement that the plurality of first sub-network models satisfy the first computing resource constraint condition can be understood in any of the following ways:
each first subnetwork model satisfying a first computing resource constraint; and/or the plurality of first sub-network models satisfy the first computing resource constraint condition as a whole; and/or each network module constituting each first subnetwork model satisfies the first computing resource constraint.
Here, an arbitrary first computing resource constraint condition may be set; for example, the first computing resource constraint condition may involve the following quantities: the floating-point operations (FLOPs) of the image processing model, the computation speed (FPS) of the image processing model, the number of parameters (Parameters) of the image processing model, and the like. FLOPs are used to measure the computational complexity of the image processing model.
After determining the at least one sub-network model group that satisfies the first computing resource constraint condition, an image processing model for processing the current image frame may be determined based on the at least one sub-network model group.
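A minimal sketch of checking the first computing resource constraint condition for one candidate sub-network model group might look as follows; the FLOPs-budget interpretation (a total budget, optionally plus a per-model budget) is only one of the readings listed above, and the numbers are illustrative.

```python
def group_meets_first_constraint(flops_per_model, total_budget, per_model_budget=None):
    """Check whether one candidate sub-network model group satisfies the constraint.

    flops_per_model: estimated FLOPs of the first sub-network model searched for
    each of the consecutive non-key frames. Per-module constraints could be
    checked analogously; this sketch only covers the per-model and overall cases.
    """
    if per_model_budget is not None and any(f > per_model_budget for f in flops_per_model):
        return False
    return sum(flops_per_model) <= total_budget

# Three consecutive non-key frames (e.g. B3, B4, B5) under a 600-MFLOPs overall budget.
print(group_meets_first_constraint([150e6, 220e6, 180e6], total_budget=600e6))  # True
```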
In the above embodiment, by searching for at least one sub-network model group satisfying the first computing resource constraint condition in the first super-network and then determining the image processing model corresponding to each non-key frame according to the at least one sub-network model group, it is possible to realize automatic allocation of computing resources among a plurality of consecutive non-key frames when the image processing models corresponding to the plurality of consecutive non-key frames satisfy the overall computing resource constraint (i.e., the first computing resource constraint condition), thereby realizing global optimization of the image processing method and improving the processing accuracy of the image processing method.
In an alternative embodiment, step S13: determining an image processing model for processing the current image frame based on the at least one sub-network model group, comprising the processes of:
step S131: obtaining a target test sample;
step S132: testing each sub-network model group through the target test sample to obtain at least one test result;
step S133: and selecting a target sub-network model group corresponding to a target test result from the at least one test result, and determining the image processing model according to the target sub-network model group, wherein the target test result is a test result meeting a first test condition in the at least one test result.
Specifically, after at least one sub-network model group is determined, a group of sub-network models with better overall feature propagation effect needs to be selected from all sub-network model groups meeting the first computing resource constraint condition.
At this time, the target test sample can be processed by each sub-network model group to obtain at least one test result, wherein the test result is used for representing the processing accuracy of each sub-network model in each sub-network model group.
Here, each target test sample may include a plurality of image frames, and one sub-network model in each sub-network model group is used for processing one image frame.
After obtaining the at least one test result, a sub-network model group corresponding to a target test result meeting the first test condition may be selected as a target sub-network model group from the at least one test result, which specifically includes the following modes:
the first method is as follows:
after obtaining the at least one test result, the sub-network model group corresponding to the optimal test result may be selected from the at least one test result as the target sub-network model group.
The second method comprises the following steps:
after the at least one test result is obtained, a test result which is greater than or equal to a preset test threshold value can be selected from the at least one test result, and the sub-network model group corresponding to the selected test result is used as a target sub-network model group.
For the test result of each sub-network model group, a plurality of sub-test results may be included in the test result, where each sub-test result is used to characterize the test result of each first sub-network model in the sub-network model group.
Here, each test result being greater than or equal to the preset test threshold may be understood as: each sub-test result is greater than or equal to a preset test threshold value, and/or the mean value of each sub-test result is greater than or equal to a preset test threshold value.
After the target sub-network model group is determined, the sub-network models in the target sub-network model group can be respectively used as the image processing models corresponding to the plurality of consecutive non-key frames, so that the image processing model corresponding to the current image frame is determined.
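The selection of the target sub-network model group described above can be sketched as follows; the `evaluate` callback and the threshold handling are illustrative stand-ins for testing each group on the target test sample, not an API defined by the disclosure.

```python
def select_target_group(candidate_groups, evaluate, test_threshold=None):
    """Select the target sub-network model group from candidate groups.

    evaluate(group) is assumed to return the group's (mean) accuracy on the target
    test sample. With `test_threshold` set, groups reaching the threshold are kept
    first (the second way above); the best-scoring group is then returned (the
    first way above).
    """
    scored = [(evaluate(group), group) for group in candidate_groups]
    if test_threshold is not None:
        passing = [(score, group) for score, group in scored if score >= test_threshold]
        scored = passing or scored
    best_score, best_group = max(scored, key=lambda item: item[0])
    return best_group
```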
In the above embodiment, by selecting the sub-network model with the optimal processing precision for the plurality of consecutive non-key frames in the at least one sub-network model group by the target test sample, the precision of image processing can be improved, so as to obtain an image processing result with higher accuracy, for example, an attitude estimation result.
In an alternative embodiment, step S12: searching a pre-trained first hyper-network for at least one subnetwork model group meeting a first computing resource constraint, comprising the process of:
step S121: determining a target search range of each non-key frame; each target search range comprises a search range of a plurality of first search dimensions in preset search dimensions; the first search dimension includes: a spatial search dimension for indicating an image processing model structure, and a temporal search dimension for indicating the fusion position and the fusion manner;
step S122: and searching a subnetwork model meeting the first computing resource constraint condition in a pre-trained first hyper-network based on the target search range of each non-key frame, and determining the subnetwork model group according to the searched subnetwork model.
For example, the following image frames are included in a video segment: key frame a1, non-key frame B1, non-key frame B2, key frame a2, non-key frame B3, non-key frame B4, non-key frame B5, key frames A3, …. The following description will take non-key frame B1 and non-key frame B2 as examples.
A corresponding target search range is determined for non-key frames B1 in the spatial search dimension and the temporal search dimension, and a subnetwork model C1 is searched in the first super-network based on the target search range. A corresponding target search range is determined for non-key frames B2 in the spatial search dimension and the temporal search dimension, and a subnetwork model C2 is searched in the first super-network based on the target search range. Next, a determination is made as to whether the sum of the computational complexity of subnetwork model C1 and subnetwork model C2 satisfies the first computational resource constraint.
If the constraint is satisfied, sub-network model C1 and sub-network model C2 are taken as one sub-network model group. If not, the search for sub-network models satisfying the first computing resource constraint condition continues based on the target search range of each non-key frame.
In the above embodiment, the same hyper-network may be used for searching for different non-key frames, and a group of spatial structures and sub-network models with different fusion positions and fusion modes are obtained through searching. By the processing mode, the reasonable distribution of computing resources among a plurality of continuous non-key frames can be realized, so that the precision of image processing is improved, and an image processing result with higher accuracy is obtained.
In an embodiment of the present disclosure, the presetting of the search dimension includes: a spatial search dimension and a temporal search dimension, the spatial search dimension including at least one of: model structure parameters, convolutional layer parameters, attention module parameters; the temporal search dimension includes: and fusing the parameters.
The model structure parameters are used for representing the number of network modules required by the image processing model to be searched in the super network. And if the image processing model to be searched is the image processing model of the current image frame, the super network is the first super network.
Here, the model structure parameters may include: a model depth parameter, wherein the model depth parameter is used to characterize the number of network modules required in each network module of the hyper-network for the image processing model to be searched.
Fig. 2 is a schematic diagram of model search based on a model depth parameter in an arbitrary network. As can be seen from Fig. 2, the network includes network module 1 (block1), network module 2 (block2), network module 3 (block3), network module 4 (block4), and an output layer.

When the model depth parameter is 2, the input data is processed by network module 1 (block1) and network module 2 (block2), and the data processed by network module 2 is input directly to the output layer, skipping network module 3 (block3) and network module 4 (block4).

As can be seen from the above description, the technical solution provided by the present disclosure may search, according to the model depth parameter, for a specified number of network modules among the network modules to perform processing. For example, the first N network modules process the input data, while the remaining network modules do not perform any data processing on it, that is, they are skipped directly.
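The following toy PyTorch module illustrates how a model depth parameter can skip the trailing network modules, as in the Fig. 2 example; the layer types and channel counts are arbitrary choices made for the sketch.

```python
import torch
import torch.nn as nn

class DepthSearchableNet(nn.Module):
    """Toy network whose effective depth is set by a model depth parameter."""

    def __init__(self, channels=16, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_blocks)
        )
        self.output_layer = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x, depth=2):
        # Only the first `depth` network modules process the data; the remaining
        # modules are skipped, as in the Fig. 2 example with a depth parameter of 2.
        for block in self.blocks[:depth]:
            x = block(x)
        return self.output_layer(x)

net = DepthSearchableNet()
out = net(torch.randn(1, 16, 32, 32), depth=2)   # blocks 3 and 4 are skipped
```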
For the above-described convolutional layer parameters, the convolutional layer parameters may include at least one of: the number of convolutional layer channels, a convolution kernel size parameter, and a convolutional layer grouping number parameter.
The number of convolutional layer channels is used to indicate the number of feature channels to be output by the network modules required by the image processing model to be searched in the super network.
Fig. 3 is a schematic diagram showing model search based on model width parameters in any network. As can be seen from fig. 3, in the case where the model width parameter is not set, after the input data of 3 channels is calculated by the convolution kernel with the size of I × O × K, the output data of 4 channels can be obtained, where I is 3, O is 4, and K is 3.
As can be seen from fig. 3, after the model width parameter is set, a specified number of feature channels can be selected from all the feature channels of the data output by the network module, which reduces the width of the output data and thus the computation amount of the neural network model; for example, the data of the first N feature channels in the output data may be selected as the output data of the network module.
For example, when the model width parameter is 2, as shown in fig. 3, the data of the first 2 feature channels may be selected from the feature data of the 4 feature channels as the output data of the network module.
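A minimal sketch of this channel selection is shown below; it only illustrates keeping the first channels, whereas a real weight-sharing super network would also slice the corresponding convolution weights.

```python
def take_first_channels(features, width):
    """Keep only the first `width` feature channels of a module's output.

    features: tensor of shape (N, C, H, W). With the Fig. 3 example (C = 4,
    width = 2), only the data of the first 2 feature channels is passed on.
    """
    return features[:, :width]
```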
And secondly, the convolution kernel size parameter is used for representing the convolution kernel size of the convolution layer required by the image processing model to be searched in the super network.
An alternative schematic illustration of model search based on the convolution kernel size parameter is shown in fig. 4. In the convolutional layer of the network module, the size of the initial convolution kernel may be 4 × 4, and after the convolution kernel size parameter is set, a convolution kernel of 2 × 2 size may be selected from the initial convolution kernel as the convolution kernel of the convolutional layer according to the convolution kernel size parameter.
In an embodiment of the present disclosure, a convolution kernel of size 2 x 2 may be selected at a position intermediate to the initial convolution kernel, as shown in fig. 4.
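The centred selection of a smaller convolution kernel from the initial kernel (Fig. 4) can be sketched as follows; the centring rule is an assumption based on the description above.

```python
def crop_kernel(weight, kernel_size):
    """Take a centred kernel_size x kernel_size slice of a larger convolution kernel.

    weight: convolution weights of shape (C_out, C_in, K, K). For the Fig. 4
    example, a 2 x 2 kernel is taken from the middle of a 4 x 4 initial kernel.
    """
    k = weight.shape[-1]
    start = (k - kernel_size) // 2
    return weight[..., start:start + kernel_size, start:start + kernel_size]
```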
And thirdly, the convolutional layer grouping quantity parameter is used for representing the grouping quantity of the convolutional layers required by the image processing model to be searched in the hyper-network.
Here, convolutional layer grouping is to divide input data into a plurality of sub-data groups, and further perform convolutional calculation on each sub-data group.
As can be seen from the convolution calculation shown in fig. 5, "before grouping", the input data is the feature data of 4 channels, and the output data is the feature data of 2 channels, in this case, the number of convolution kernels is required to be 2, and the size of each convolution kernel is 4 × K.
That is, assuming that the number of channels of the input data is Cin and the number of channels of the output data is Cout, the number of convolution kernels is Cout and the size of each convolution kernel is Cin × K; the total size of the convolution kernels in the convolutional layer is thus Cout × Cin × K.
Based on this, it is assumed that, as shown in "after grouping" in fig. 5, the convolutional layer grouping number parameter is 2, which means that the input data is split into 2 groups of sub-data, each group of sub-data is characteristic data of 2 channels, and the output data corresponding to each group of sub-data is characteristic data of 1 channel. For the sub-data of 2 channels, the size of the corresponding convolution kernel is 1 × 2 × K (or, (Cout/2) × (Cin/2) × K).
In the above embodiment, by grouping the convolution layers, the number of parameters required for convolution calculation can be reduced, thereby simplifying the calculation process of the image processing model and increasing the calculation efficiency of the image processing model.
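The parameter reduction brought by convolutional layer grouping can be verified with a short PyTorch example matching the Fig. 5 setting (4 input channels, 2 output channels, grouping number 2); the kernel size of 3 is an arbitrary choice for the sketch.

```python
import torch.nn as nn

# "Before grouping": 4 input channels -> 2 output channels, kernel size K = 3.
conv_full = nn.Conv2d(4, 2, kernel_size=3, groups=1, bias=False)

# "After grouping": the convolutional layer grouping number parameter is 2, so each
# group maps 2 input channels to 1 output channel with its own smaller kernel.
conv_grouped = nn.Conv2d(4, 2, kernel_size=3, groups=2, bias=False)

print(conv_full.weight.shape, conv_full.weight.numel())        # (2, 4, 3, 3) -> 72
print(conv_grouped.weight.shape, conv_grouped.weight.numel())  # (2, 2, 3, 3) -> 36
```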
The attention module parameter is used to indicate whether the attention module preset in each network module is used.

In the embodiment of the disclosure, when an attention module is provided, whether each network module uses it may be determined by the attention module parameter. As shown in fig. 6, when the attention module is needed, the output data of each network module passes through the corresponding attention module and is then input to the next network module for processing; when the attention module is not needed, the attention module is skipped directly.
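A toy network module with a switchable attention module might be sketched as follows; the squeeze-and-excitation style attention used here is only a placeholder, since the disclosure does not fix the form of the preset attention module.

```python
import torch.nn as nn

class BlockWithOptionalAttention(nn.Module):
    """A network module whose preset attention module can be switched on or off."""

    def __init__(self, channels=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Placeholder attention module (squeeze-and-excitation style).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x, use_attention=True):
        x = self.conv(x)
        if use_attention:          # attention module parameter: use the module
            x = x * self.attention(x)
        return x                   # otherwise the attention module is skipped
```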
And the fusion parameter is used for indicating the fusion position and the fusion mode of the image processing result of the previous image frame fused into the image processing model.
Here, the fusion position may be the position of any network module along the depth direction of the image processing model at which the image processing result of the previous image frame is fused. The fusion mode includes any one of the following: an addition operation (Add), a multiplication operation (Mul), or a concatenation operation (Cat).
In the above embodiment, by setting the model structure parameter, the convolutional layer parameter, the attention module parameter, and the fusion parameter, the search space of the neural network model can be expanded in the model search space, so that the lightweight neural network model with the processing accuracy meeting the requirement is searched, and the optimal image processing result is obtained.
In the embodiment of the present disclosure, after the image processing model, the fusion position, and the fusion manner are determined in the above-described manner, in a process of extracting the image feature of the current image frame by the image processing model, the image feature output by the network module located before the fusion position in the image processing model and the image processing result of the previous image frame may be fused in the fusion manner, so as to obtain a fusion result. After the fusion result is obtained, carrying out convolution calculation on the fusion result; and inputting the fusion result after the convolution calculation to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
It should be noted that before the image feature and the image processing result of the previous image frame are fused, the convolution calculation may be performed on the image processing result by another convolution layer, so that the image processing result after the convolution calculation and the image feature are fused to obtain a fusion result.
In an alternative embodiment, the training of the first super-network to be trained may be performed by the following steps, specifically including:
(1) acquiring a first training sample set, wherein the first training sample set comprises a plurality of first training samples, and each first training sample comprises a plurality of non-key frames;
(2) extracting a plurality of groups of sub-networks to be trained from a first super-network to be trained; the number of the sub-networks in each group of sub-networks to be trained is the same as that of the non-key frames in each first training sample, and one sub-network to be trained correspondingly processes one non-key frame in the training sample;
(3) training each group of sub-networks to be trained through the first training sample set to obtain the trained first super-network.
When a first hyper-network to be trained is trained, a first training sample set may be obtained, where the first training sample set includes a plurality of first training samples, and each first training sample includes a plurality of non-key frames. Here, the number of non-key frames included in each first training sample of the first training sample set may be set according to actual needs.
It is understood that, in the embodiments of the present disclosure, a plurality of first training sample sets may be constructed, with different first training sample sets containing different numbers of non-key frames per first training sample. This setting can meet requirements for different numbers of consecutive non-key frames, and can also improve the processing accuracy of the sub-network models in the first super-network under each of these cases.
In the embodiment of the present disclosure, for each first training sample, N groups of sub-networks to be trained may be extracted from the first super-network to be trained: one group in which the sub-network model with the largest structure in the first super-network is extracted for each non-key frame in the first training sample, one group in which the sub-network model with the smallest structure in the first super-network is extracted for each non-key frame, and N-2 groups in which a sub-network model is randomly extracted for each non-key frame. For the randomly extracted groups, the extraction is performed independently for each non-key frame, and the spatial search dimension parameters and the temporal search dimension parameters differ between the random extractions. Training is then performed based on the extracted N groups of sub-networks to be trained, where each sub-network to be trained in a group processes one non-key frame.
Here, when the extracted sub-network models are trained on each first training sample, the prediction result of the sub-network model with the largest structure for the first training sample may be used as the training label of the other sub-networks to be trained in the group, so as to perform supervised training on those sub-networks.
Here, when randomly extracting a plurality of groups of subnetworks to be trained, the plurality of groups of subnetworks to be trained may be randomly extracted in the first hyper network to be trained according to the plurality of first search dimensions described above.
Specifically, a plurality of groups of sub-networks to be trained can be randomly extracted from the first super-network to be trained according to the model structure parameters, the convolutional layer parameters, the attention module parameters and the fusion parameters.
After the first training sample set is obtained, each group of sub-networks to be trained can be trained through the first training sample set, and the first super-network is obtained after training.
In the above embodiment, training the first super-network to be trained in this way yields sub-network models with different structures and variable lengths, which can satisfy different constraint conditions and thus adapt to different application scenarios.
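The following sketch illustrates one way such group-wise training with the largest sub-network's predictions as training labels could be organized; supernet.sample is a hypothetical interface, the MSE distillation loss is an assumption (the disclosure does not specify the loss), and optimizer handling is simplified.

```python
import torch
import torch.nn.functional as F

def sample_group(supernet, num_frames, mode):
    # One sub-network per non-key frame; `supernet.sample(mode=...)` is a hypothetical
    # interface that returns the largest, smallest, or a randomly drawn sub-network.
    return [supernet.sample(mode=mode) for _ in range(num_frames)]

def run_group(group, frames, prev_result):
    # Each sub-network processes one non-key frame, and its output is passed on as the
    # "previous image frame result" of the next sub-network in the group.
    outputs = []
    for subnet, frame in zip(group, frames):
        prev_result = subnet(frame, prev_result)
        outputs.append(prev_result)
    return outputs

def train_step(supernet, optimizer, frames, prev_result, n_random=2):
    # N = 2 + n_random groups: largest structure, smallest structure, and random draws.
    groups = ([sample_group(supernet, len(frames), 'max')] +
              [sample_group(supernet, len(frames), 'min')] +
              [sample_group(supernet, len(frames), 'random') for _ in range(n_random)])

    # Predictions of the largest group serve as training labels for the other groups.
    with torch.no_grad():
        teacher_outputs = run_group(groups[0], frames, prev_result)

    optimizer.zero_grad()
    loss = 0.0
    for group in groups[1:]:
        student_outputs = run_group(group, frames, prev_result)
        for s, t in zip(student_outputs, teacher_outputs):
            loss = loss + F.mse_loss(s, t)
    loss.backward()
    optimizer.step()
    return loss.item()
```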
In an optional embodiment, when the previous image frame is a key frame, the image processing model corresponding to the previous image frame may be determined through the following steps:
(1) determining, among preset search dimensions, search ranges of a plurality of second search dimensions matched with the previous image frame; the plurality of second search dimensions include a spatial search dimension for indicating the structure of the image processing model corresponding to the previous image frame;
(2) searching a pre-trained second super network based on the search range of each second search dimension to obtain at least one second sub network model meeting the second computing resource constraint condition; wherein the second computing resource constraint is used to characterize the complexity of processing the key frame;
(3) determining the image processing model corresponding to the previous image frame based on the at least one second sub-network model.
In the embodiment of the present disclosure, the preset search dimensions include a spatial search dimension and a temporal search dimension, where the spatial search dimension includes at least one of: model structure parameters, convolutional layer parameters, and attention module parameters.
Here, the plurality of second search dimensions include at least one of the following parameters: model structure parameters, convolutional layer parameters, attention module parameters. The introduction to the model structure parameters, convolutional layer parameters, and attention module parameters is as described above and will not be described in detail here.
In the embodiment of the present disclosure, after a plurality of second search dimensions are determined, a search range may be determined for each second search dimension, and at this time, the plurality of second search dimensions correspond to the plurality of search ranges.
For example, a search range (i.e., a parameter range) may be determined for each dimension of the model structure parameters, the convolutional layer parameters, the attention module parameters.
After determining the search scope for each second search dimension, a search may be performed in the second super-network based on the determined plurality of search scopes to obtain at least one second sub-network model that satisfies the second computing resource constraint. Then, an image processing model corresponding to the previous image frame is determined based on at least one second sub-network model satisfying the second computing resource constraint condition.
In the above embodiment, by setting the second search dimensions and searching the second super-network for a sub-network model corresponding to the key frame according to these dimensions, an image processing model meeting the requirements can be automatically determined in the second super-network.
Because the image processing model corresponding to the previous image frame is a lightweight network model, this approach improves the processing efficiency of the image processing model while maintaining its processing accuracy, so as to meet application scenarios with high real-time requirements, such as interactive entertainment applications related to short videos, for example those involving human body poses.
In an alternative embodiment, the above step of determining the image processing model corresponding to the previous image frame based on the at least one second sub-network model includes the following process:
firstly, processing a preset test set through each second sub-network model to obtain a plurality of test results; wherein the test result is used for characterizing the prediction accuracy of the corresponding second sub-network model;
then, determining, among the plurality of test results, the second sub-network model corresponding to a target test result as the image processing model corresponding to the previous image frame, where the target test result is a test result meeting a second test condition among the plurality of test results.
In the embodiment of the present disclosure, after at least one second sub-network model satisfying the second computing resource constraint condition is found by searching, a second sub-network model whose processing accuracy meets the requirement may be determined among the at least one second sub-network model as the image processing model corresponding to the previous image frame.
Specifically, a preset test set may be obtained, then, a deployment environment of the image processing method in the embodiment of the present disclosure is simulated, and each test sample in the preset test set is processed through each second sub-network model to obtain a corresponding test result, where each second sub-network model corresponds to one test result.
Here, satisfying the second test condition may be understood as: selecting a second sub-network model corresponding to the optimal test result from the plurality of test results as an image processing model corresponding to the previous image frame; and/or selecting a second sub-network model corresponding to the target test result which is greater than or equal to the preset test threshold value from the plurality of test results as an image processing model corresponding to the previous image frame.
In the foregoing embodiment, by testing, through the preset test set, the at least one screened second sub-network model satisfying the second computing resource constraint condition, the second sub-network model with the best processing accuracy can be obtained, so that an image processing model suitable for practical applications with high real-time requirements can be selected while the processing accuracy of the image processing model is ensured.
In the embodiment of the present disclosure, if the target test result is not determined among the test results of the at least one second sub-network model, the search range corresponding to each second search dimension may be updated. The second super-network is then searched according to the updated search ranges until the target test result is determined among the test results of at least one second sub-network model satisfying the second computing resource constraint condition, and the image processing model corresponding to the previous image frame is determined based on the second sub-network model corresponding to the target test result.
After the search ranges are updated, a search may be conducted based on the updated search ranges to obtain at least one second sub-network model satisfying the second computing resource constraint condition. Each searched second sub-network model may be tested in the manner described above to obtain a plurality of test results. When the plurality of test results meet the preset accuracy requirement, for example, when they include a target test result greater than the preset test threshold, the image processing model corresponding to the previous image frame is determined according to the second sub-network model corresponding to the target test result.
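A possible form of this constrained search-and-test loop is sketched below; estimate_flops, evaluate, update_ranges and supernet.extract are assumed helper functions rather than the API of any particular library, and random search is only one of several feasible search strategies.

```python
import random

def search_keyframe_model(supernet, search_ranges, flops_budget, test_set,
                          accuracy_threshold, n_candidates=50, max_rounds=5):
    # Random-search sketch: draw candidate sub-networks within the current search ranges,
    # keep those within the computing resource budget, test them, and widen or shift the
    # ranges if no candidate reaches the accuracy threshold.
    for _ in range(max_rounds):
        candidates = []
        for _ in range(n_candidates):
            config = {dim: random.choice(options) for dim, options in search_ranges.items()}
            subnet = supernet.extract(config)
            if estimate_flops(subnet) <= flops_budget:       # second computing resource constraint
                candidates.append(subnet)

        results = sorted(((evaluate(s, test_set), s) for s in candidates),
                         key=lambda r: r[0], reverse=True)
        if results and results[0][0] >= accuracy_threshold:  # second test condition met
            return results[0][1]                             # image processing model for the key frame

        search_ranges = update_ranges(search_ranges)         # no target test result: update the ranges
    return None
```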
In an alternative embodiment, the training of the second super-network to be trained comprises the following steps:
(1) acquiring a second training sample set; the second training sample set comprises a plurality of second training samples;
(2) extracting a plurality of sub-network models for each second training sample in the second super-network to be trained;
(3) training the extracted sub-network models based on a plurality of second training samples in the second training sample set to obtain the trained second super-network.
For each second training sample in the second training sample set, N sub-network models are extracted from the second super-network to be trained according to the plurality of second search dimensions described above, namely: the sub-network model with the largest structure in the second super-network to be trained, the sub-network model with the smallest structure in the second super-network to be trained, and N-2 randomly extracted sub-network models.
Next, when the second training sample set includes M second training samples, the extracted M × N sub-network models may be trained based on the second training sample set, so as to obtain a trained second super-network.
In this embodiment of the present disclosure, the sample label corresponding to each second training sample is a prediction result of the sub-network model with the largest structure in the extracted at least one sub-network model on the second training sample.
Here, when the extracted sub-network models are trained on each second training sample, the prediction result of the sub-network model with the largest structure for the second training sample may be used as the training label of the other sub-network models, so as to perform supervised training on them.
In the above embodiment, by randomly extracting at least one sub-network model and training each extracted sub-network model, it is possible to obtain sub-network models satisfying different second computing resource constraints through one training process of the super-network, so that the super-network can adapt to a wider application scenario, thereby reducing the overhead of network structure search.
Further, in the above embodiment, the prediction result of the sub-network model with the largest structure for the training sample is used as the training label of the other sub-network models, and supervised training of those sub-network models gives each extracted sub-network model high prediction accuracy, so that a lightweight neural network model meeting real-time requirements can be selected to process images while the prediction accuracy of the neural network model is ensured.
It is assumed here that the image processing method provided by the present disclosure is a pose estimation method. Fig. 7 shows a flowchart of this image processing method provided in an embodiment of the present disclosure.
the flow shown in fig. 7 includes a process flow for one key frame and two non-key frames. The key frame is marked as a key frame t, and the non-key frames are marked as a non-key frame t +1 and a non-key frame t +2, respectively.
For the key frame t, the corresponding image processing model (i.e., the processing model of the single-frame image) can be determined for the key frame t in the manner described above; for the non-key frame t +1 and the non-key frame t +2, the corresponding image processing models may be determined for the non-key frame t +1 and the non-key frame t +2 in the manner described above, and the specific determination process is as described above and will not be described in detail here.
In the embodiment of the present disclosure, the key frame t is first processed by the processing model of the single-frame image to obtain the predicted heat map H_t. The predicted heat map H_t is processed by the convolution layer 1 and then input to the fusion module 1. For the non-key frame t+1, the image features of the non-key frame t+1 are extracted through the network module F1 and the network module F2 in the image processing model t+1 to obtain the image feature F_2 of the non-key frame t+1; at this time, the image feature F_2 of the non-key frame t+1 is also input to the fusion module 1. The fusion module 1 obtains the predicted heat map H_t and the image feature F_2 of the non-key frame t+1, and fuses them in the corresponding fusion mode (for example, multiplication, as shown in fig. 7) to obtain a fusion result; the fusion result is then processed by the convolution layer 2 to obtain a convolution processing result F_2'. Next, the convolution processing result F_2' is used as the input of the network module F3, and is processed by the network module F3 and the network module F4 to obtain the predicted heat map H_t+1.
The predicted heat map H_t+1 is processed by the convolution layer 3 and then input to the fusion module 2. For the non-key frame t+2, the image features of the non-key frame t+2 are extracted through the network modules F1 to F3 in the image processing model t+2 to obtain the image feature F_3 of the non-key frame t+2; at this time, the image feature F_3 of the non-key frame t+2 is also input to the fusion module 2. The fusion module 2 obtains the predicted heat map H_t+1 and the image feature F_3 of the non-key frame t+2, and fuses them in the corresponding fusion mode (for example, addition, as shown in fig. 7) to obtain a fusion result; the fusion result is then processed by the convolution layer 4 to obtain a convolution processing result F_3'. Next, the convolution processing result F_3' is used as the input of the network module F4, and the network module F4 processes F_3' to obtain the predicted heat map H_t+2.
The processing procedure for other non-key frames is as described above, and is not described in detail here.
It should be noted that the prediction heat map may be understood as a posture estimation result of the corresponding image frame, that is, feature information of a limb key point of the target object included in the image frame.
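For readability, the flow of fig. 7 can be summarized by the following sketch; it assumes each non-key-frame model exposes its network modules F1-F4 as a list named blocks, and all function and argument names are illustrative only.

```python
def process_clip(single_frame_model, model_t1, model_t2,
                 conv1, conv2, conv3, conv4,
                 key_frame, frame_t1, frame_t2):
    # conv1..conv4 are the convolution layers of Fig. 7; the fusion positions and
    # modes follow the figure (Mul after F2 for frame t+1, Add after F3 for frame t+2).

    # Key frame t: the single-frame processing model predicts the heat map H_t.
    H_t = single_frame_model(key_frame)

    # Non-key frame t+1: F1 and F2 extract features, fused with conv1(H_t) by multiplication.
    x = frame_t1
    for blk in model_t1.blocks[:2]:      # F1, F2
        x = blk(x)
    fused = x * conv1(H_t)               # fusion module 1 (Mul)
    x = conv2(fused)                     # convolution processing result F_2'
    for blk in model_t1.blocks[2:]:      # F3, F4
        x = blk(x)
    H_t1 = x                             # predicted heat map H_t+1

    # Non-key frame t+2: F1-F3 extract features, fused with conv3(H_t+1) by addition.
    x = frame_t2
    for blk in model_t2.blocks[:3]:      # F1, F2, F3
        x = blk(x)
    fused = x + conv3(H_t1)              # fusion module 2 (Add)
    x = conv4(fused)                     # convolution processing result F_3'
    x = model_t2.blocks[3](x)            # F4
    H_t2 = x                             # predicted heat map H_t+2

    return H_t, H_t1, H_t2
```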
As can be seen from the above description, in this embodiment, by determining the fusion position and the fusion mode for the image processing model of each current image frame, the connection mode between consecutive image frames can be searched automatically; combined with the spatial-dimension search, computing resources are automatically allocated among the image frames, yielding a more efficient video human-body pose estimation model that greatly reduces computational complexity and increases prediction speed without reducing prediction accuracy.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an image processing apparatus corresponding to the image processing method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the image processing method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 8, a schematic diagram of an image processing apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: a first acquisition unit 81, a second acquisition unit 82, a determination unit 83, a fusion unit 84, an image processing unit 85; wherein,
a first obtaining unit 81, configured to obtain a current image frame and obtain an image processing result of an image frame previous to the current image frame;
a second obtaining unit 82, configured to obtain an image processing model for processing the current image frame;
a determining unit 83, configured to determine a fusion position and a fusion manner in the image processing model;
the fusion unit 84 is configured to fuse, according to the fusion manner, the image feature output by the network module located before the fusion position in the image processing model and the image processing result in the process of extracting the image feature of the current image frame by the image processing model, so as to obtain a fusion result;
and the image processing unit 85 is configured to input the fusion result to a network module located behind the fusion position in the image processing model to perform image processing, so as to obtain an image processing result of the current image frame.
In this embodiment, when a current image frame is processed, information in a previous image frame can be transferred to the current image frame by combining with an image processing result of the previous image frame, so that the image processing result of the previous image frame is used as guidance information of the current image frame, and the current image frame is processed by the guidance information, so that feature information in the current image frame can be better captured, and an accurate image processing result can be obtained. In the embodiment of the disclosure, for each current image frame to be processed, a corresponding image processing model is set for the current image frame, and a fusion position and a fusion mode of an image processing result of the previous image frame are set in the image processing model, so that computing resources can be reasonably distributed among the images, and the image processing method in the disclosure can meet application scenes with high real-time requirements.
In a possible implementation, the second obtaining unit 82 is configured to: determining an image processing model for processing the current image frame based on a hyper-network; the super network includes a plurality of network modules having a connection relationship.
In a possible implementation, the second obtaining unit 82 is further configured to: determining a plurality of continuous non-key frames containing the current image frame in a video clip to which the current image frame belongs; searching at least one subnetwork model group meeting a first computing resource constraint condition in a pre-trained first hyper-network, wherein each subnetwork model group comprises a first subnetwork model searched for each non-key frame; determining an image processing model for processing the current image frame based on the at least one sub-network model group.
In a possible implementation, the second obtaining unit 82 is further configured to: obtaining a target test sample; testing each sub-network model group through the target test sample to obtain at least one test result; and selecting a target sub-network model group corresponding to a target test result from the at least one test result, and determining the image processing model according to the target sub-network model group, wherein the target test result is a test result meeting a first test condition in the at least one test result.
In a possible implementation, the second obtaining unit 82 is further configured to: determining a target search range of each non-key frame; each target search range comprises a search range of a plurality of first search dimensions in preset search dimensions; the first search dimension includes: a spatial search dimension for indicating an image processing model structure, and a temporal search dimension for indicating the fusion position and the fusion manner; and searching a subnetwork model meeting the first computing resource constraint condition in a pre-trained first hyper-network based on the target search range of each non-key frame, and determining the subnetwork model group according to the searched subnetwork model.
In a possible embodiment, the apparatus is further configured to: training a first hyper-network to be trained by: acquiring a first training sample set, wherein the first training sample set comprises a plurality of first training samples, and each first training sample comprises a plurality of non-key frames; extracting a plurality of groups of sub-networks to be trained from a first super-network to be trained; the number of the sub-networks in each group of sub-networks to be trained is the same as that of the non-key frames in each first training sample, and one sub-network to be trained correspondingly processes one non-key frame in the training sample; and training each group of sub-networks to be trained through the first training sample set, and obtaining the first super-network after training.
In a possible embodiment, the apparatus is further configured to: when the previous image frame is a key frame, determining an image processing model corresponding to the previous image frame by the following steps: determining a plurality of search ranges of a second search dimension matched with the previous image frame in a preset search dimension; the plurality of second search dimensions include a spatial search dimension for indicating an image processing model structure to which the previous image frame corresponds; searching a pre-trained second super network based on the search range of each second search dimension to obtain at least one second sub network model meeting a second computing resource constraint condition; wherein the second computing resource constraint is used to characterize the complexity of processing the key frame; and determining an image processing model corresponding to the previous image frame based on the at least one second sub-network model.
In a possible embodiment, the apparatus is further configured to: processing the preset test set through each second sub-network model to obtain a plurality of test results; wherein the test result is used for characterizing the prediction accuracy of the corresponding second sub-network model; and determining a second sub-network model corresponding to a target test result in the plurality of test results as an image processing model corresponding to the previous image frame, wherein the target test result is a test result meeting a second test condition in the plurality of test results.
In a possible embodiment, the apparatus is further configured to: updating the search range corresponding to each second search dimension under the condition that the target test result is not determined in the test results of the at least one second sub-network model; and searching the second super network according to the updated search range until the image processing model corresponding to the previous image frame is determined based on the second sub network model corresponding to the target test result under the condition that the target test result is determined in the test results of at least one second sub network model which meets the second computing resource constraint condition.
In a possible embodiment, the apparatus is further configured to: training a second hyper-network to be trained by: acquiring a second training sample set; the second training sample set comprises a plurality of second training samples; extracting a plurality of sub-network models for each second training sample in the second super-network to be trained; training the plurality of extracted sub-network models based on a plurality of second training samples in the second training sample set, and obtaining the second super-network after training.
In a possible implementation, the preset search dimension includes: a spatial search dimension and a temporal search dimension, the spatial search dimension including at least one of: model structure parameters, convolutional layer parameters, attention module parameters; the temporal search dimension includes: fusion parameters; the model structure parameters are used for representing the number of network modules required by the image processing model to be searched in the super-network; the convolutional layer parameters are used to characterize at least one of: the number of feature channels output by the network modules required by the image processing model to be searched in the super-network, the convolution kernel size of the convolution layers required by the image processing model to be searched in the super-network, and the number of groups of the convolution layers required by the image processing model to be searched in the super-network; the attention module parameter is used for indicating whether a preset attention module is used in each network module; the fusion parameter is used for indicating a fusion position and a fusion mode for fusing the image processing result of the previous image frame into the image processing model.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Corresponding to the image processing method in fig. 1, an embodiment of the present disclosure further provides a computer device 900, as shown in fig. 9, a schematic structural diagram of the computer device 900 provided in the embodiment of the present disclosure includes:
a processor 91, a memory 92, and a bus 93; the memory 92 is used for storing execution instructions and includes a memory 921 and an external memory 922. Here, the memory 921, also referred to as an internal memory, is configured to temporarily store operation data in the processor 91 and data exchanged with the external memory 922 such as a hard disk; the processor 91 exchanges data with the external memory 922 through the memory 921. When the computer device 900 operates, the processor 91 communicates with the memory 92 through the bus 93, so that the processor 91 executes the following instructions:
acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame;
acquiring an image processing model for processing the current image frame, and determining a fusion position and a fusion mode in the image processing model;
in the process of extracting the image characteristics of the current image frame by the image processing model, fusing the image characteristics output by the network module positioned in front of the fusion position in the image processing model and the image processing result according to the fusion mode to obtain a fusion result;
and inputting the fusion result to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the image processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the image processing method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (14)

1. An image processing method, comprising:
acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame;
determining an image frame type of the current image frame based on a video clip to which the current image frame belongs, wherein the image frame type comprises a non-key frame or a key frame;
determining a search range of a plurality of search dimensions matched with the current image frame based on the image frame type, and searching an image processing model meeting computing resource constraints for the current image frame in a pre-trained hyper-network based on the search range, wherein the search dimensions comprise: model depth parameters, convolutional layer parameters;
and inputting the image processing results of the current image frame and the previous image frame into the image processing model for processing to obtain the image processing result of the current image frame.
2. The method of claim 1, wherein determining a search range for a plurality of search dimensions matching the current image frame based on the image frame type and searching for an image processing model satisfying computational resource constraints for the current image frame in a pre-trained hyper-network based on the search range comprises:
determining a plurality of consecutive non-key frames containing the current image frame in the video segment if the image frame type of the current image frame is a non-key frame;
determining a target search range of each non-key frame; each target search range comprises a search range of a plurality of first search dimensions in preset search dimensions;
based on the target search range for each non-key frame, at least one sub-network model group satisfying a first computational resource constraint is searched in the first super-network, and the image processing model is determined according to the at least one sub-network model group.
3. The method of claim 2, wherein said determining the image processing model from the at least one sub-network model group comprises:
obtaining a target test sample;
testing each sub-network model group through the target test sample to obtain at least one test result;
and selecting a target sub-network model group corresponding to a target test result from the at least one test result, and determining the image processing model according to the target sub-network model group, wherein the target test result is a test result meeting a first test condition in the at least one test result.
4. The method according to any of claims 1 to 3, characterized in that in case the image frame type of the current image frame is a non-key frame, the first hyper-network to be trained is trained by:
acquiring a first training sample set, wherein the first training sample set comprises a plurality of first training samples, and each first training sample comprises a plurality of non-key frames;
extracting a plurality of groups of sub-networks to be trained from a first super-network to be trained; the number of the sub-networks in each group of sub-networks to be trained is the same as that of the non-key frames in each first training sample, and one sub-network to be trained correspondingly processes one non-key frame in the training sample;
and training each group of sub-networks to be trained through the first training sample set, and obtaining the first super-network after training.
5. The method of any of claims 1-4, wherein determining a search range for a plurality of search dimensions matching the current image frame based on the image frame type and searching for an image processing model satisfying computational resource constraints for the current image frame in a pre-trained hyper-network based on the search range comprises:
determining a plurality of search ranges of a second search dimension matched with the current image frame in a preset search dimension under the condition that the image frame type of the current image frame is a key frame; the plurality of second search dimensions include a spatial search dimension for indicating an image processing model structure to which the previous image frame corresponds;
searching a pre-trained second super network based on the search range of each second search dimension to obtain at least one second sub network model meeting a second computing resource constraint condition; wherein the second computing resource constraint is used to characterize the complexity of processing the key frame;
and determining an image processing model corresponding to the current image frame based on the at least one second sub-network model.
6. The method of claim 5, wherein said determining an image processing model corresponding to the current image frame based on the at least one second sub-network model comprises:
processing the preset test set through each second sub-network model to obtain a plurality of test results; wherein the test result is used for characterizing the prediction accuracy of the corresponding second sub-network model;
and determining a second sub-network model corresponding to a target test result as the image processing model in the plurality of test results, wherein the target test result is a test result meeting a second test condition in the plurality of test results.
7. The method of claim 6, further comprising:
updating the search range corresponding to each second search dimension under the condition that the target test result is not determined in the test results of the at least one second sub-network model;
and searching the second hyper-network according to the updated search range until the image processing model is determined based on the second sub-network model corresponding to the target test result under the condition that the target test result is determined in the test results of at least one second sub-network model which meets the second computing resource constraint condition.
8. The method according to any one of claims 1 to 7, characterized in that in case the image frame type of the current image frame is a key frame, the second super network to be trained is trained by:
acquiring a second training sample set; the second training sample set comprises a plurality of second training samples;
extracting a plurality of sub-network models for each second training sample in the second super-network to be trained;
training the plurality of extracted sub-network models based on a plurality of second training samples in the second training sample set, and obtaining the second super-network after training.
9. The method of any one of claims 1 to 8, wherein the model depth parameters comprise: model structure parameters;
the model structure parameters are used for representing the number of network modules required by the image processing model to be searched in the super network;
the convolutional layer parameters are used to characterize at least one of: the method comprises the steps of determining the number of characteristic channels output by a network module required by an image processing model to be searched in a super network, the size of a convolution kernel of a convolution layer required by the image processing model to be searched in the super network, and the number of groups of the convolution layer required by the image processing model to be searched for in the super network.
10. The method according to any one of claims 1 to 9, wherein the inputting the image processing results of the current image frame and the previous image frame into the image processing model for processing to obtain the image processing result of the current image frame comprises:
determining fusion parameters in the image processing model, wherein the fusion parameters include: fusion position and fusion mode; the image processing models corresponding to at least part of the image frames have different structures, and the fusion positions and fusion modes for data fusion in the image processing models corresponding to at least part of the image frames are different;
and processing the image processing results of the current image frame and the previous image frame according to the fusion parameters to obtain the image processing result of the current image frame.
11. The method according to claim 10, wherein the processing the image processing results of the current image frame and the previous image frame according to the fusion parameter to obtain the image processing result of the current image frame comprises:
extracting image features of the current image frame through the image processing model;
in the process of extracting the image characteristics of the current image frame by the image processing model, fusing the image characteristics output by the network module positioned in front of the fusion position in the image processing model and the image processing result of the previous image frame according to the fusion mode to obtain a fusion result;
and inputting the fusion result to a network module behind the fusion position in the image processing model for image processing to obtain an image processing result of the current image frame.
12. An image processing apparatus characterized by comprising:
the first acquisition unit is used for acquiring a current image frame and acquiring an image processing result of a previous image frame of the current image frame;
a first determining unit, configured to determine an image frame type of the current image frame based on a video clip to which the current image frame belongs, where the image frame type includes a non-key frame or a key frame;
a second determining unit, configured to determine, based on the image frame type, search ranges of a plurality of search dimensions that match the current image frame, and search, in a pre-trained hyper-network, for the current image frame, based on the search ranges, an image processing model that satisfies a computational resource constraint, where the search dimensions include: model depth parameters, convolutional layer parameters;
and the image processing unit is used for inputting the image processing results of the current image frame and the previous image frame into the image processing model for processing to obtain the image processing result of the current image frame.
13. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the image processing method according to any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the image processing method according to any one of claims 1 to 11.
CN202111265558.9A 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium Withdrawn CN113963175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111265558.9A CN113963175A (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110524188.XA CN112949662B (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium
CN202111265558.9A CN113963175A (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110524188.XA Division CN112949662B (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113963175A true CN113963175A (en) 2022-01-21

Family

ID=76233852

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111265558.9A Withdrawn CN113963175A (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium
CN202110524188.XA Active CN112949662B (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110524188.XA Active CN112949662B (en) 2021-05-13 2021-05-13 Image processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (2) CN113963175A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419738A (en) * 2022-03-29 2022-04-29 北京市商汤科技开发有限公司 Attitude detection method and apparatus, electronic device and storage medium
CN114648478A (en) * 2022-03-29 2022-06-21 北京小米移动软件有限公司 Image processing method, device, chip, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689362B (en) * 2021-10-27 2022-02-22 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6950123B2 (en) * 2002-03-22 2005-09-27 Intel Corporation Method for simultaneous visual tracking of multiple bodies in a closed structured environment
US10832135B2 (en) * 2017-02-10 2020-11-10 Samsung Electronics Co., Ltd. Automatic thresholds for neural network pruning and retraining
CN111406267B (en) * 2017-11-30 2024-06-04 谷歌有限责任公司 Neural architecture search using performance prediction neural networks
CN108320274A (en) * 2018-01-26 2018-07-24 东华大学 It is a kind of to recycle the infrared video colorization method for generating confrontation network based on binary channels
CN108985386A (en) * 2018-08-07 2018-12-11 北京旷视科技有限公司 Obtain method, image processing method and the corresponding intrument of image processing model
CN108989804B (en) * 2018-08-23 2021-04-27 杭州雄迈集成电路技术股份有限公司 Image coding method and device
CN109271990A (en) * 2018-09-03 2019-01-25 北京邮电大学 A kind of semantic segmentation method and device for RGB-D image
CN109410247A (en) * 2018-10-16 2019-03-01 中国石油大学(华东) A kind of video tracking algorithm of multi-template and adaptive features select
CN109167924B (en) * 2018-10-24 2021-05-11 清华-伯克利深圳学院筹备办公室 Video imaging method, system, device and storage medium based on hybrid camera
CN110427839B (en) * 2018-12-26 2022-05-06 厦门瞳景物联科技股份有限公司 Video target detection method based on multi-layer feature fusion
CN109934846B (en) * 2019-03-18 2023-06-06 南京信息工程大学 Depth integrated target tracking method based on time and space network
CN111553362B (en) * 2019-04-01 2023-05-05 上海卫莎网络科技有限公司 Video processing method, electronic device and computer readable storage medium
CN110175597A (en) * 2019-06-04 2019-08-27 北方工业大学 Video target detection method integrating feature propagation and aggregation
CN110555405B (en) * 2019-08-30 2022-05-06 北京迈格威科技有限公司 Target tracking method and device, storage medium and electronic equipment
CN112445823A (en) * 2019-09-04 2021-03-05 华为技术有限公司 Searching method of neural network structure, image processing method and device
CN111062382A (en) * 2019-10-30 2020-04-24 北京交通大学 Channel pruning method for target detection network
CN111340220B (en) * 2020-02-25 2023-10-20 北京百度网讯科技有限公司 Method and apparatus for training predictive models
CN111738418A (en) * 2020-06-19 2020-10-02 北京百度网讯科技有限公司 Training method and device for hyper network
CN111967382A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Age estimation method, and training method and device of age estimation model
CN112149545B (en) * 2020-09-16 2024-04-09 珠海格力电器股份有限公司 Sample generation method, device, electronic equipment and storage medium
CN112651499A (en) * 2020-12-28 2021-04-13 浙江大学 Structural model pruning method based on ant colony optimization algorithm and interlayer information
CN112686856B (en) * 2020-12-29 2024-07-09 杭州优视泰信息技术有限公司 Real-time enteroscopy polyp detection device based on deep learning
CN112767534B (en) * 2020-12-31 2024-02-09 北京达佳互联信息技术有限公司 Video image processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112949662B (en) 2021-11-16
CN112949662A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN112949662B (en) Image processing method and device, computer equipment and storage medium
CN109829433B (en) Face image recognition method and device, electronic equipment and storage medium
CN112949842B (en) Neural network structure searching method, apparatus, computer device and storage medium
CN110263916B (en) Data processing method and device, storage medium and electronic device
CN110263733B (en) Image processing method, nomination evaluation method and related device
CN111599346B (en) Speaker clustering method, device, equipment and storage medium
CN111540375B (en) Training method of audio separation model, and separation method and device of audio signals
CN112381227B (en) Neural network generation method and device, electronic equipment and storage medium
CN112200041A (en) Video motion recognition method and device, storage medium and electronic equipment
CN111259256B (en) Content processing method, content processing device, computer readable storage medium and computer equipment
CN112884868B (en) Three-dimensional mesh vertex feature determination method, skeleton covering method and related device
CN111783692A (en) Action recognition method and device, electronic equipment and storage medium
CN112818995A (en) Image classification method and device, electronic equipment and storage medium
CN111309946A (en) Established file optimization method and device
CN114419738B (en) Attitude detection method and apparatus, electronic device and storage medium
CN107798331B (en) Method and device for extracting characteristics of off-zoom image sequence
CN115759192A (en) Neural network acceleration method, device, equipment, chip and storage medium
CN111191065A (en) Homologous image determining method and device
CN116631060A (en) Gesture recognition method and device based on single frame image
CN116502700A (en) Skin detection model training method, skin detection device and electronic equipment
CN113822291A (en) Image processing method, device, equipment and storage medium
CN114897126A (en) Time delay prediction method and device, electronic equipment and storage medium
CN113537134A (en) Target object identification method and device, storage medium and electronic device
CN114566160A (en) Voice processing method and device, computer equipment and storage medium
CN111667028A (en) Reliable negative sample determination method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20220121)