CN111291635A - Artificial intelligence detection method and device, terminal and computer readable storage medium - Google Patents
- Publication number
- CN111291635A (application CN202010057718.XA)
- Authority
- CN
- China
- Prior art keywords
- fine-grained
- called
- face
- execution sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an artificial intelligence detection method and device, a terminal and a computer readable storage medium, which relate to the technical field of artificial intelligence. The method comprises the following steps: determining a corresponding target application scene among a plurality of preset application scenes according to information to be detected; acquiring a first execution sequence of a plurality of AI functions to be called corresponding to the target application scene; determining a second execution sequence according to the attribute information of each AI function to be called; obtaining a third execution sequence according to the association relation between the first execution sequence and the second execution sequence; and calling all fine-grained modules required by the AI functions to be called according to the third execution sequence, where a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first. The technical scheme of the invention reduces the workload of developing, operating and maintaining AI detection.
Description
[ technical field ]
The present invention relates to the field of artificial intelligence technologies, and in particular, to an artificial intelligence detection method and apparatus, a terminal, and a computer-readable storage medium.
[ background of the invention ]
In the existing AI detection technology, each functional module is strongly closed, and much of the same data processing is repeated inside every functional module. This not only wastes work and resources, but also causes data conflicts when the results that different functional modules obtain from the same data processing are inconsistent.
To solve these problems, developers have to spend time and cost manually checking and debugging against the development documents, which greatly reduces the efficiency of AI technology development and maintenance.
Therefore, how to reduce the resource cost of AI detection has become a technical problem to be solved urgently.
[ summary of the invention ]
The embodiment of the invention provides an artificial intelligence detection method and device, a terminal and a computer readable storage medium, and aims to solve the technical problem of repeated resource waste of AI detection in the related art.
In a first aspect, an embodiment of the present invention provides an artificial intelligence detection method, including: acquiring information to be detected; determining a corresponding target application scene among a plurality of preset application scenes according to the information to be detected; acquiring a first execution sequence of a plurality of AI functions to be called corresponding to the target application scene; determining, according to the attribute information of each AI function to be called, a plurality of fine-grained modules required to execute each AI function to be called and a second execution sequence of the fine-grained modules; sorting all fine-grained modules required by the AI functions to be called according to the association relation between the first execution sequence and the second execution sequence to obtain a third execution sequence; and calling all fine-grained modules required by the AI functions to be called according to the third execution sequence, where a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first.
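For illustration, the following Python sketch shows one way the three execution orders and the first-call reuse could fit together. The registry names (SCENE_FUNCTIONS, FUNCTION_MODULES), the dictionary layout, and the cache keyed by module name are assumptions made for the sketch, not the patent's actual implementation.

```python
# Illustrative sketch, not the patent's implementation. Scenes map to an
# ordered list of AI functions (first execution order); each AI function
# maps to an ordered list of fine-grained modules (second execution order).
# Flattening the two orders yields the third execution order.
from typing import Callable, Dict, List

SCENE_FUNCTIONS: Dict[str, List[str]] = {  # assumed scene and function names
    "micro_expression_of_specified_object": [
        "identity_recognition", "micro_expression_recognition",
    ],
}

FUNCTION_MODULES: Dict[str, List[str]] = {  # assumed module names
    "identity_recognition": ["face_detection", "face_recognition"],
    "micro_expression_recognition": ["face_detection", "micro_expression"],
}

def third_execution_order(scene: str) -> List[str]:
    """Arrange all required modules according to the first and second orders."""
    order: List[str] = []
    for function in SCENE_FUNCTIONS[scene]:
        order.extend(FUNCTION_MODULES[function])
    # e.g. face_detection, face_recognition, face_detection, micro_expression
    return order

def run_detection(scene: str, modules: Dict[str, Callable], info) -> Dict[str, object]:
    """Call modules in the third order; a module's first result is reused later."""
    results: Dict[str, object] = {}
    for name in third_execution_order(scene):
        if name not in results:          # first call: generate the result
            results[name] = modules[name](info)
        # every later call reuses results[name] directly
    return results
```

Note that in this sketch face_detection appears twice in the flattened order but executes only once.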
In the above embodiment of the present invention, optionally, before the step of acquiring the information to be detected, the method further includes: setting, for each preset application scene according to first setting information, the corresponding multiple AI functions to be called and the first execution sequence for executing them.
In the above embodiment of the present invention, optionally, before the step of acquiring the information to be detected, the method further includes: setting, according to second setting information, the plurality of fine-grained modules required for each AI function to be called and the second execution sequence for calling the required fine-grained modules.
In the above embodiment of the present invention, optionally, the method further includes: and storing the corresponding relation between each AI function to be called and the required fine-grained modules into the attribute information of each AI function to be called.
In the above embodiment of the present invention, optionally, any of the fine-grained modules includes several neural networks, and the output of each of these neural networks is usable by all of the fine-grained modules.
In the above embodiment of the present invention, optionally, when the target application scene requires recognizing the micro-expression of a specified object, the multiple AI functions to be called include an identity recognition function and a micro-expression recognition function, where the identity recognition function is executed by a face detection fine-grained module and a face recognition fine-grained module, and the micro-expression recognition function is executed by the face detection fine-grained module and a micro-expression recognition fine-grained module.
In the above embodiment of the present invention, optionally, the face detection fine-grained module includes a first neural network, a second neural network, a third neural network, and a fourth neural network, where the input of the first neural network is the information to be detected, the output of the first neural network is a plurality of first face frames and corresponding first confidences, and a plurality of second face frames and corresponding second confidences are screened out from the plurality of first face frames and the corresponding first confidences through a soft NMS algorithm; the input of the second neural network is the information to be detected, the plurality of second face frames and the corresponding second confidences, the output of the second neural network is a plurality of third face frames and corresponding third confidences, and a plurality of fourth face frames and corresponding fourth confidences are screened out from the plurality of third face frames and the corresponding third confidences through a soft NMS algorithm; the input of the third neural network is the information to be detected, the fourth face frames and the corresponding fourth confidences, the output of the third neural network is face key points, fifth face frames and corresponding fifth confidences, and a plurality of sixth face frames and corresponding sixth confidences are screened out from the fifth face frames and the corresponding fifth confidences through a soft NMS algorithm; the input of the fourth neural network is the information to be detected and the face key points, and the output of the fourth neural network is the face key points after position correction.
In a second aspect, an embodiment of the present invention provides an artificial intelligence detection apparatus, including: an information-to-be-detected acquisition unit, configured to acquire information to be detected; an application scene determining unit, configured to determine a corresponding target application scene among a plurality of preset application scenes according to the information to be detected; a first execution sequence determining unit, configured to obtain a first execution sequence of a plurality of AI functions to be called corresponding to the target application scenario; a fine-grained module and second execution order determining unit, configured to determine, according to the attribute information of each AI function to be called, a plurality of fine-grained modules required to execute each AI function to be called and a second execution order of the plurality of fine-grained modules; a third execution sequence determining unit, configured to sort all fine-grained modules required by the multiple AI functions to be called according to the association relation between the first execution sequence and the second execution sequence to obtain a third execution sequence; and a fine-grained module calling unit, configured to call all fine-grained modules required by the multiple AI functions to be called according to the third execution sequence, where a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first.
In the above embodiment of the present invention, optionally, the method further includes: and the first setting unit is used for setting the corresponding multiple to-be-called AI functions and the first execution sequence used for executing the corresponding multiple to-be-called AI functions for each preset application scene according to first setting information before the to-be-detected information acquisition unit acquires the to-be-detected information.
In the above embodiment of the present invention, optionally, the method further includes: and a second setting unit, configured to set, for each to-be-called AI function, the required fine-grained modules and the second execution order for calling the required fine-grained modules in multiple preset fine-grained modules according to second setting information before the to-be-detected information obtaining unit obtains the to-be-detected information.
In the above embodiment of the present invention, optionally, the method further includes: and the storage unit is used for storing the corresponding relation between each AI function to be called and the required fine-grained modules into the attribute information of each AI function to be called.
In the above embodiment of the present invention, optionally, any of the fine-grained modules includes several neural networks, and the output of each of these neural networks is usable by all of the fine-grained modules.
In the above embodiment of the present invention, optionally, when the target application scene requires recognizing the micro-expression of a specified object, the multiple AI functions to be called include an identity recognition function and a micro-expression recognition function, where the identity recognition function is executed by a face detection fine-grained module and a face recognition fine-grained module, and the micro-expression recognition function is executed by the face detection fine-grained module and a micro-expression recognition fine-grained module.
In the above embodiment of the present invention, optionally, the face detection fine-grained module includes a first neural network, a second neural network, a third neural network, and a fourth neural network, where the input of the first neural network is the information to be detected, the output of the first neural network is a plurality of first face frames and corresponding first confidences, and a plurality of second face frames and corresponding second confidences are screened out from the plurality of first face frames and the corresponding first confidences through a soft NMS algorithm; the input of the second neural network is the information to be detected, the plurality of second face frames and the corresponding second confidences, the output of the second neural network is a plurality of third face frames and corresponding third confidences, and a plurality of fourth face frames and corresponding fourth confidences are screened out from the plurality of third face frames and the corresponding third confidences through a soft NMS algorithm; the input of the third neural network is the information to be detected, the fourth face frames and the corresponding fourth confidences, the output of the third neural network is face key points, fifth face frames and corresponding fifth confidences, and a plurality of sixth face frames and corresponding sixth confidences are screened out from the fifth face frames and the corresponding fifth confidences through a soft NMS algorithm; the input of the fourth neural network is the information to be detected and the face key points, and the output of the fourth neural network is the face key points after position correction.
In a third aspect, an embodiment of the present invention provides a terminal, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the method of any one of the implementations of the first aspect above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions for performing the method flow described in any one of the first aspect.
According to the above technical scheme, aiming at the technical problem of repeated waste of resources in AI detection in the related art, information to be detected is first acquired; the information to be detected may be information, input by a user or automatically obtained by the system in any application scene, that is used for artificial intelligence detection. The plurality of preset application scenes include, but are not limited to, identity recognition, micro-expression recognition, road pedestrian detection, key region-of-interest detection and the like, and the information to be detected can indicate the required artificial intelligence detection function, so that the corresponding target application scene is selected.
An application scenario typically includes a plurality of AI functions to be invoked. For example, in an identity recognition application scenario, face detection is generally required, and after a face is detected, face recognition is performed on the face to recognize the identity of the face, so that the identity recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a face recognition function.
For another example, in a micro expression recognition application scenario, face detection is generally required, and after a face is detected, micro expression recognition is performed on the face, so that the micro expression recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a micro expression recognition function.
In the above process, a plurality of AI functions to be called in each target application scenario need to be executed according to the corresponding first execution order.
In the related art, different closed units are provided for different application scenes; for example, an identity recognition unit is provided for the identity recognition application scene, and a micro-expression recognition unit is provided for the micro-expression recognition application scene. Combining the above examples, the identity recognition unit and the micro-expression recognition unit both need to be equipped with a face detection function. Then, in an application scene where the micro-expression of a specified object needs to be recognized, both the identity recognition unit and the micro-expression recognition unit need to be called, and each performs the face detection function. This causes the face detection function to be executed repeatedly, which hurts AI detection efficiency. Meanwhile, setting up the same face detection function many times during development increases development difficulty and cost, and correspondingly increases subsequent maintenance cost.
In view of the above, in the technical solution of the present invention, in order to solve the above problem, the basic granularity of the AI detection may be reduced from the closed unit corresponding to the application scenario to the AI function level, and specifically, the closed unit corresponding to the application scenario may be split into a plurality of fine-grained modules, where each fine-grained module is used to execute an AI function. By combining the above example, a face detection fine-grained module, a face recognition fine-grained module and a micro expression recognition fine-grained module can be provided, in an application scene in which a micro expression of a specified object needs to be recognized, the face detection fine-grained module and the face recognition fine-grained module replace an identity recognition unit, and the face detection fine-grained module and the micro expression recognition fine-grained module replace a micro expression recognition unit.
The second execution sequence corresponding to the identity recognition AI function can be set such that the face detection fine-grained module comes before the face recognition fine-grained module, and the second execution sequence corresponding to the micro-expression recognition AI function can be set such that the face detection fine-grained module comes before the micro-expression recognition fine-grained module.
Then, for identity recognition, the face detection fine-grained module is called first to detect the face, and the face recognition fine-grained module is then called to recognize the identity of the face. That is, the third execution order in the application scene that needs to recognize the micro-expression of a specified object is, from first to last: the face detection fine-grained module, the face recognition fine-grained module, and the micro-expression recognition fine-grained module, the latter two both using the face detection result of the face detection fine-grained module.
Furthermore, in the technical scheme of the invention, the closed units corresponding to application scenes are deeply split, and the AI detection of an application scene is realized by combining the split results, so that sharing fine-grained modules reduces the workload and cost of developing, operating and maintaining AI detection and improves the efficiency of AI technology development and maintenance. Meanwhile, since different AI functions share the same fine-grained module, a fine-grained module produces only one result when processing the same data during the execution of different AI functions, which avoids the data conflict that arises in the related art, where each closed unit operates independently and may produce inconsistent results when processing the same data.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 shows a flow diagram of an artificial intelligence detection method according to an embodiment of the invention;
FIG. 2 shows a flow diagram of an artificial intelligence detection method according to another embodiment of the invention;
FIG. 3 illustrates a flow diagram of an artificial intelligence detection method according to yet another embodiment of the invention;
FIG. 4 shows a block diagram of an artificial intelligence detection apparatus according to an embodiment of the invention;
FIG. 5 shows a block diagram of a terminal according to an embodiment of the invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
FIG. 1 shows a flow diagram of an artificial intelligence detection method according to an embodiment of the invention.
As shown in fig. 1, a flow of an artificial intelligence detection method according to an embodiment of the present invention includes:
Step 102, acquiring information to be detected. The information to be detected may be information, input by a user or automatically obtained by the system in any application scene, that is used for artificial intelligence detection.
Step 104, determining a corresponding target application scene among a plurality of preset application scenes according to the information to be detected.
The multiple preset application scenes include, but are not limited to, identity recognition, micro-expression recognition, road pedestrian detection, key region-of-interest detection and the like, and the information to be detected can indicate the required artificial intelligence detection function, so that the corresponding target application scene is selected.
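As a simple illustration, the scene selection could be a tag lookup on the information to be detected; the field name requested_function and the scene names below are assumptions of the sketch, not details given in the patent.

```python
# Hypothetical routing sketch: the information to be detected carries a tag
# naming the required detection function, which selects the target scene.
PRESET_SCENES = {"identity_recognition", "micro_expression_recognition",
                 "road_pedestrian_detection", "key_roi_detection"}

def select_target_scene(info: dict) -> str:
    scene = info.get("requested_function")  # assumed field name
    if scene not in PRESET_SCENES:
        raise ValueError(f"no preset application scene for {scene!r}")
    return scene
```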
Step 106, acquiring a first execution sequence of a plurality of AI functions to be called corresponding to the target application scene.
An application scenario typically includes a plurality of AI functions to be invoked.
For example, in an identity recognition application scenario, face detection is generally required, and after a face is detected, face recognition is performed on the face to recognize the identity of the face, so that the identity recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a face recognition function.
For another example, in a micro expression recognition application scenario, face detection is generally required, and after a face is detected, micro expression recognition is performed on the face, so that the micro expression recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a micro expression recognition function.
In the above process, a plurality of AI functions to be called in each target application scenario need to be executed according to the corresponding first execution order.
Step 108, determining, according to the attribute information of each AI function to be called, a plurality of fine-grained modules required to execute each AI function to be called and a second execution sequence of the plurality of fine-grained modules.
Step 110, sorting all fine-grained modules required by the plurality of AI functions to be called according to the association relation between the first execution sequence and the second execution sequence to obtain a third execution sequence.
Step 112, calling all fine-grained modules required by the plurality of AI functions to be called according to the third execution sequence, where a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first.
In the related art, different closed units are provided for different application scenes; for example, an identity recognition unit is provided for the identity recognition application scene, and a micro-expression recognition unit is provided for the micro-expression recognition application scene. Combining the above examples, the identity recognition unit and the micro-expression recognition unit both need to be equipped with a face detection function. Then, in an application scene where the micro-expression of a specified object needs to be recognized, both the identity recognition unit and the micro-expression recognition unit need to be called, and each performs the face detection function. This causes the face detection function to be executed repeatedly, which hurts AI detection efficiency. Meanwhile, setting up the same face detection function many times during development increases development difficulty and cost, and correspondingly increases subsequent maintenance cost.
In view of the above, in the technical solution of the present invention, in order to solve the above problem, the basic granularity of the AI detection may be reduced from the closed unit corresponding to the application scenario to the AI function level, and specifically, the closed unit corresponding to the application scenario may be split into a plurality of fine-grained modules, where each fine-grained module is used to execute an AI function. By combining the above example, a face detection fine-grained module, a face recognition fine-grained module and a micro expression recognition fine-grained module can be provided, in an application scene in which a micro expression of a specified object needs to be recognized, the face detection fine-grained module and the face recognition fine-grained module replace an identity recognition unit, and the face detection fine-grained module and the micro expression recognition fine-grained module replace a micro expression recognition unit.
The second execution sequence corresponding to the identity recognition AI function can be set such that the face detection fine-grained module comes before the face recognition fine-grained module, and the second execution sequence corresponding to the micro-expression recognition AI function can be set such that the face detection fine-grained module comes before the micro-expression recognition fine-grained module.
Then, for identity recognition, the face detection fine-grained module is called first to detect the face, and the face recognition fine-grained module is then called to recognize the identity of the face. That is, the third execution order in the application scene that needs to recognize the micro-expression of a specified object is, from first to last: the face detection fine-grained module, the face recognition fine-grained module, and the micro-expression recognition fine-grained module, the latter two both using the face detection result of the face detection fine-grained module. For a fine-grained module that appears multiple times, its execution result is generated when it is called for the first time, and that first execution result can be reused directly in any call after the first; this avoids repeated calls to the same fine-grained module, simplifies the AI detection process, and improves AI detection efficiency.
Furthermore, in the technical scheme of the invention, the closed units corresponding to application scenes are deeply split, and the AI detection of an application scene is realized by combining the split results, so that sharing fine-grained modules reduces the workload and cost of developing, operating and maintaining AI detection and improves the efficiency of AI technology development and maintenance. Meanwhile, since different AI functions share the same fine-grained module, a fine-grained module produces only one result when processing the same data during the execution of different AI functions, which avoids the data conflict that arises in the related art, where each closed unit operates independently and may produce inconsistent results when processing the same data.
FIG. 2 shows a flow diagram of an artificial intelligence detection method according to another embodiment of the invention.
As shown in fig. 2, a flow of the artificial intelligence detection method according to another embodiment of the invention includes:
Step 202, setting, for each preset application scene according to first setting information, a plurality of corresponding AI functions to be called and a first execution sequence for executing them.
The plurality of preset application scenes include, but are not limited to, identity recognition, micro-expression recognition, road pedestrian detection, key region-of-interest detection and the like, and the information to be detected can indicate the required target application scene. The AI functions to be called, and how many of them are needed, differ from one application scene to another, and these differences can be set according to the first setting information, where the first setting information includes but is not limited to operation information from development and maintenance personnel, or information the system derives by evaluating actual conditions.
The above technical solution improves the flexibility of configuring AI functions and helps meet actual AI detection requirements.
Step 204, setting, in a plurality of preset fine-grained modules according to second setting information, the plurality of fine-grained modules required for each AI function to be called and a second execution sequence for calling them.
The fine-grained modules can realize different AI functions under different execution orders, so a second execution sequence specific to each type of AI function to be called can be set, and the AI function to be called is realized by executing its fine-grained modules in that second execution sequence.
Of course, in actual application, as AI functions and underlying system data are updated, the fine-grained modules under an AI function to be called, or their execution order, may need to change to adapt to the update. Therefore, operation information from development and maintenance personnel, updates to the AI functions, updates to the underlying system data, information the system derives by evaluating actual conditions, and the like can all serve as second setting information for adaptively adjusting the fine-grained modules under the AI function to be called and their second execution sequence, which gives the AI functions high adaptability and improves the convenience they bring.
As described above, the corresponding relationship between each to-be-called AI function and the required fine-grained modules may be stored in the attribute information of the to-be-called AI function, so as to facilitate calling when the AI function is executed.
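For illustration, the stored correspondence could be as simple as the structure below; the field names are assumptions of the sketch.

```python
# Hypothetical attribute info of one AI function to be called: the required
# fine-grained modules are listed in call order (the second execution order).
identity_recognition_attributes = {
    "name": "identity_recognition",
    "required_modules": ["face_detection", "face_recognition"],
}

def required_modules(attribute_info: dict) -> list:
    """At execution time the correspondence is a plain attribute lookup."""
    return attribute_info["required_modules"]
```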
Step 208, acquiring information to be detected.
The information to be detected may be information, input by a user or automatically obtained by the system in any application scene, that is used for artificial intelligence detection.
The multiple preset application scenes include, but are not limited to, identity recognition, micro-expression recognition, road pedestrian detection, key region-of-interest detection and the like, and the information to be detected can indicate the required artificial intelligence detection function, so that the corresponding target application scene is selected.
An application scenario typically includes a plurality of AI functions to be invoked.
For example, in an identity recognition application scenario, face detection is generally required, and after a face is detected, face recognition is performed on the face to recognize the identity of the face, so that the identity recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a face recognition function.
For another example, in a micro expression recognition application scenario, face detection is generally required, and after a face is detected, micro expression recognition is performed on the face, so that the micro expression recognition application scenario corresponds to two to-be-called AI functions, namely a face detection function and a micro expression recognition function.
In the above process, a plurality of AI functions to be called in each target application scenario need to be executed according to the corresponding first execution order.
The first execution sequence is the upper layer and defines the execution order of the plurality of AI functions to be called, while the second execution sequence is the lower layer and defines the execution order of the plurality of fine-grained modules within each AI function to be called, so all the fine-grained modules of the plurality of AI functions to be called can be arranged into a third execution sequence according to this upper-lower relation between the first and second execution sequences. It should be appreciated that in the third execution order, the same fine-grained module may appear multiple times at different positions, or may appear only once. For a fine-grained module that appears multiple times, its execution result is generated when it is called for the first time, and that first execution result can be reused directly in any call after the first; this avoids repeated calls to the same fine-grained module, simplifies the AI detection process, and improves AI detection efficiency.
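A minimal sketch of this first-call-generates, later-calls-reuse behavior as a per-module wrapper follows; keying the stored result by an input identifier is an assumption about how repeated calls on the same data would be recognized.

```python
# Sketch: wrap a fine-grained module so its execution result is generated on
# the first call and reused directly on every call after the first.
def reuse_first_result(module_fn):
    results = {}

    def wrapper(input_key, *args):
        if input_key not in results:           # first call: actually execute
            results[input_key] = module_fn(input_key, *args)
        return results[input_key]              # later calls: reuse directly

    return wrapper

@reuse_first_result
def face_detection(input_key):
    print(f"running face detection on {input_key}")  # printed once per input
    return {"face_frames": []}                       # placeholder result
```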
Meanwhile, since different AI functions share the same fine-grained module, a fine-grained module produces only one result when processing the same data during the execution of different AI functions, which avoids the data conflict that arises in the related art, where each closed unit operates independently and may produce inconsistent results when processing the same data.
It is to be understood that any of the fine-grained modules includes several neural networks, whose types include, but are not limited to, cascade networks, self-organizing neural networks, learning vector quantization networks, radial basis function neural networks, and the like. The output of each of these neural networks may be used by all fine-grained modules.
That is to say, a fine-grained module can itself be split into smaller execution units, and the execution result of each execution unit can also be called as existing data by the fine-grained module itself or by other fine-grained modules, which further reduces the amount of repeated computation in the AI detection process and greatly improves AI detection efficiency.
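The reuse of an execution unit's output across modules could be sketched as below; the shared store and its (network name, input) key are assumptions of the sketch.

```python
# Sketch: every internal neural network's output is recorded in a store
# visible to all fine-grained modules, so another module can take it as
# existing data instead of recomputing it.
class FineGrainedModule:
    shared_outputs: dict = {}                    # one store for all modules

    def run_network(self, net_name, net_fn, input_key, *inputs):
        key = (net_name, input_key)
        if key not in FineGrainedModule.shared_outputs:
            FineGrainedModule.shared_outputs[key] = net_fn(*inputs)
        return FineGrainedModule.shared_outputs[key]
```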
With reference to the foregoing example, when the target application scene requires recognizing the micro-expression of a specified object, the multiple AI functions to be invoked include an identity recognition function and a micro-expression recognition function, where the identity recognition function is executed by a face detection fine-grained module and a face recognition fine-grained module, and the micro-expression recognition function is executed by the face detection fine-grained module and a micro-expression recognition fine-grained module. The face detection fine-grained module includes a first neural network, a second neural network, a third neural network and a fourth neural network; its working process is described in detail with reference to the embodiment in fig. 3.
As shown in fig. 3, the artificial intelligence detection method according to still another embodiment of the present invention includes:
In one possible design, the first, second, third, and fourth neural networks described above and below are cascaded networks.
Step 302, inputting the information to be detected into the first neural network, and outputting a plurality of first face frames and corresponding first confidences through the first neural network.
Step 304, screening a plurality of second face frames and corresponding second confidences from the plurality of first face frames and the corresponding first confidences through the soft NMS algorithm (a sketch of this screening step follows the step list below).
Step 306, inputting the information to be detected, the second face frames and the corresponding second confidences into the second neural network, and outputting a plurality of third face frames and corresponding third confidences through the second neural network.
Step 308, screening a plurality of fourth face frames and corresponding fourth confidences from the plurality of third face frames and the corresponding third confidences through the soft NMS algorithm.
Step 310, inputting the information to be detected, the fourth face frames and the corresponding fourth confidences into the third neural network, and outputting face key points, fifth face frames and corresponding fifth confidences through the third neural network.
Step 312, screening a plurality of sixth face frames and corresponding sixth confidences from the plurality of fifth face frames and the corresponding fifth confidences through the soft NMS algorithm.
Step 314, inputting the information to be detected and the face key points into the fourth neural network, and outputting the position-corrected face key points through the fourth neural network.
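The soft NMS screening used in steps 304, 308 and 312 is not spelled out in the text; the sketch below shows the standard Gaussian-decay variant of soft NMS (Bodla et al., 2017), with sigma and the confidence threshold as assumed parameter values.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.25):
    """Gaussian soft NMS sketch: boxes is (N, 4) [x1, y1, x2, y2], scores (N,)."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    kept_boxes, kept_scores = [], []
    while scores.size > 0:
        i = int(scores.argmax())                 # most confident remaining frame
        kept_boxes.append(boxes[i])
        kept_scores.append(scores[i])
        # IoU of the selected frame with every remaining frame
        x1 = np.maximum(boxes[i, 0], boxes[:, 0])
        y1 = np.maximum(boxes[i, 1], boxes[:, 1])
        x2 = np.minimum(boxes[i, 2], boxes[:, 2])
        y2 = np.minimum(boxes[i, 3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas - inter)
        # Gaussian decay: overlapping frames keep a reduced confidence
        scores = scores * np.exp(-(iou ** 2) / sigma)
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        keep = scores > score_thresh             # drop frames below threshold
        boxes, scores = boxes[keep], scores[keep]
    return np.array(kept_boxes), np.array(kept_scores)
```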
The first, second, third and fourth neural networks are named PNet, RNet, ONet and LNet, respectively; the above principle is analyzed in detail below.
PNet is the first-level network. It directly receives the original image (i.e., the information to be detected), builds an image pyramid from it (successively shrinking the original image by the same ratio), and for each shrunken image outputs the detected face frames (each comprising width, height and the top-left corner point) and the confidence corresponding to each face frame. A soft NMS algorithm sits between PNet and RNet; it merges overlapping face frames into a single larger frame and, according to the confidences, removes face frames whose confidence is lower than a first specified threshold.
At this point, PRect and PConf are output.
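The pyramid step could look like the sketch below; the scale factor 0.709 and the 12-pixel minimum side are conventions borrowed from MTCNN-style detectors, not values given in this document.

```python
import cv2  # assumed dependency for resizing

def image_pyramid(image, min_size=12, factor=0.709):
    """Yield (scale, resized_image) pairs, shrinking by the same ratio each time."""
    h, w = image.shape[:2]
    scale = 1.0
    while min(h, w) * scale >= min_size:
        yield scale, cv2.resize(image, (int(w * scale), int(h * scale)))
        scale *= factor
```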
The second-layer network is RNet. Its input is obtained by cropping the original image according to the PRect selected by the previous layer, yielding a batch of candidate face regions (the face parts are cut out of the original image, though some crops may contain no face) scaled to 24x24. The output obtained by feeding this batch of preprocessed face frames into RNet is processed by the soft NMS algorithm, and face frames whose confidence is lower than a second specified threshold are screened out.
At this point, RRect and RConf are output.
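The crop-and-scale preprocessing described above might be sketched as follows; clamping frames to the image bounds and assuming three-channel images are choices of the sketch. The same helper with size=48 would serve the ONet stage described next.

```python
import cv2
import numpy as np

def crop_and_scale(image, rects, size=24):
    """Cut each face frame out of the original image and scale it to size x size."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in np.asarray(rects, dtype=int):
        x1, y1 = max(x1, 0), max(y1, 0)          # clamp to image bounds
        x2, y2 = min(x2, w), min(y2, h)
        if x2 > x1 and y2 > y1:                  # skip degenerate frames
            crops.append(cv2.resize(image[y1:y2, x1:x2], (size, size)))
    return np.stack(crops) if crops else np.empty((0, size, size, 3))
```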
The third-layer network is ONet. Its input is obtained by cropping the original image into a batch of face frames according to the RRect screened out by the previous layer and scaling them to 48x48. After RRect and RConf are input into ONet, the resulting output is processed by the soft NMS algorithm, and face frames whose confidence is lower than a third specified threshold are screened out, giving ORect and OConf. This layer is already very accurate: if one face is present, only one frame is output, and if two faces are present, two frames are output. Meanwhile, ONet also outputs face key points, which at least comprise two pupil points, a nose point and two mouth corner points.
At this point, this layer of the network outputs face frames, face frame confidences and face key points.
The fourth-layer network, LNet, takes the face key points and the original image as input; it fine-tunes the positions of the face key points and outputs the corrected face key points.
Finally, combining the four layers of networks yields three kinds of information about the original image: the face frames, the face frame confidences and the face key points.
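Putting the four layers together, the chaining of outputs could be sketched as below, reusing the soft_nms sketch above; the network objects are assumed callables, and the per-layer confidence thresholds are folded into soft_nms for brevity.

```python
def detect_faces(image, pnet, rnet, onet, lnet):
    """Sketch of the four-layer cascade: PNet -> RNet -> ONet -> LNet."""
    p_rect, p_conf = soft_nms(*pnet(image))                   # PRect, PConf
    r_rect, r_conf = soft_nms(*rnet(image, p_rect, p_conf))   # RRect, RConf
    keypoints, o_rect, o_conf = onet(image, r_rect, r_conf)
    o_rect, o_conf = soft_nms(o_rect, o_conf)                 # ORect, OConf
    keypoints = lnet(image, keypoints)                        # position-corrected
    return o_rect, o_conf, keypoints
```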
The final output of the four-layer network, as well as the output of each individual layer, can be used as valid data on which other neural networks and other fine-grained modules base their computations, which reduces the overall amount of computation in the AI detection process and improves AI detection efficiency.
FIG. 4 shows a block diagram of an artificial intelligence detection apparatus according to an embodiment of the invention.
As shown in fig. 4, an artificial intelligence detection apparatus 400 according to an embodiment of the present invention includes: an information-to-be-detected acquisition unit 402, configured to acquire information to be detected; an application scene determining unit 404, configured to determine, according to the information to be detected, a corresponding target application scene among a plurality of preset application scenes; a first execution sequence determining unit 406, configured to obtain a first execution sequence of a plurality of AI functions to be called corresponding to the target application scenario; a fine-grained module and second execution order determining unit 408, configured to determine, according to the attribute information of each AI function to be called, a plurality of fine-grained modules required for executing each AI function to be called and a second execution order of the plurality of fine-grained modules; a third execution order determining unit 410, configured to sort all fine-grained modules required by the multiple AI functions to be called according to the association relation between the first execution order and the second execution order to obtain a third execution order; and a fine-grained module calling unit 412, configured to call all fine-grained modules required by the multiple AI functions to be called according to the third execution order, where a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first.
In the above embodiment of the present invention, optionally, the method further includes: a first setting unit, configured to set, for each preset application scene, the corresponding multiple to-be-called AI functions and the first execution sequence used to execute the corresponding multiple to-be-called AI functions according to first setting information before the to-be-detected information obtaining unit 402 obtains the to-be-detected information.
In the above embodiment of the present invention, optionally, the method further includes: a second setting unit, configured to set, before the to-be-detected information obtaining unit 402 obtains the to-be-detected information, the plurality of fine-grained modules required for each AI function to be called in a plurality of preset fine-grained modules according to second setting information, and the second execution order used to call the plurality of required fine-grained modules.
In the above embodiment of the present invention, optionally, the method further includes: and the storage unit is used for storing the corresponding relation between each AI function to be called and the required fine-grained modules into the attribute information of each AI function to be called.
In the above embodiment of the present invention, optionally, any of the fine-grained modules includes several neural networks, and the output of each of these neural networks is usable by all of the fine-grained modules.
In the above embodiment of the present invention, optionally, when the target application scene requires recognizing the micro-expression of a specified object, the multiple AI functions to be called include an identity recognition function and a micro-expression recognition function, where the identity recognition function is executed by a face detection fine-grained module and a face recognition fine-grained module, and the micro-expression recognition function is executed by the face detection fine-grained module and a micro-expression recognition fine-grained module.
In the above embodiment of the present invention, optionally, the face detection fine-grained module includes a first neural network, a second neural network, a third neural network, and a fourth neural network, where the input of the first neural network is the information to be detected, the output of the first neural network is a plurality of first face frames and corresponding first confidences, and a plurality of second face frames and corresponding second confidences are screened out from the plurality of first face frames and the corresponding first confidences through a soft NMS algorithm; the input of the second neural network is the information to be detected, the plurality of second face frames and the corresponding second confidences, the output of the second neural network is a plurality of third face frames and corresponding third confidences, and a plurality of fourth face frames and corresponding fourth confidences are screened out from the plurality of third face frames and the corresponding third confidences through a soft NMS algorithm; the input of the third neural network is the information to be detected, the fourth face frames and the corresponding fourth confidences, the output of the third neural network is face key points, fifth face frames and corresponding fifth confidences, and a plurality of sixth face frames and corresponding sixth confidences are screened out from the fifth face frames and the corresponding fifth confidences through a soft NMS algorithm; the input of the fourth neural network is the information to be detected and the face key points, and the output of the fourth neural network is the face key points after position correction.
The artificial intelligence detection apparatus 400 uses the scheme described in any one of the embodiments shown in fig. 1 to fig. 3, and therefore, all the technical effects described above are achieved, and are not described herein again.
Fig. 5 shows a block diagram of a terminal according to an embodiment of the invention.
As shown in fig. 5, a terminal 500 of one embodiment of the present invention includes at least one memory 502; and a processor 504 communicatively coupled to the at least one memory 502; wherein the memory 502 stores instructions executable by the processor 504, the instructions being configured to perform the scheme of any one of the embodiments of fig. 1 to 3 described above. Therefore, the terminal 500 has the same technical effects as any one of the embodiments of fig. 1 to 3, which are not repeated here.
The terminal of the embodiments of the present invention exists in various forms, including but not limited to:
(1) Mobile communication devices: these are characterized by mobile communication capability and primarily aim to provide voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID and UMPC devices, such as iPads.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., iPods), handheld game consoles, e-book readers, smart toys and portable car navigation devices.
(4) Servers: similar to a general-purpose computer architecture, but with higher requirements on processing capability, stability, reliability, security, scalability, manageability and the like, because highly reliable services must be provided.
(5) And other electronic devices with data interaction functions.
In addition, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions for performing the method flow described in any one of the above embodiments of fig. 1 to 3.
The technical scheme of the invention has been explained in detail above with reference to the accompanying drawings. Through the technical scheme of the invention, the technical problem of low efficiency of AI technology development and maintenance in the related art is solved: fine-grained module sharing is realized by reducing the basic granularity of AI detection to the level of AI functions, which cuts consumption cost, greatly reduces the workload of development and maintenance, and improves the efficiency of AI technology development and maintenance.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. An artificial intelligence detection method, comprising:
acquiring information to be detected;
determining a corresponding target application scene in a plurality of preset application scenes according to the information to be detected;
acquiring a first execution sequence of a plurality of AI functions to be called corresponding to the target application scene;
determining a plurality of fine-grained modules required for executing each AI function to be called and a second execution sequence of the fine-grained modules according to the attribute information of each AI function to be called;
sorting all fine-grained modules required by the AI functions to be called according to the association relation between the first execution sequence and the second execution sequence to obtain a third execution sequence;
and calling all fine-grained modules required by the AI functions to be called according to the third execution sequence, wherein a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly reused on every call after the first.
2. The artificial intelligence detection method of claim 1, further comprising, before the step of acquiring information to be detected:
setting, for each preset application scene and according to first setting information, the corresponding plurality of AI functions to be called and the first execution sequence for executing them.
3. The artificial intelligence detection method of claim 1 or 2, further comprising, before the step of acquiring information to be detected:
setting, according to second setting information, the plurality of fine-grained modules required by each AI function to be called and the second execution sequence for calling the required fine-grained modules.
4. The artificial intelligence detection method of claim 3, further comprising:
storing the correspondence between each AI function to be called and the plurality of fine-grained modules it requires into the attribute information of that AI function to be called.
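A small sketch of the configuration step in claims 2 to 4, again with hypothetical names: the first and second setting information are modeled as plain dictionaries, and each AI function's module list is written into its attribute information.

```python
# Hypothetical configuration for claims 2-4: register the scene-to-function
# mapping (first setting information) and write each function's required
# module list (second setting information) into its attribute information.
def configure(first_setting_info: dict, second_setting_info: dict) -> dict:
    scene_functions = dict(first_setting_info)   # scene -> ordered AI functions
    attributes = {
        function: {"modules": list(modules)}     # claim 4's stored correspondence
        for function, modules in second_setting_info.items()
    }
    return {"scenes": scene_functions, "attributes": attributes}

config = configure(
    {"micro_expression_of_target": ["identity_recognition",
                                    "micro_expression_recognition"]},
    {"identity_recognition": ["face_detection", "face_recognition"],
     "micro_expression_recognition": ["face_detection", "micro_expression"]},
)
```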
5. The artificial intelligence detection method of claim 1,
wherein any one of the fine-grained modules comprises several neural networks, and the output of each of these neural networks is usable by all of the fine-grained modules.
6. The artificial intelligence detection method of claim 5,
when the target application scene is recognizing the micro-expression of a specified object, the plurality of AI functions to be called comprise an identity recognition function and a micro-expression recognition function, wherein
the identity recognition function is executed by a face detection fine-grained module and a face recognition fine-grained module, and
the micro-expression recognition function is executed by the face detection fine-grained module and a micro-expression recognition fine-grained module.
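One way to read the sharing described in claims 5 and 6 (module and function names here are hypothetical): caching the face detection module's output lets the identity recognition function and the micro-expression recognition function consume the same result without running the detection networks twice.

```python
from functools import lru_cache

@lru_cache(maxsize=None)                    # each distinct input is detected once
def face_detection(info: str) -> tuple:
    print(f"detecting faces in {info}")     # printed only on the first call
    return ("face_frame", "key_points")     # placeholder outputs

def identity_recognition(info: str) -> str:
    frame, _ = face_detection(info)
    return f"identity from {frame}"

def micro_expression_recognition(info: str) -> str:
    _, points = face_detection(info)
    return f"micro-expression from {points}"

identity_recognition("frame_0")             # runs the face detection module
micro_expression_recognition("frame_0")     # reuses the cached detection result
```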
7. The artificial intelligence detection method of claim 6, wherein the face detection fine-grained module comprises a first neural network, a second neural network, a third neural network and a fourth neural network, wherein
the input of the first neural network is the information to be detected, the output of the first neural network is a plurality of first face frames and corresponding first confidences, and a plurality of second face frames and corresponding second confidences are screened from the plurality of first face frames and the corresponding first confidences by a soft NMS algorithm;
the input of the second neural network is the information to be detected and the plurality of second face frames with the corresponding second confidences, the output of the second neural network is a plurality of third face frames and corresponding third confidences, and a plurality of fourth face frames and corresponding fourth confidences are screened from the plurality of third face frames and the corresponding third confidences by a soft NMS algorithm;
the input of the third neural network is the information to be detected and the plurality of fourth face frames with the corresponding fourth confidences, the output of the third neural network is face key points, a plurality of fifth face frames and corresponding fifth confidences, and a plurality of sixth face frames and corresponding sixth confidences are screened from the plurality of fifth face frames and the corresponding fifth confidences by a soft NMS algorithm; and
the input of the fourth neural network is the information to be detected and the face key points, and the output of the fourth neural network is the face key points after position correction.
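Each stage above screens face frames with "a soft NMS algorithm", which conventionally denotes Soft-NMS (Bodla et al., 2017): rather than deleting boxes that overlap a higher-confidence box, their confidences are decayed. A minimal Gaussian-decay sketch, with illustrative sigma and threshold values, follows.

```python
import numpy as np

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             sigma: float = 0.5, score_threshold: float = 0.3) -> list:
    """Gaussian Soft-NMS: return indices of boxes kept after score decay.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    """
    scores = scores.astype(float).copy()
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    remaining = np.arange(len(scores))
    keep = []
    while remaining.size > 0:
        best = remaining[np.argmax(scores[remaining])]
        keep.append(int(best))
        remaining = remaining[remaining != best]
        if remaining.size == 0:
            break
        # IoU of the selected box with every remaining box
        x1 = np.maximum(boxes[best, 0], boxes[remaining, 0])
        y1 = np.maximum(boxes[best, 1], boxes[remaining, 1])
        x2 = np.minimum(boxes[best, 2], boxes[remaining, 2])
        y2 = np.minimum(boxes[best, 3], boxes[remaining, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        iou = inter / (areas[best] + areas[remaining] - inter)
        # Gaussian decay: heavily overlapped boxes lose confidence smoothly
        scores[remaining] *= np.exp(-(iou ** 2) / sigma)
        remaining = remaining[scores[remaining] >= score_threshold]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # the heavily overlapped second box is decayed out
```

Boxes whose decayed confidence falls below the threshold drop out, which is one way the screening from each stage's face frames and confidences to the next could be realized.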
8. An artificial intelligence detection device, comprising:
the information acquisition unit to be detected is used for acquiring information to be detected;
the application scene determining unit is used for determining a corresponding target application scene in a plurality of preset application scenes according to the information to be detected;
a first execution sequence determining unit, configured to obtain a first execution sequence of a plurality of AI functions to be called corresponding to the target application scenario;
a fine-grained module and second execution order determination unit, configured to determine, according to attribute information of each to-be-called AI function, a plurality of fine-grained modules required to execute each to-be-called AI function and a second execution order of the plurality of fine-grained modules;
a third execution sequence determining unit, configured to sort all fine-grained modules required by the multiple AI functions to be called according to an association relationship between the first execution sequence and the second execution sequence, so as to obtain a third execution sequence;
and the fine-grained module calling unit is used for calling all fine-grained modules required by the AI functions to be called according to the third execution sequence, wherein a first execution result is generated when any fine-grained module is called for the first time, and the first execution result is directly called when each time the fine-grained module is called after the first time.
9. A terminal, comprising: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer-executable instructions for performing the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010057718.XA CN111291635A (en) | 2020-01-19 | 2020-01-19 | Artificial intelligence detection method and device, terminal and computer readable storage medium |
PCT/CN2020/087738 WO2021142975A1 (en) | 2020-01-19 | 2020-04-29 | Artificial intelligence detection method and apparatus, terminal and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010057718.XA CN111291635A (en) | 2020-01-19 | 2020-01-19 | Artificial intelligence detection method and device, terminal and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111291635A true CN111291635A (en) | 2020-06-16 |
Family
ID=71030689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010057718.XA Pending CN111291635A (en) | 2020-01-19 | 2020-01-19 | Artificial intelligence detection method and device, terminal and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111291635A (en) |
WO (1) | WO2021142975A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067109B (en) * | 2022-01-13 | 2022-04-22 | 安徽高哲信息技术有限公司 | Grain detection method, grain detection device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10769261B2 (en) * | 2018-05-09 | 2020-09-08 | Futurewei Technologies, Inc. | User image verification |
CN108898125A (en) * | 2018-07-10 | 2018-11-27 | 深圳市巨龙创视科技有限公司 | One kind being based on embedded human face identification and management system |
CN109635680B (en) * | 2018-11-26 | 2021-07-06 | 深圳云天励飞技术有限公司 | Multitask attribute identification method and device, electronic equipment and storage medium |
CN109977781A (en) * | 2019-02-26 | 2019-07-05 | 上海上湖信息技术有限公司 | Method for detecting human face and device, readable storage medium storing program for executing |
CN111079643B (en) * | 2019-12-13 | 2023-04-07 | 三一重工股份有限公司 | Face detection method and device based on neural network and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2021142975A1 (en) | 2021-07-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |