CN114399830A - Target class identification method and device and readable storage medium - Google Patents
- Publication number
- CN114399830A (application number CN202210299739.1A)
- Authority
- CN
- China
- Prior art keywords
- component
- correlation coefficient
- vector
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides a target class identification method, a target class identification device and a readable storage medium. The method comprises the following steps: extracting a correlation coefficient of the input image for each target class; multiplying each component in the target class correlation coefficient vector by a decimal corresponding to log₂e; subtracting the maximum component from each component in the obtained second correlation coefficient vector; adding a preset precision to each component in the obtained third correlation coefficient vector and then rounding; updating each component smaller than 0 in the obtained fourth correlation coefficient vector to 0; for each component in the fifth correlation coefficient vector, calculating the power with 2 as the base and the component as the exponent; and dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components. The obtained quotients are the probabilities that the input image contains targets of the respective classes. The embodiment of the invention reduces the computational complexity of target class identification while ensuring the target class identification accuracy.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for identifying a target category, and a readable storage medium.
Background
With the rapid development of the deep learning technology, the deep learning algorithm is widely applied in the field of artificial intelligence. Classification is one of the most common application scenarios, such as expression recognition, age recognition, gender recognition, scene recognition, sound event classification, and the like. Due to the very good classification effect and the relatively simple calculation process of softmax, softmax classifiers are often used in various classification tasks.
The calculation process of softmax requires multiple exponential operations with the natural constant e as the base, and such exponential operations are not easy to realize on hardware such as an NPU (Neural-network Processing Unit), which hinders the deployment of algorithm models containing a softmax classifier on such hardware.
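For reference, the standard softmax computation that motivates this work can be sketched as follows (a minimal Python illustration, not taken from the patent; the function name is ours):

```python
import math

def softmax(scores):
    """Standard softmax: each probability needs a base-e exponential."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]  # base-e exp: hard to do with shifts on an NPU
    total = sum(exps)
    return [e / total for e in exps]
```

Each call to `math.exp` is the base-e exponential that is costly on NPU-class hardware; the embodiment replaces it with base-2 powers of non-negative integer exponents, which reduce to shifts.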
Disclosure of Invention
The embodiment of the invention provides a target class identification method, a target class identification device, a readable storage medium and a computer program product, which are used for reducing the calculation complexity of target class identification on the premise of ensuring the target class identification precision.
The technical scheme of the embodiment of the invention is realized as follows:
a method of object class identification, the method comprising:
extracting a correlation coefficient of the input image aiming at each target category to obtain a target category correlation coefficient vector, wherein each component in the vector respectively represents the correlation coefficient of the input image and a category of targets, and the dimension of the vector is the same as the total category number of the target categories;
multiplying each component in the target class correlation coefficient vector by a decimal corresponding to log₂e to obtain a second correlation coefficient vector;
searching for the maximum component in the second correlation coefficient vector, and subtracting the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector;
adding a preset precision to each component in the third correlation coefficient vector and then rounding to obtain a fourth correlation coefficient vector;
updating each component smaller than 0 in the fourth correlation coefficient vector to 0 to obtain a fifth correlation coefficient vector;
for each component in the fifth correlation coefficient vector, calculating the power with 2 as the base and the component as the exponent; and
dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, the obtained quotients being the probabilities that the input image contains targets of the respective classes.
The extracting of the correlation coefficient of the input image for each target category comprises the following steps:
inputting the input image into a pre-trained target category correlation coefficient extraction neural network model, wherein the number of output channels of the model is the same as the total number of target categories, and the output of the model is the target category correlation coefficient vector.
The preset precision is greater than or equal to 2.
The adding of a preset precision to each component in the third correlation coefficient vector and then rounding comprises the following steps:
adding the preset precision and 0.5 to each component in the third correlation coefficient vector, and then rounding down.
The target class identification is expression class identification, age class identification, gender class identification, scene class identification, or sound event class identification.
An object class identification apparatus, the apparatus comprising:
the target category correlation coefficient extraction module is used for extracting a correlation coefficient of each target category from the input image to obtain a target category correlation coefficient vector, each component in the vector respectively represents the correlation coefficient of the input image and one category of targets, and the dimension of the vector is the same as the total category number of the target categories;
a correlation coefficient vector processing module, configured to multiply each component in the target class correlation coefficient vector by a decimal corresponding to log₂e to obtain a second correlation coefficient vector; search for the maximum component in the second correlation coefficient vector and subtract the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector; add a preset precision to each component in the third correlation coefficient vector and then round to obtain a fourth correlation coefficient vector; and update each component smaller than 0 in the fourth correlation coefficient vector to 0 to obtain a fifth correlation coefficient vector;
the category probability calculation module is used for calculating power which is obtained by taking 2 as a base number and taking the component as an exponent for each component in the fifth correlation coefficient vector; and dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, wherein the obtained quotient values are the probabilities that the input image contains various targets.
A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the target class identification method of any one of the above.
In the embodiment of the invention, each component in the target category correlation coefficient vector is multiplied by a decimal corresponding to log₂e, the preset precision is added and the result is rounded, and each component smaller than 0 is set to 0, so that when the base-2 exponential operations required for calculating the probabilities that the input image contains the various classes of targets are performed, all the exponents are non-negative integers and the exponential operations can be realized by simple shifts. The target class identification algorithm is therefore convenient to implement in fixed point on hardware such as an NPU, and the computational complexity of target class identification is reduced while the target class identification accuracy is ensured.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a target class identification method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an object class identification apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail with specific examples. Several of the following embodiments may be combined with each other and some details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flowchart of a target class identification method according to an embodiment of the present invention, which includes the following specific steps:
step 101: and extracting the correlation coefficient of the input image aiming at each target category to obtain a target category correlation coefficient vector, wherein each component in the vector respectively represents the correlation coefficient of the input image and a category of targets, and the dimension of the vector is the same as the total category number of the target categories.
The embodiment of the invention does not limit the way of extracting the correlation coefficient of each target category from the input image, for example, one way is to input the input image into a pre-trained target category correlation coefficient extraction neural network model, the number of output channels of the model is the same as the total number of categories of the target category, and the output of the model is the target category correlation coefficient vector.
The total number of classes of the object class is N (N is an integer and N > 1), the dimension of the object class correlation coefficient vector is N, and the component i (1 ≦ i ≦ N) in the vector represents the correlation coefficient of the input image and the object of the class i.
Step 102: multiplying each component in the target class correlation coefficient vector by a decimal corresponding to log₂e to obtain a second correlation coefficient vector.
The decimal corresponds to log₂e ≈ 1.442695; how many digits after the decimal point are kept can be determined according to the actually required precision.
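The role of this constant follows from the change-of-base identity, which turns the base-e exponentials of softmax into base-2 powers:

```latex
e^{x} = 2^{x \log_2 e}, \qquad \log_2 e = \frac{1}{\ln 2} \approx 1.442695
```

Multiplying every component by log₂e up front is therefore what allows all later exponentiation to use base 2.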
Step 103: searching for the maximum component in the second correlation coefficient vector, and subtracting the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector.
Step 104: adding a preset precision to each component in the third correlation coefficient vector and then rounding to obtain a fourth correlation coefficient vector.
Let the preset precision be P; then 2^P bounds the ratio of the maximum probability to the minimum probability among the various classes of targets in the input image, so the larger P is, the higher the probability accuracy, while the base-2 exponential operations in step 106 require more shift operations. The preset precision P usually satisfies P ≥ 2, and preferably 4 ≤ P ≤ 32.
In practical applications, adding the preset precision to each component in the third correlation coefficient vector and then rounding may include: adding the preset precision and 0.5 to each component in the third correlation coefficient vector, and then rounding down.
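Adding 0.5 and taking the floor is the usual way to obtain round-half-up from a floor primitive (a small sketch; the helper name is illustrative, not from the patent):

```python
import math

def round_half_up(x, precision):
    # Step 104: add the preset precision P and 0.5, then round down.
    # floor(x + 0.5) rounds to the nearest integer with halves going up,
    # unlike Python's round(), which rounds halves to the nearest even integer.
    return math.floor(x + precision + 0.5)
```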
Step 105: and updating each component smaller than 0 in the fourth correlation coefficient vector to be 0 to obtain a fifth correlation coefficient vector.
Step 106: for each component in the fifth correlation coefficient vector, calculating the power with 2 as the base and the component as the exponent.
Step 107: and dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, wherein the obtained quotient values are the probabilities that the input image contains various targets.
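Steps 101 to 107 can be sketched end to end in Python (a hedged illustration: the function and variable names are ours, and the preset precision P and the number of digits kept for log₂e are free choices, not values fixed by the patent):

```python
import math

def fixed_point_softmax(scores, precision=8):
    """Approximate softmax with base-2 shifts (steps 102-107).

    `scores` are the target-class correlation coefficients;
    `precision` is the preset precision P from step 104.
    """
    log2e = 1.442695                 # step 102: decimal corresponding to log2(e)
    v2 = [s * log2e for s in scores]
    m = max(v2)                      # step 103: subtract the maximum component
    v3 = [x - m for x in v2]         # all components are now <= 0
    # step 104: add P and 0.5, then floor (round-half-up)
    v4 = [math.floor(x + precision + 0.5) for x in v3]
    v5 = [max(x, 0) for x in v4]     # step 105: clip negatives; exponents in [0, P]
    powers = [1 << x for x in v5]    # step 106: base-2 power is a left shift
    total = sum(powers)              # step 107: normalize by the sum of powers
    return [p / total for p in powers]
```

With `scores = [2.0, 1.0, 0.1]` and `precision=8`, the clipped exponents are `[8, 7, 5]` and the powers `[256, 128, 32]`, so the probabilities `[256/416, 128/416, 32/416]` approximate the true softmax values; the division in step 107 is the only non-shift operation left.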
In the above embodiment, each component of the target class correlation coefficient vector is multiplied by a decimal corresponding to log₂e, the maximum component is subtracted, the preset precision is added and the result is rounded, and components smaller than 0 are set to 0, so that all the exponents of the base-2 exponential operations are non-negative integers and can be computed with simple shifts. The target class identification algorithm is thus convenient to implement in fixed point on hardware such as an NPU, and the computational complexity of target class identification is reduced while the target class identification accuracy is ensured.
The target category identification of the embodiment of the invention can be expression category identification, age category identification, gender category identification, scene category identification or sound event category identification.
Fig. 2 is a schematic structural diagram of an object class identification apparatus according to an embodiment of the present invention, where the apparatus mainly includes:
the target category correlation coefficient extraction module 21 is configured to extract a correlation coefficient for each target category from the input image to obtain a target category correlation coefficient vector, where each component in the vector represents a correlation coefficient between the input image and a category of targets, and the dimension of the vector is the same as the total category number of the target categories.
A correlation coefficient vector processing module 22, configured to multiply each component of the target class correlation coefficient vector extracted by the target class correlation coefficient extraction module 21 by a decimal corresponding to log₂e to obtain a second correlation coefficient vector; search for the maximum component in the second correlation coefficient vector and subtract the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector; add a preset precision to each component in the third correlation coefficient vector and then round to obtain a fourth correlation coefficient vector; and update each component smaller than 0 in the fourth correlation coefficient vector to 0 to obtain a fifth correlation coefficient vector.
A category probability calculation module 23, configured to calculate, for each component in the fifth correlation coefficient vector obtained by the correlation coefficient vector processing module 22, a power obtained by taking 2 as a base number and taking the component as an exponent; and dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, wherein the obtained quotient values are the probabilities that the input image contains various targets.
In an alternative embodiment, the target class correlation coefficient extraction module 21 extracts the correlation coefficient for each target class from the input image by: inputting the input image into a pre-trained target category correlation coefficient extraction neural network model, wherein the number of output channels of the model is the same as the total number of target categories, and the output of the model is the target category correlation coefficient vector.
In an optional embodiment, the correlation coefficient vector processing module 22 adds the preset precision to each component in the third correlation coefficient vector and then rounds by: adding the preset precision and 0.5 to each component in the third correlation coefficient vector, and then rounding down.
The embodiments of the present application further provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the steps of the target class identification method described in any of the above embodiments.
Embodiments of the present application also provide a computer-readable storage medium storing instructions which, when executed by a processor, perform the steps of the target class identification method described above. In practical applications, the computer-readable medium may be included in each device/apparatus/system of the above embodiments, or may exist separately without being assembled into the device/apparatus/system.
According to embodiments disclosed herein, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example and without limitation: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing, without limiting the scope of the present disclosure. In the embodiments disclosed herein, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
As shown in fig. 3, an embodiment of the present invention further provides an electronic device. As shown in fig. 3, it shows a schematic structural diagram of an electronic device according to an embodiment of the present invention, specifically:
the electronic device may include a processor 31 of one or more processing cores, memory 32 of one or more computer-readable storage media, and a computer program stored on the memory and executable on the processor. The above-described object class identification method can be implemented when the program of the memory 32 is executed.
Specifically, in practical applications, the electronic device may further include a power supply 33, an input/output unit 34, and the like. Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 3 is not intended to be limiting of the electronic device and may include more or fewer components than shown, or some components in combination, or a different arrangement of components. Wherein:
the processor 31 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 32 and calling data stored in the memory 32, thereby performing overall monitoring of the electronic device.
The memory 32 may be used to store software programs and modules, i.e., the computer-readable storage media described above. The processor 31 executes various functional applications and data processing by running the software programs and modules stored in the memory 32. The memory 32 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function, and the like; the data storage area may store data created according to the use of the server, and the like. Further, the memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 32 may also include a memory controller to provide the processor 31 with access to the memory 32.
The electronic device further comprises a power supply 33 for supplying power to each component, and the power supply 33 can be logically connected with the processor 31 through a power management system, so that functions of charging, discharging, power consumption management and the like can be managed through the power management system. The power supply 33 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may also include an input-output unit 34, which is operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. The input-output unit 34 may also be used to display information input by or provided to the user, as well as various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof.
The flowchart and block diagrams in the figures of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments disclosed herein. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not explicitly recited in the present application. In particular, the features recited in the various embodiments and/or claims of the present application may be combined and/or coupled in various ways, all of which fall within the scope of the present disclosure, without departing from the spirit and teachings of the present application.
The principles and embodiments of the present invention are explained herein using specific examples, which are provided only to help understand the method and its core idea, and are not intended to limit the present application. It will be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles, spirit and scope of the invention, and all such modifications, equivalents and improvements are intended to fall within the protection scope of the claims.
Claims (7)
1. An object class identification method, characterized in that the method comprises:
extracting a correlation coefficient of the input image aiming at each target category to obtain a target category correlation coefficient vector, wherein each component in the vector respectively represents the correlation coefficient of the input image and a category of targets, and the dimension of the vector is the same as the total category number of the target categories;
multiplying each component in the target class correlation coefficient vector by a decimal corresponding to log₂e to obtain a second correlation coefficient vector;
searching for the maximum component in the second correlation coefficient vector, and subtracting the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector;
adding a preset precision to each component in the third correlation coefficient vector and then rounding to obtain a fourth correlation coefficient vector;
updating each component smaller than 0 in the fourth correlation coefficient vector to 0 to obtain a fifth correlation coefficient vector;
for each component in the fifth correlation coefficient vector, calculating the power with 2 as the base and the component as the exponent; and
dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, the obtained quotients being the probabilities that the input image contains targets of the respective classes.
2. The method of claim 1, wherein the extracting the correlation coefficient for each target class from the input image comprises:
inputting the input image into a pre-trained target category correlation coefficient extraction neural network model, wherein the number of output channels of the model is the same as the total number of target categories, and the output of the model is the target category correlation coefficient vector.
3. The method according to claim 1, characterized in that said preset precision is greater than or equal to 2.
4. The method of claim 1, wherein the adding a preset precision to each component in the third correlation coefficient vector and then rounding comprises:
adding the preset precision and 0.5 to each component in the third correlation coefficient vector, and then rounding down.
5. The method of claim 1, wherein the target class identification is expression class identification, age class identification, gender class identification, scene class identification, or sound event class identification.
6. An object class identification device, characterized in that the device comprises:
the target category correlation coefficient extraction module is used for extracting a correlation coefficient of each target category from the input image to obtain a target category correlation coefficient vector, each component in the vector respectively represents the correlation coefficient of the input image and one category of targets, and the dimension of the vector is the same as the total category number of the target categories;
a correlation coefficient vector processing module, configured to multiply each component in the target class correlation coefficient vector by a decimal corresponding to log₂e to obtain a second correlation coefficient vector; search for the maximum component in the second correlation coefficient vector and subtract the maximum component from each component in the second correlation coefficient vector to obtain a third correlation coefficient vector; add a preset precision to each component in the third correlation coefficient vector and then round to obtain a fourth correlation coefficient vector; and update each component smaller than 0 in the fourth correlation coefficient vector to 0 to obtain a fifth correlation coefficient vector;
the category probability calculation module is used for calculating power which is obtained by taking 2 as a base number and taking the component as an exponent for each component in the fifth correlation coefficient vector; and dividing the power corresponding to each component in the fifth correlation coefficient vector by the sum of the powers corresponding to all the components in the fifth correlation coefficient vector, wherein the obtained quotient values are the probabilities that the input image contains various targets.
7. A non-transitory computer readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the steps of the target class identification method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210299739.1A CN114399830B (en) | 2022-03-25 | 2022-03-25 | Target class identification method and device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114399830A true CN114399830A (en) | 2022-04-26 |
CN114399830B CN114399830B (en) | 2022-06-24 |
Family
ID=81235023
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210299739.1A Active CN114399830B (en) | 2022-03-25 | 2022-03-25 | Target class identification method and device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114399830B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101589610A (en) * | 2007-01-25 | 2009-11-25 | 高通Mems科技公司 | Arbitrary power function using logarithm lookup table |
WO2019009420A1 (en) * | 2017-07-07 | 2019-01-10 | 国立大学法人大阪大学 | Pain determination using trend analysis, medical device incorporating machine learning, economic discriminant model, and iot, tailormade machine learning, and novel brainwave feature quantity for pain determination |
US20190171419A1 (en) * | 2017-12-06 | 2019-06-06 | Fujitsu Limited | Arithmetic processing device and control method of arithmetic processing device |
CN113407747A (en) * | 2020-03-17 | 2021-09-17 | 三星电子株式会社 | Hardware accelerator execution method, hardware accelerator and neural network device |
- 2022-03-25: CN CN202210299739.1A patent/CN114399830B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114399830B (en) | 2022-06-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |