WO2023024412A1 - Visual question answering method and apparatus, medium, and device based on a deep learning model - Google Patents
Visual question answering method and apparatus, medium, and device based on a deep learning model
- Publication number
- WO2023024412A1 (PCT/CN2022/071428)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- answer
- model
- vector
- image feature
- target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3346—Query execution using probabilistic model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- This application relates to the field of artificial intelligence, and in particular to a visual question answering method and apparatus, medium, and device based on a deep learning model.
- Visual Question Answering (VQA) is a popular field that requires understanding both text and vision; models built by computer algorithms need a certain reasoning ability, which places higher demands on them than traditional computer vision tasks.
- As a system capable of answering natural language questions about images, visual question answering has long been pursued by cutting-edge research and industrial engineering.
- The inventors realized that current mainstream visual question answering models are mainly discriminative models, i.e., classification models, which predict the answer category with a classifier common in deep learning. This approach has the following defects.
- The categories are predefined, and the answer the model gives cannot go beyond the given categories, which hurts the accuracy of the final result; this type of error is caused by the design of the model: the model may have learned the required information, but the final category set restricts it.
- The number of predefined categories is huge, at least larger than the common 3K categories, and some large Internet companies even set hundreds of thousands of categories.
- Categories at such a scale easily lead to common categories being learned thoroughly while uncommon categories are seldom learned by the model, resulting in inaccurate predictions for them and seriously harming both the model's training efficiency and the ease of deploying the service later.
- In view of the above problems, this application proposes a visual question answering method and apparatus, medium, and device based on a deep learning model.
- The visual question answering method provided by this application can not only predict from common categories but also generate the required answers by itself, and the model can decide, according to the scores, whether the final answer is matched from common categories or automatically generated.
- This automated, generative visual question answering improves the accuracy of visual question answering results.
- According to a first aspect, a visual question answering method based on a deep learning model is provided, including the steps below.
- The visual question answering model includes an encoder sub-model and a decoder sub-model.
- The prediction probabilities corresponding to the first answer and the second answer are calculated respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
- According to a second aspect, a visual question answering device based on a deep learning model is provided, including:
- a visual question answering model building module, used to build a visual question answering model using the pre-trained language model T5 framework, where the visual question answering model includes an encoder sub-model and a decoder sub-model;
- a first answer matching module, used to acquire image data and question data, input the image data and the question data into the visual question answering model, and use the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data;
- a second answer generation module, used to use the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data;
- a target answer output module, configured to calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data.
- According to a third aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the visual question answering method based on a deep learning model is implemented.
- The visual question answering model includes an encoder sub-model and a decoder sub-model.
- The prediction probabilities corresponding to the first answer and the second answer are calculated respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
- According to a fourth aspect, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the visual question answering method based on the deep learning model is implemented.
- The visual question answering model includes an encoder sub-model and a decoder sub-model.
- The prediction probabilities corresponding to the first answer and the second answer are calculated respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
- The visual question answering method provided by this application can either predict by classification or generate answers automatically.
- The two prediction methods are evaluated according to their prediction probabilities, and the final answer can be output according to actual needs, realizing automated and flexible generative visual question answering and greatly improving the accuracy of visual question answering results.
- Fig. 1 shows a schematic flow diagram of the visual question answering method based on a deep learning model provided by an embodiment of the present application;
- Fig. 2 shows a schematic diagram of the prediction process of the encoder sub-model provided by an embodiment of the present application;
- Fig. 3 shows a brief schematic diagram of vector concatenation provided by an embodiment of the present application;
- Fig. 4 shows a schematic structural diagram of the visual question answering device based on a deep learning model provided by an embodiment of the present application;
- Fig. 5 shows a schematic diagram of the physical structure of a computer device provided by an embodiment of the present application.
- Artificial intelligence (AI) uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
- Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics.
- Artificial intelligence software technology mainly covers computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
- An embodiment of the present application provides a visual question answering method based on a deep learning model. As shown in Fig. 1, the method may include at least the following steps S101 to S104.
- Step S101: Build a visual question answering model using the pre-trained language model T5 framework.
- The visual question answering model includes an encoder sub-model and a decoder sub-model, which are used to generate target answers from the input image data and question data.
- The pre-training framework selected in this embodiment is the T5 (Text-to-Text Transfer Transformer) model proposed by Google on the basis of deep learning network technology, built on the basic Transformer (deep self-attention network) architecture.
- It is a sequence-to-sequence model comprising two modules, the encoder sub-model and the decoder sub-model, both constructed with the multi-layer Transformer (multi-layer deep self-attention network) included in the T5 model.
- The T5 model provides a general framework for the entire Natural Language Processing (NLP) pre-training field: all NLP tasks are converted into a text-to-text form, so the same model, the same loss function, the same training procedure, and the same decoding procedure can complete every NLP task.
- In the visual question answering model built on the T5 framework, the encoder sub-model and the decoder sub-model respectively generate answers from the preset classification categories and generate answers automatically from a vocabulary of common words. Two different visual question answering results are produced by the two methods, and either can be selected as the final output answer according to actual needs.
- Step S102: Acquire image data and question data, input the image data and question data into the visual question answering model, and use the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data.
- Visual question answering is natural language question answering over visual images: it connects images and language through visual understanding and answers specific questions on the basis of understanding the image.
- The image data and question data acquired in this application are the picture data and corresponding question data of the visual question answering task to be processed.
- The encoder sub-model built in this application is based on the multi-layer Transformer (multi-layer deep self-attention network) architecture and can match, within the preset classification categories, the classified first answer corresponding to the question data; the first answer is an answer contained in the preset classification categories.
- Using the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data may include the following steps S102-1 to S102-4.
- Step S102-1: Input the image data into the deep-learning-based object detection model Faster R-CNN, extract the image features and image feature categories of the image, and convert the image features and image feature categories into image feature vectors with a first vector dimension and image feature category vectors with a second vector dimension.
- The Faster R-CNN model is an object detection model based on deep information.
- For each input picture, the open-source Faster R-CNN model can extract the corresponding image features and image feature categories. For example, if an image contains a cat and a dog, two different image features (one for the cat region and one for the dog region) and two different image feature categories, "cat" and "dog", can be extracted from it.
- The Faster R-CNN model also outputs the image feature vector corresponding to each image feature, with a dimension of 2048, and the image feature category vector corresponding to each image feature category, with a dimension of 300.
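- As an illustration only, the detection interface of an open-source Faster R-CNN looks roughly like the sketch below; it uses torchvision's model, not necessarily the detector the patent used, and the pooled 2048-dimensional region features would come from the detector's box head, which is not shown here.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Illustrative only: torchvision's open-source Faster R-CNN.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)           # stand-in for a real image tensor
with torch.no_grad():
    (det,) = model([image])               # one dict per input image
print(det["boxes"].shape, det["labels"])  # detected regions and their class ids
```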
- Step S102-2: Perform word segmentation on the question data to obtain text elements, and convert the text elements into text element vectors with the second vector dimension using a preset word vector model.
- Optionally, word segmentation can first determine whether the question data is in English or Chinese. If the question data is in English, spaces are used to segment the text, yielding English-type text elements; if the question data is in Chinese, a word segmentation model is used, a first tag is added at the beginning of the sentence to indicate the start and a second tag is added at the end to indicate the end, yielding Chinese-type text elements.
- In this embodiment, Chinese text is segmented with the jieba word segmentation model.
- The jieba model can cut a sentence most precisely, scanning out every word that can be formed from the sentence, which suits word segmentation for text analysis.
- The first tag added at the beginning of the sentence can be the generic [CLS] tag, and the second tag added at the end can be the generic [SEP] tag, which also splits the Chinese text into sentences.
- After this text processing, each segmented word and each added tag can serve as a text element.
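- A minimal sketch of this Chinese branch follows: jieba segmentation plus the generic start/end tags the text suggests (the example question is hypothetical).

```python
import jieba

question = "图中是什么动物"                      # hypothetical question: "What animal is in the picture?"
tokens = ["[CLS]"] + jieba.lcut(question) + ["[SEP]"]
print(tokens)  # e.g. ['[CLS]', '图中', '是', '什么', '动物', '[SEP]']
```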
- GloVe word vectors are a method that uses a language model, such as a neural network language model, to capture the grammatical and semantic information of words and to represent word text as word vectors.
- In this application, each text element obtained from word segmentation can be converted into a 300-dimensional text element vector through a pre-trained GloVe word vector model.
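- As an illustration, the snippet below converts tokens to 300-dimensional GloVe vectors; loading the pre-trained embeddings through torchtext is one possible implementation, not necessarily the one used here.

```python
from torchtext.vocab import GloVe

glove = GloVe(name="6B", dim=300)          # downloads pre-trained GloVe vectors
tokens = ["what", "animal", "is", "this"]  # hypothetical English text elements
vecs = glove.get_vecs_by_tokens(tokens)    # tensor of shape (4, 300)
```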
- Step S102-3: Input the image feature vectors, text element vectors, and image feature category vectors into the encoder sub-model, and use the encoder sub-model to concatenate them into a vector matrix.
- Optionally, the encoder sub-model concatenates the image feature vectors, text element vectors, and image feature category vectors by vector dimension as follows: through fully connected layers in the encoder sub-model, the image feature vectors with the first vector dimension and the text element vectors and image feature category vectors with the second vector dimension are converted into image feature vectors, text element vectors, and image feature category vectors that share the same third vector dimension;
- the image feature category vectors, image feature vectors, and text element vectors with the third vector dimension are then concatenated in a preset order,
- in which the image feature category vectors and the image feature vectors with the third vector dimension correspond to each other according to the concatenation order.
- The encoder sub-model in this embodiment adopts the multi-layer Transformer (multi-layer deep self-attention network) architecture and converts the image feature vectors, text element vectors, and image feature category vectors to the same dimension through fully connected layers. For example, the text element vectors and image feature category vectors are 300-dimensional and the image feature vectors are 2048-dimensional;
- fully connected layers convert all of them to the same 1024 dimensions for the subsequent concatenation.
- The feature category labels are appended after the previously concatenated text vector.
- As shown in Fig. 2, the vectors can be concatenated in the order of text element vectors, picture feature category vectors, and picture feature vectors.
- For example, positions 1-4 are the vectors of the question text,
- positions 5-11 are the picture feature category vectors,
- and positions 12-18 are the picture feature vectors.
- The picture feature category vectors correspond one-to-one to the picture feature vectors: the 5th picture feature category corresponds to the 12th picture feature vector, the 6th picture feature category corresponds to the 13th picture feature vector, and so on.
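- A short sketch of this projection-and-concatenation step, using the dimensions named in the text (300, 2048, and a shared 1024) with random tensors standing in for real features:

```python
import torch
import torch.nn as nn

# Project 300-d text/category vectors and 2048-d region features to a shared
# 1024-d space, then concatenate text -> categories -> regions, with the
# category vectors aligned one-to-one with the region vectors.
proj_txt = nn.Linear(300, 1024)
proj_cat = nn.Linear(300, 1024)
proj_img = nn.Linear(2048, 1024)

text = torch.randn(1, 4, 300)    # 4 question tokens   (positions 1-4)
cats = torch.randn(1, 7, 300)    # 7 category vectors  (positions 5-11)
imgs = torch.randn(1, 7, 2048)   # 7 region features   (positions 12-18)

matrix = torch.cat([proj_txt(text), proj_cat(cats), proj_img(imgs)], dim=1)
print(matrix.shape)              # torch.Size([1, 18, 1024]) -> encoder input
```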
- Step S102-4: Select the last target vector in the last layer of the vector matrix, convert the target vector to the target dimension through a fully connected layer of the encoder sub-model, obtain the prediction probability of each preset classification category through a softmax operation, and select the category with the highest prediction probability as the classified first answer; the target dimension is the preset number of classification categories, a natural number greater than 1.
- The number of classification categories in this embodiment can be customized according to the actual situation; for example, it may be set to 3000. The last target vector of the last layer, which carries the most feature weight in the vector matrix, is selected and transformed into a 3000-dimensional target vector; the softmax operation mathematically normalizes the scores of the preset classification categories, mapping them to real numbers between 0 and 1 that sum to 1, and predicts probabilities over the 3000 categories, whose sum is exactly 1. The category with the highest prediction probability, which corresponds to the smallest cross-entropy loss, is the first answer output by the encoder. Predicting the visual answer with the encoder sub-model and matching the answer with the largest possible prediction probability as the first answer improves the accuracy of the visual question answering results.
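- A minimal sketch of this classification branch, under the assumed 1024-dimensional encoder output and 3000 categories:

```python
import torch
import torch.nn as nn

encoder_out = torch.randn(1, 18, 1024)      # encoder output (vector matrix)
target = encoder_out[:, -1, :]              # last target vector of last layer
head = nn.Linear(1024, 3000)                # target dimension = #categories
probs = torch.softmax(head(target), dim=-1) # probabilities summing to 1
first_answer_id = probs.argmax(dim=-1)      # highest-probability category
```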
- Step S103: Use the decoder sub-model in the visual question answering model, combined with the vocabulary of common words, to obtain the generated second answer corresponding to the question data.
- Like the encoder sub-model, the decoder sub-model built in this application is based on the multi-layer Transformer (multi-layer deep self-attention network) architecture; it automatically generates the second answer from the input image data and question data using the common-word vocabulary, and the second answer can be any of the many categories constructed by combining entries of that vocabulary.
- The decoder sub-model is a model pre-trained with the stochastic gradient descent algorithm on the deep-learning neural network pytorch framework.
- Specifically, the decoder sub-model can be trained with the stochastic gradient descent algorithm through the pytorch framework.
- The pytorch framework can be understood as a deep learning programming environment, and stochastic gradient descent (SGD) is a very common optimization algorithm in neural network model training. Derived from the gradient descent algorithm, it serves as a parameter update strategy that updates the decoder sub-model's parameters better and faster, producing a model that meets the required performance.
- Training the decoder sub-model with the stochastic gradient descent algorithm may specifically include: calculating the cross-entropy loss of the visual question answering model and minimizing it with stochastic gradient descent, where the model's cross-entropy loss is the sum of the encoder sub-model's cross-entropy function and the decoder sub-model's cross-entropy loss: $L = L_1 + L_2$.
- Here $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy function of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model;
- $K$ is the number of samples, $M_i$ is the prediction probability vector of the $i$-th sample, $Y_i$ is the one-hot code corresponding to the $i$-th sample, and $l$ denotes the $l$-th dimension of a vector;
- $N$ is the number of characters of the output answer, $M_{ij}$ is the prediction probability vector of the $j$-th character of the $i$-th sample's output answer, and $Y_{ij}$ is the one-hot code corresponding to that character.
- Consistent with these definitions (the text itself states only the sum), the two terms take the standard cross-entropy form $L_1 = -\frac{1}{K}\sum_{i=1}^{K}\sum_{l} Y_{i,l}\log M_{i,l}$ and $L_2 = -\frac{1}{K}\sum_{i=1}^{K}\sum_{j=1}^{N}\sum_{l} Y_{ij,l}\log M_{ij,l}$.
- By fusing the encoder's classification loss and the decoder's generation loss into this single loss function, the decoder sub-model with optimized parameters is obtained when the loss is minimized.
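- A sketch of this joint objective minimized with SGD in PyTorch; `model` refers to the VQAT5 sketch above, while `loader`, its batch fields, and the learning rate are hypothetical placeholders.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # hypothetical lr
ce = torch.nn.CrossEntropyLoss()

for batch in loader:                       # hypothetical DataLoader
    logits, gen_loss = model(batch["inputs_embeds"], batch["mask"],
                             labels=batch["answer_ids"])
    loss = ce(logits, batch["category"]) + gen_loss   # L = L1 + L2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```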
- Further, the target vector from the encoder sub-model is input into the trained decoder sub-model, which, combined with the common-word vocabulary, generates the output answer corresponding to the target vector as the second answer. Taking an answer of three characters as an example (see Fig. 3), prediction proceeds in rounds, as sketched below:
- in the first round, the target vector h is input, and the model predicts the first character y1 output at the position corresponding to h;
- in the second round, the model inputs [h, y1] and predicts the second character y2 output at the position corresponding to y1;
- in the third round, the model inputs [h, y1, y2] and predicts the third character y3 output at the position corresponding to y2;
- in the fourth round, the model inputs [h, y1, y2, y3] and predicts the "end" character output at the position corresponding to y3;
- once the model outputs the "end" character, prediction terminates and [y1, y2, y3] is the output result, serving as the second answer.
- For example, if the final answer is "加菲猫" (Garfield),
- the first character output at the position of h in the first round is "加";
- the model input in the second round is [h, 加],
- and the second character output at the position of y1 is "菲";
- in the third round the model inputs [h, 加, 菲], and the third character output at the position of y2 is "猫";
- in the fourth round the model inputs [h, 加, 菲, 猫], and the output at the position of y3 is the "end" character, so [加菲猫] is the final output second answer.
- The target vector h is the last vector of the last layer, which carries the most feature weight in the vector matrix.
- The decoder sub-model uses common characters as its vocabulary, roughly 8K of them; through multiple rounds of prediction it can construct an unbounded number of categories and generate answers automatically, not limited to the preset classification categories.
- The visual question answering model proposed in this embodiment can therefore generate the required categories automatically, overcoming the limitation of matching within a finite set of classification categories and further improving the accuracy of visual question answering results.
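- A minimal sketch of this round-by-round greedy decoding loop; `step_fn` is a hypothetical stand-in for the decoder sub-model's per-position prediction.

```python
def greedy_decode(step_fn, h, end_id, max_len=20):
    """Round-by-round decoding as in Fig. 3. step_fn(h, ys) is assumed to
    return next-position logits over the ~8K-character vocabulary, given the
    target vector h and the characters predicted so far."""
    ys = []
    for _ in range(max_len):
        next_id = step_fn(h, ys).argmax(-1).item()
        if next_id == end_id:      # the "end" character terminates prediction
            break
        ys.append(next_id)
    return ys                      # e.g. the ids of ["加", "菲", "猫"]
```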
- Step S104: Calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
- Specifically, the softmax function may be used to calculate the first prediction probability corresponding to the first answer and the second prediction probability corresponding to the second answer.
- The softmax algorithm is generally used in multi-class scenarios: it maps the neurons' outputs to real numbers in (0, 1) and normalizes them so that they sum to 1, so the sum of the multi-class prediction probabilities is exactly 1.
- The output after softmax is thus the prediction probability of each category, and the probabilities sum to 1.
- Softmax computes, for each element, the ratio of its exponential to the sum of the exponentials of all elements.
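- A two-line numeric check of this description (values rounded):

```python
import torch

# p_i = exp(x_i) / sum_j exp(x_j): outputs lie in (0, 1) and sum to 1
x = torch.tensor([2.0, 1.0, 0.1])
p = torch.softmax(x, dim=0)
print(p)          # tensor([0.6590, 0.2424, 0.0986])
print(p.sum())    # tensor(1.)
```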
- Optionally: compare the first prediction probability with the second; if the first is larger, output the first answer as the target answer; if the second is larger, output the second answer; if they are equal, output both answers simultaneously as the target answer. Alternatively, calculate the difference between the two prediction probabilities: if the difference is greater than or equal to a preset value, select the predicted answer with the larger probability as the target answer and output it; if the difference is smaller than the preset value, take the first answer and the second answer as the target answer and output both together with their corresponding prediction probabilities, or directly output the first answer or the second answer as the target answer.
- The final predicted answers generated in this embodiment include the classified first answer matched by the classification-category model and the generated second answer output automatically by the generative model.
- The larger the prediction probability, the closer the predicted answer is to the true value, so the predicted answer with the higher prediction probability can be taken as the final target answer and displayed.
- When the two prediction probabilities are equal, the first and second answers can be output together for reference. A preset difference can also be set: if the gap between the two prediction probabilities is at least the preset difference, the answer with the higher probability is more convincingly close to the true value; if the gap is smaller than the preset difference, the two answers' probabilities differ little, and either or both can be output as the final target answer for reference.
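- The selection rule condenses to a few lines; the gap value below is a hypothetical placeholder for the preset difference, which the text leaves open.

```python
def pick_answer(a1, p1, a2, p2, preset_gap=0.1):
    """Sketch of step S104's selection; preset_gap stands in for the
    preset difference, whose concrete value the patent does not fix."""
    if abs(p1 - p2) >= preset_gap:      # one branch is clearly stronger
        return a1 if p1 > p2 else a2
    return (a1, a2)                     # close call: output both for reference
```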
- The visual question answering method based on a deep learning model provided by this embodiment uses the T5 framework to build a visual question answering model comprising an encoder sub-model and a decoder sub-model; it receives input image data and question data, uses the encoder sub-model to match them within the preset classification categories to obtain the classified first answer, uses the decoder sub-model with the common-word vocabulary to output the generated second answer automatically, and selects the first answer and/or the second answer as the target answer of the visual question answering model.
- The method provided by this application can both predict by classification category and generate answers automatically.
- The two prediction methods can also be evaluated by prediction probability, so the final answer can be output according to actual needs, realizing automated and flexible generative visual question answering, breaking through the limitation that a traditional model's answers cannot exceed the preset classification categories, and further improving the accuracy of the final result.
- As a concrete implementation of the method in Fig. 1, an embodiment of the present application provides a visual question answering device based on a deep learning model.
- As shown in Fig. 4, the device may include a visual question answering model building module 410, a first answer matching module 420, a second answer generation module 430, and a target answer output module 440.
- The visual question answering model building module 410 can be used to build a visual question answering model using the pre-trained language model T5 framework, where the visual question answering model includes an encoder sub-model and a decoder sub-model.
- The first answer matching module 420 can be used to acquire image data and question data, input them into the visual question answering model, and use the encoder sub-model in the model to match within the preset classification categories to obtain the classified first answer corresponding to the question data.
- The second answer generation module 430 can be used to use the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data.
- The target answer output module 440 can be used to calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
- Optionally, the first answer matching module 420 can also be used to input the image data into the deep-learning-based object detection model Faster R-CNN, extract the image features and image feature categories of the image, and convert them into image feature vectors with the first vector dimension and image feature category vectors with the second vector dimension; perform word segmentation on the question data to obtain text elements and convert them into text element vectors with the second vector dimension using the preset word vector model; input the vectors into the encoder sub-model and concatenate them into a vector matrix; and select the last target vector of the last layer of the vector matrix, convert it to the target dimension through the fully connected layer of the encoder sub-model, obtain each category's prediction probability through the softmax operation, and select the category with the highest probability as the classified first answer,
- where the target dimension is the preset number of classification categories, a natural number greater than 1.
- Optionally, the second answer generation module 430 can also be used to input the target vector from the encoder sub-model into the trained decoder sub-model and, combined with the common-word vocabulary, generate the output answer corresponding to the target vector as the second answer;
- the decoder sub-model is a model pre-trained with the stochastic gradient descent algorithm on the deep-learning neural network pytorch framework.
- Optionally, the target answer output module 440 can also be used to calculate, with the softmax function, the first prediction probability corresponding to the first answer and the second prediction probability corresponding to the second answer;
- and to calculate the difference between the first and second prediction probabilities: if the difference is greater than or equal to the preset value, compare the two probabilities and output the predicted answer with the larger probability as the target answer; if the difference is smaller than the preset value, take the first and second answers as the target answer and output both answers with their corresponding prediction probabilities simultaneously; or output the first answer as the target answer; or output the second answer as the target answer.
- Optionally, the first answer matching module 420 can also be used to convert, through fully connected layers in the encoder sub-model, the image feature vectors with the first vector dimension and the text element vectors and image feature category vectors with the second vector dimension
- into image feature vectors, text element vectors, and image feature category vectors having the same third vector dimension;
- and to concatenate the image feature category vectors, image feature vectors, and text element vectors with the third vector dimension in the preset concatenation order, in which the image feature category vectors and image feature vectors with the third vector dimension correspond to each other according to the concatenation order.
- Optionally, the first answer matching module 420 can also be used to determine whether the question data is in English or Chinese;
- if the question data is in English, spaces are used for word segmentation, yielding English-type text elements;
- if the question data is in Chinese, a word segmentation model is used for word segmentation, a first tag is added at the beginning of the sentence to indicate the start and a second tag at the end to indicate the end, yielding Chinese-type text elements.
- Optionally, the second answer generation module 430 can also be used to train the decoder sub-model with the stochastic gradient descent algorithm, specifically: calculating the cross-entropy loss of the visual question answering model and minimizing it with the stochastic gradient descent algorithm;
- the cross-entropy loss of the visual question answering model is the sum of the cross-entropy function of the encoder sub-model and the cross-entropy loss of the decoder sub-model, calculated as $L = L_1 + L_2$,
- where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy function of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model;
- $K$ is the number of samples, $M_i$ is the prediction probability vector of the $i$-th sample, $Y_i$ is the one-hot code corresponding to the $i$-th sample, and $l$ denotes the $l$-th dimension of a vector;
- $N$ is the number of characters in the output answer, $M_{ij}$ is the prediction probability vector of the $j$-th character of the $i$-th sample's output answer, and $Y_{ij}$ is the one-hot code corresponding to the $j$-th character of the $i$-th sample's output answer.
- An embodiment of the present application also provides a computer-readable storage medium.
- The computer-readable storage medium may be non-volatile or volatile and stores a computer program; when the computer program is executed by a processor, the following steps are implemented: building a visual question answering model using the pre-trained language model T5 framework, where the visual question answering model includes an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting the image data and question data into the visual question answering model, and using the encoder sub-model to match within the preset classification categories to obtain the classified first answer corresponding to the question data;
- using the decoder sub-model combined with the common-word vocabulary to obtain the generated second answer corresponding to the question data; and calculating the prediction probabilities of the first and second answers respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
- An embodiment of the present application also provides a computer device, whose physical structure is shown in Fig. 5. The computer device may include a communication bus, a processor, a memory, and a communication interface, and may also include an input/output interface and a display device; the functional units communicate with each other through the bus.
- The memory stores a computer program.
- The processor executes the program stored in the memory and, when doing so, implements the following steps: building a visual question answering model using the pre-trained language model T5 framework, where the model includes an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting them into the visual question answering model, and using the encoder sub-model to match within the preset classification categories to obtain the classified first answer; using the decoder sub-model combined with the common-word vocabulary to obtain the generated second answer; and calculating the prediction probabilities of the two answers respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
- The functional units in the embodiments of the present application may be physically independent of each other, two or more functional units may be integrated together, or all functional units may be integrated into one processing unit.
- The integrated functional units may be implemented in hardware, or in software or firmware.
- When an integrated functional unit is implemented in software and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- On this understanding, the essence of the technical solution of the present application, or all or part of it, can be embodied as a software product stored in a storage medium; the product includes several instructions that cause a computing device (for example, a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs.
- Alternatively, all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions (a computing device such as a personal computer, a server, or a network device); the program instructions may be stored in a computer-readable memory, and when they are executed by the computing device's processor, the computing device performs all or part of the steps of the methods described in the embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Image Analysis (AREA)
Abstract
A visual question answering method based on a deep learning model, and an apparatus, medium, and device, wherein the method includes: building a visual question answering model using the pre-trained language model T5 framework, the visual question answering model including an encoder sub-model and a decoder sub-model (S101); acquiring image data and question data, inputting them into the visual question answering model, and using the encoder sub-model in the visual question answering model to match within preset classification categories to obtain a classified first answer corresponding to the question data (S102); using the decoder sub-model in the visual question answering model together with a common-word vocabulary to obtain a generated second answer (S103); and calculating the prediction probabilities corresponding to the first answer and the second answer, so as to select the first answer and/or the second answer as the target answer and output it (S104). With this method, the final answer of visual question answering can either be matched from common categories or generated automatically, and the output answer is selected according to the prediction probabilities, improving result accuracy.
Description
This application claims priority to Chinese patent application No. 202110980645.6, filed with the Chinese Patent Office on August 25, 2021 and entitled "基于深度学习模型的视觉问答方法及装置、介质、设备" (Visual question answering method and apparatus, medium, and device based on a deep learning model), the entire contents of which are incorporated herein by reference.
This application relates to the field of artificial intelligence, and in particular to a visual question answering method and apparatus, medium, and device based on a deep learning model.
Visual Question Answering (VQA) is a popular field that requires understanding both text and vision. Models built by computer algorithms need a certain reasoning ability, which places higher demands on them than traditional computer vision tasks. As a system capable of answering natural language questions about images, visual question answering has long been pursued by cutting-edge research and industrial engineering.
The inventors realized that current mainstream visual question answering models are mainly discriminative models, i.e., classification models that predict the answer category with a classifier common in deep learning, which has the following defects. The categories are predefined, and the answer the model gives cannot go beyond them, which hurts the accuracy of the final result; this type of error is caused by the model's design: the model may have learned the required information, but the final category set restricts it. Moreover, the number of predefined categories is huge, at least larger than the common 3K, and some large Internet companies even set hundreds of thousands of categories; at such a scale the model learns common categories thoroughly while uncommon categories are rarely learned, making their predictions inaccurate and seriously harming both training efficiency and the ease of deploying the service later.
Summary of the invention
In view of the above problems, this application proposes a visual question answering method and apparatus, medium, and device based on a deep learning model. Besides being able to predict from common categories, the visual question answering method provided by this application can also generate the required answers itself, and the model can decide, according to the scores, whether the final answer is matched from common categories or generated automatically, realizing automated generative visual question answering and improving the accuracy of visual question answering results.
According to a first aspect of this application, a visual question answering method based on a deep learning model is provided, including:
building a visual question answering model using the pre-trained language model T5 framework, wherein the visual question answering model includes an encoder sub-model and a decoder sub-model;
acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and using the encoder sub-model in the visual question answering model to match within preset classification categories to obtain the classified first answer corresponding to the question data;
using the decoder sub-model in the visual question answering model, combined with a common-word vocabulary, to obtain the generated second answer corresponding to the question data;
calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
According to a second aspect of this application, a visual question answering device based on a deep learning model is provided, including:
a visual question answering model building module, used to build a visual question answering model using the pre-trained language model T5 framework, wherein the visual question answering model includes an encoder sub-model and a decoder sub-model;
a first answer matching module, used to acquire image data and question data, input the image data and the question data into the visual question answering model, and use the encoder sub-model in the visual question answering model to match within preset classification categories to obtain the classified first answer corresponding to the question data;
a second answer generation module, used to use the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data;
a target answer output module, used to calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data.
According to a third aspect of this application, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the visual question answering method based on a deep learning model is implemented, including:
building a visual question answering model using the pre-trained language model T5 framework, wherein the visual question answering model includes an encoder sub-model and a decoder sub-model;
acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and using the encoder sub-model in the visual question answering model to match within preset classification categories to obtain the classified first answer corresponding to the question data;
using the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data;
calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
According to a fourth aspect of this application, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the visual question answering method based on a deep learning model is implemented, including:
building a visual question answering model using the pre-trained language model T5 framework, wherein the visual question answering model includes an encoder sub-model and a decoder sub-model;
acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and using the encoder sub-model in the visual question answering model to match within preset classification categories to obtain the classified first answer corresponding to the question data;
using the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data;
calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
The visual question answering method provided by this application can either predict through classification categories or generate answers automatically; the two prediction methods are evaluated according to their prediction probabilities, and the final answer can be output to suit actual needs, realizing automated and flexible generative visual question answering and greatly improving the accuracy of visual question answering results.
The above description is only an overview of the technical solution of this application. In order to understand the technical means of this application more clearly, it can be implemented according to the content of the specification; and to make the above and other objects, features, and advantages of this application more obvious and understandable, specific embodiments of this application are set forth below.
From the following detailed description of specific embodiments of this application in conjunction with the accompanying drawings, those skilled in the art will better understand the above and other objects, advantages, and features of this application.
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of this application. The same reference symbols denote the same components throughout the drawings. In the drawings:
Fig. 1 shows a schematic flow diagram of the visual question answering method based on a deep learning model provided by an embodiment of the present application;
Fig. 2 shows a schematic diagram of the prediction process of the encoder sub-model provided by an embodiment of the present application;
Fig. 3 shows a brief schematic diagram of vector concatenation provided by an embodiment of the present application;
Fig. 4 shows a schematic structural diagram of the visual question answering device based on a deep learning model provided by an embodiment of the present application;
Fig. 5 shows a schematic diagram of the physical structure of a computer device provided by an embodiment of the present application.
Exemplary embodiments of this application are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of this application, it should be understood that this application can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that this application can be understood more thoroughly and its scope conveyed completely to those skilled in the art.
The embodiments of this application may acquire and process relevant data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly covers computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
An embodiment of this application provides a visual question answering method based on a deep learning model. As shown in Fig. 1, the method may include at least the following steps S101 to S104.
Step S101: Build a visual question answering model using the pre-trained language model T5 framework.
The visual question answering model includes an encoder sub-model and a decoder sub-model, which are used to generate target answers from the input image data and question data.
The pre-training framework selected in this embodiment is the T5 (Text-to-Text Transfer Transformer) model proposed by Google on the basis of deep learning network technology. Built on the basic Transformer (deep self-attention network) architecture, it is a sequence-to-sequence model comprising two modules, an encoder sub-model and a decoder sub-model, both constructed with the multi-layer Transformer (multi-layer deep self-attention network) included in the T5 model.
The T5 model provides a general framework for the entire Natural Language Processing (NLP) pre-training field: all NLP tasks are converted into a text-to-text form, so the same model, the same loss function, the same training procedure, and the same decoding procedure complete every NLP task. In the visual question answering process of this application, the visual question answering model built on the T5 framework includes an encoder sub-model and a decoder sub-model, which respectively generate answers from the preset classification categories and generate visual question answering answers automatically from a vocabulary of common words. Two different visual question answering results are produced by the two different methods, and one can be selected as the final output answer according to actual needs.
Step S102: Acquire image data and question data, input them into the visual question answering model, and use the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data.
Visual question answering is natural language question answering over visual images: it connects images and language through visual understanding and answers specific questions on the basis of understanding the image. The image data and question data acquired in this application are the picture data and corresponding question data of the visual question answering task to be processed.
The encoder sub-model built in this application is based on the multi-layer Transformer (multi-layer deep self-attention network) architecture and can match, within the preset classification categories, the classified first answer corresponding to the question data; the first answer is an answer contained in the preset classification categories.
Further, using the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data may include the following steps S102-1 to S102-4.
Step S102-1: Input the image data into the deep-learning-based object detection model Faster R-CNN, extract the image features and image feature categories of the image, and convert the image features and image feature categories into image feature vectors with a first vector dimension and image feature category vectors with a second vector dimension.
The Faster R-CNN model is an object detection model based on deep information. For each input picture, the open-source Faster R-CNN model can extract the corresponding image features and image feature categories; for example, if an image contains a cat and a dog, two different image features (the cat region and the dog region) and two different image feature categories, "cat" and "dog", can be extracted from it. The Faster R-CNN model also outputs the image feature vector corresponding to each image feature, with a dimension of 2048, and the image feature category vector corresponding to each image feature category, with a dimension of 300.
Step S102-2: Perform word segmentation on the question data to obtain text elements, and convert the text elements into text element vectors with the second vector dimension using a preset word vector model.
Optionally, performing word segmentation on the question to obtain text elements may first determine whether the question data is in English or Chinese: if the question data is in English, spaces are used for word segmentation, yielding English-type text elements; if the question data is in Chinese, a word segmentation model is used, a first tag is added at the beginning of the sentence to indicate the start and a second tag at the end of the sentence to indicate the end, yielding Chinese-type text elements.
In this embodiment, when the question data is in Chinese, the text is segmented with the jieba word segmentation model, a method that can cut a sentence most precisely and scan out all the words that can be formed in it, suiting word segmentation for text analysis. The first tag added at the beginning of the sentence can be the generic [CLS] tag, and the second tag added at the end can be the generic [SEP] tag, which also splits the Chinese text into sentences. After this text processing, each segmented word and each added tag can serve as a text element. GloVe word vectors are a method that uses a language model, such as a neural network language model, to capture the grammatical and semantic information of words and represent word text as word vectors. In this application, each text element obtained from word segmentation can be converted into a 300-dimensional text element vector through a pre-trained GloVe word vector model.
Step S102-3: Input the image feature vectors, text element vectors, and image feature category vectors into the encoder sub-model, and use the encoder sub-model to concatenate them to obtain a vector matrix.
Optionally, the encoder sub-model concatenates the image feature vectors, text element vectors, and image feature category vectors by vector dimension as follows: through fully connected layers in the encoder sub-model, the image feature vectors with the first vector dimension and the text element vectors and image feature category vectors with the second vector dimension are converted into image feature vectors, text element vectors, and image feature category vectors that share the same third vector dimension; the image feature category vectors, image feature vectors, and text element vectors with the third vector dimension are then concatenated in a preset order, in which the image feature category vectors and image feature vectors correspond to each other position by position.
The encoder sub-model in this embodiment adopts the multi-layer Transformer (multi-layer deep self-attention network) architecture and converts the image feature vectors, text element vectors, and image feature category vectors to the same dimension through fully connected layers. For example, the text element vectors and image feature category vectors are 300-dimensional and the image feature vectors are 2048-dimensional; fully connected layers convert them all to the same 1024 dimensions for the subsequent concatenation, and the feature category labels are appended after the previously concatenated text vector. As shown in Fig. 2, the vectors can be concatenated in the order of text element vectors, picture feature category vectors, and picture feature vectors. For example, positions 1-4 are the vectors of the question text, positions 5-11 are the picture feature category vectors, and positions 12-18 are the picture feature vectors; the picture feature category vectors correspond one-to-one to the picture feature vectors, i.e., the 5th picture feature category corresponds to the 12th picture feature vector, the 6th to the 13th, and so on.
Step S102-4: Select the last target vector in the last layer of the vector matrix, convert the target vector to the target dimension through a fully connected layer of the encoder sub-model, obtain the prediction probability of each preset classification category through a softmax operation, and select the category with the highest prediction probability as the classified first answer; the target dimension is the preset number of classification categories, a natural number greater than 1.
The number of classification categories in this embodiment can be customized according to the actual situation; for example, it may be set to 3000. The last target vector of the last layer, which carries the most feature weight in the vector matrix, is selected and transformed into a 3000-dimensional target vector; the softmax operation mathematically normalizes the preset classification categories, mapping them to real numbers between 0 and 1 that sum to 1, and predicts probabilities over the 3000 categories, whose sum is exactly 1. The category with the highest prediction probability corresponds to the smallest cross-entropy loss and is the first answer output by the encoder. Predicting the visual answer with the encoder sub-model and matching the predicted answer with the largest possible prediction probability as the first answer improves the accuracy of the visual question answering results.
Step S103: Use the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data.
Like the encoder sub-model, the decoder sub-model built in this application is based on the multi-layer Transformer (multi-layer deep self-attention network) architecture; it automatically generates the second answer from the input image data and question data using the common-word vocabulary, and the second answer can be any of the many categories constructed by combining entries of the common-word vocabulary.
The decoder sub-model is a model pre-trained with the stochastic gradient descent algorithm on the deep-learning neural network pytorch framework.
In this embodiment, the decoder sub-model can be trained with the stochastic gradient descent algorithm through the pytorch framework. The pytorch framework can be understood as a deep learning programming environment, and stochastic gradient descent (SGD) is a very common optimization algorithm in neural network model training. Derived from the gradient descent algorithm, it serves as a parameter update strategy that updates the decoder sub-model's parameters better and faster, producing a model that meets the required performance.
Optionally, training the decoder sub-model with the stochastic gradient descent algorithm may specifically include:
calculating the cross-entropy loss of the visual question answering model and minimizing it with the stochastic gradient descent algorithm, where the cross-entropy loss of the visual question answering model is the sum of the encoder sub-model's cross-entropy function and the decoder sub-model's cross-entropy loss:
$L = L_1 + L_2$
where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy function of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model; $K$ is the number of samples, $M_i$ is the prediction probability vector of the $i$-th sample, $Y_i$ is the one-hot code corresponding to the $i$-th sample, and $l$ denotes the $l$-th dimension of a vector; $N$ is the number of characters of the output answer, $M_{ij}$ is the prediction probability vector of the $j$-th character of the $i$-th sample's output answer, and $Y_{ij}$ is the one-hot code corresponding to that character. Consistent with these definitions (the text itself states only the sum), the two terms take the standard cross-entropy form $L_1 = -\frac{1}{K}\sum_{i=1}^{K}\sum_{l} Y_{i,l}\log M_{i,l}$ and $L_2 = -\frac{1}{K}\sum_{i=1}^{K}\sum_{j=1}^{N}\sum_{l} Y_{ij,l}\log M_{ij,l}$.
In this application, the encoder's classification-category loss and the decoder's loss are fused into the loss function of the visual question answering model; when the loss function reaches its minimum, the decoder sub-model with optimized parameters is obtained.
Further, the target vector from the encoder sub-model is input into the trained decoder sub-model, which, combined with the common-word vocabulary, generates the output answer corresponding to the target vector as the second answer.
The decoder sub-model receives the target vector from the encoder sub-model and predicts round by round, according to the number of characters of the output answer, until the final answer is obtained. Taking a final answer of three characters as an example, the process, shown in Fig. 3, is:
in the first round, the target vector h is input, and the model predicts the first character y1 output at the position corresponding to h;
in the second round, the model inputs [h, y1] and predicts the second character y2 output at the position corresponding to y1;
in the third round, the model inputs [h, y1, y2] and predicts the third character y3 output at the position corresponding to y2;
in the fourth round, the model inputs [h, y1, y2, y3] and predicts the "end" character output at the position corresponding to y3.
Once the model outputs the "end" character, prediction terminates and [y1, y2, y3] is the output result, serving as the second answer. For example, if the final answer is "加菲猫" (Garfield), the first character output at the position of h in the first round is "加"; in the second round the model inputs [h, 加] and the second character output at the position of y1 is "菲"; in the third round the model inputs [h, 加, 菲] and the third character output at the position of y2 is "猫"; in the fourth round the model inputs [h, 加, 菲, 猫] and the output at the position of y3 is the "end" character. [加菲猫] is obtained as the final output second answer.
The target vector h is the last vector of the last layer, which carries the most feature weight in the vector matrix. The decoder sub-model uses common characters as its vocabulary, roughly 8K of them; through multi-round prediction it can construct an unbounded number of categories and generate answers automatically, without being limited to the preset classification categories. The visual question answering model proposed in this embodiment can generate the required categories automatically, overcoming the limitation of matching within a finite set of classification categories and further improving the accuracy of visual question answering results.
Step S104: Calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
Specifically, the softmax function may be used to calculate the first prediction probability corresponding to the first answer and the second prediction probability corresponding to the second answer.
The softmax algorithm is generally used in multi-class scenarios: it maps the neurons' outputs to real numbers in (0, 1) and normalizes them so that they sum to 1, so the multi-class prediction probabilities sum exactly to 1; the output after softmax is the prediction probability of each category, and the probabilities sum to 1. Softmax computes, for each element, the ratio of its exponential to the sum of the exponentials of all elements.
Optionally, selecting and outputting the first answer and/or the second answer as the target answer corresponding to the question data may proceed as follows:
compare the first prediction probability with the second prediction probability; if the first is greater, output the first answer as the target answer; if the first is smaller, output the second answer as the target answer; if they are equal, output both the first answer and the second answer as the target answer; or
calculate the difference between the first prediction probability and the second prediction probability; if the difference is greater than or equal to a preset value, compare the two probabilities and output the predicted answer with the larger probability as the target answer; if the difference is smaller than the preset value, take the first answer and the second answer as the target answer and output the first answer with its first prediction probability and the second answer with its second prediction probability simultaneously, or directly output the first answer or the second answer as the target answer.
The predicted answers finally generated in this embodiment include the classified first answer matched by the classification-category model and the generated second answer output automatically by the generative model. The larger the prediction probability, the closer the predicted answer is to the true value, so the predicted answer with the higher probability can be taken as the final target answer and displayed. When the two prediction probabilities are equal, the first and second answers can be output together for reference. A preset difference can also be set: if the gap between the two prediction probabilities is at least the preset difference, the answer with the higher probability is more convincingly close to the true value; if the gap is smaller than the preset difference, the two answers' probabilities differ little, and either or both can be output as the final target answer for reference.
The visual question answering method based on a deep learning model provided by this embodiment uses the T5 framework to build a visual question answering model comprising an encoder sub-model and a decoder sub-model, receives input image data and question data, uses the encoder sub-model to match the image data and question data within the preset classification categories to obtain the classified first answer, uses the decoder sub-model with the common-word vocabulary to output the generated second answer automatically from the image data and question data, and selects the first answer and/or the second answer as the target answer of the visual question answering model. The method provided by this application can both predict by classification category and generate answers automatically; visual question answering can be completed without setting a large number of classification categories, which to a certain extent removes the inaccuracy of predicting uncommon categories caused by an oversized category set. The two prediction methods can also be evaluated by prediction probability, so the final answer can be output according to actual needs, realizing automated and flexible generative visual question answering, breaking through the limitation that a traditional model's answers cannot exceed the preset classification categories, and further improving the accuracy of the final result.
Further, as a concrete implementation of Fig. 1, an embodiment of this application provides a visual question answering device based on a deep learning model. As shown in Fig. 4, the device may include a visual question answering model building module 410, a first answer matching module 420, a second answer generation module 430, and a target answer output module 440.
The visual question answering model building module 410 can be used to build a visual question answering model using the pre-trained language model T5 framework, where the visual question answering model includes an encoder sub-model and a decoder sub-model.
The first answer matching module 420 can be used to acquire image data and question data, input them into the visual question answering model, and use the encoder sub-model in the model to match within the preset classification categories to obtain the classified first answer corresponding to the question data.
The second answer generation module 430 can be used to use the decoder sub-model in the visual question answering model, combined with the common-word vocabulary, to obtain the generated second answer corresponding to the question data.
The target answer output module 440 can be used to calculate the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select and output the first answer and/or the second answer as the target answer corresponding to the question data.
Optionally, the first answer matching module 420 can also be used to input the image data into the deep-learning-based object detection model Faster R-CNN, extract the image features and image feature categories of the image, and convert them into image feature vectors with the first vector dimension and image feature category vectors with the second vector dimension;
perform word segmentation on the question data to obtain text elements, and convert the text elements into text element vectors with the second vector dimension using the preset word vector model;
input the image feature vectors, text element vectors, and image feature category vectors into the encoder sub-model, and use the encoder sub-model to concatenate them to obtain a vector matrix;
select the last target vector in the last layer of the vector matrix, convert the target vector to the target dimension through the fully connected layer of the encoder sub-model, obtain the prediction probability of each preset classification category through the softmax operation, and select the category with the highest prediction probability as the classified first answer, where the target dimension is the preset number of classification categories, a natural number greater than 1.
Optionally, the second answer generation module 430 can also be used to input the target vector from the encoder sub-model into the trained decoder sub-model and, combined with the common-word vocabulary, generate the output answer corresponding to the target vector as the second answer;
the decoder sub-model is a model pre-trained with the stochastic gradient descent algorithm on the deep-learning neural network pytorch framework.
Optionally, the target answer output module 440 can also be used to calculate, with the softmax function, the first prediction probability corresponding to the first answer and the second prediction probability corresponding to the second answer;
compare the first prediction probability with the second prediction probability: if the first is greater, output the first answer as the target answer; if the first is smaller, output the second answer as the target answer; if they are equal, output both the first answer and the second answer as the target answer simultaneously; or
calculate the difference between the first prediction probability and the second prediction probability: if the difference is greater than or equal to the preset value, compare the two probabilities and output the predicted answer with the larger probability as the target answer; if the difference is smaller than the preset value, take the first answer and the second answer as the target answer and output both answers with their corresponding prediction probabilities simultaneously; or output the first answer as the target answer; or output the second answer as the target answer.
Optionally, the first answer matching module 420 can also be used to convert, through fully connected layers in the encoder sub-model, the image feature vectors with the first vector dimension and the text element vectors and image feature category vectors with the second vector dimension into image feature vectors, text element vectors, and image feature category vectors having the same third vector dimension;
and to concatenate the image feature category vectors, image feature vectors, and text element vectors with the third vector dimension in the preset concatenation order, in which the image feature category vectors and image feature vectors with the third vector dimension correspond to each other according to the concatenation order.
Optionally, the first answer matching module 420 can also be used to determine whether the question data is in English or Chinese;
if the question data is in English, spaces are used for word segmentation, yielding English-type text elements;
if the question data is in Chinese, a word segmentation model is used, a first tag is added at the beginning of the sentence to indicate the start and a second tag at the end to indicate the end, yielding Chinese-type text elements.
Optionally, the second answer generation module 430 can also be used to train the decoder sub-model with the stochastic gradient descent algorithm, specifically: calculating the cross-entropy loss of the visual question answering model and minimizing it with the stochastic gradient descent algorithm;
the cross-entropy loss of the visual question answering model is the sum of the cross-entropy function of the encoder sub-model and the cross-entropy loss of the decoder sub-model, calculated as
$L = L_1 + L_2$
where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy function of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model; $K$ is the number of samples, $M_i$ is the prediction probability vector of the $i$-th sample, $Y_i$ is the one-hot code corresponding to the $i$-th sample, and $l$ denotes the $l$-th dimension of a vector; $N$ is the number of characters of the output answer, $M_{ij}$ is the prediction probability vector of the $j$-th character of the $i$-th sample's output answer, and $Y_{ij}$ is the one-hot code corresponding to the $j$-th character of the $i$-th sample's output answer.
It should be noted that, for other corresponding descriptions of the functional modules involved in the visual question answering device based on a deep learning model provided by this embodiment, reference may be made to the corresponding description of the method shown in Fig. 1, which is not repeated here.
Based on the method shown in Fig. 1, an embodiment of this application correspondingly provides a computer-readable storage medium, which may be non-volatile or volatile and on which a computer program is stored. When executed by a processor, the computer program implements the following steps: building a visual question answering model using the pre-trained language model T5 framework, where the visual question answering model includes an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting the image data and question data into the visual question answering model, and using the encoder sub-model to match within the preset classification categories to obtain the classified first answer corresponding to the question data; using the decoder sub-model combined with the common-word vocabulary to obtain the generated second answer corresponding to the question data; and calculating the prediction probabilities of the first and second answers respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
Based on the embodiments of the method shown in Fig. 1 and the device shown in Fig. 4, an embodiment of this application also provides a computer device whose physical structure is shown in Fig. 5. The computer device may include a communication bus, a processor, a memory, and a communication interface, and may also include an input/output interface and a display device, with the functional units communicating with each other through the bus. The memory stores a computer program, and the processor executes the program stored in the memory; when the program is executed, the following steps are implemented: building a visual question answering model using the pre-trained language model T5 framework, where the model includes an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting them into the visual question answering model, and using the encoder sub-model to match within the preset classification categories to obtain the classified first answer; using the decoder sub-model combined with the common-word vocabulary to obtain the generated second answer; and calculating the prediction probabilities of the first and second answers respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it.
Those skilled in the art can clearly understand that, for the specific working processes of the systems, devices, modules, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; for brevity, they are not repeated here.
In addition, the functional units in the embodiments of this application may be physically independent of each other, two or more functional units may be integrated together, or all functional units may be integrated into one processing unit. The integrated functional units may be implemented in hardware, or in software or firmware.
Those of ordinary skill in the art can understand that, if an integrated functional unit is implemented in software and sold or used as an independent product, it may be stored in a computer-readable storage medium. On this understanding, the essence of the technical solution of this application, or all or part of it, can be embodied as a software product stored in a storage medium; the product includes several instructions that cause a computing device (for example, a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs.
Alternatively, all or part of the steps of the foregoing method embodiments may be completed by hardware related to program instructions (a computing device such as a personal computer, a server, or a network device); the program instructions may be stored in a computer-readable storage medium, and when the processor of the computing device executes them, the computing device performs all or part of the steps of the methods described in the embodiments of this application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some or all of the technical features, and such modifications or replacements do not take the corresponding technical solutions out of the protection scope of this application.
Claims (20)
- A visual question answering method based on a deep learning model, comprising: building a visual question answering model using the pre-trained language model T5 framework, wherein the visual question answering model comprises an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and using the encoder sub-model in the visual question answering model to match within preset classification categories to obtain a classified first answer corresponding to the question data; using the decoder sub-model in the visual question answering model in combination with a common-word vocabulary to obtain a generated second answer corresponding to the question data; and calculating prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as a target answer corresponding to the question data and output it.
- The method according to claim 1, wherein using the encoder sub-model in the visual question answering model to match within the preset classification categories to obtain the classified first answer corresponding to the question data comprises: inputting the image data into the deep-learning-based object detection Faster R-CNN model, extracting the image features and image feature categories corresponding to the image, and converting the image features and the image feature categories into image feature vectors with a first vector dimension and image feature category vectors with a second vector dimension; performing word segmentation on the question data to obtain text elements, and converting the text elements into text element vectors with the second vector dimension using a preset word vector model; inputting the image feature vectors, the text element vectors, and the image feature category vectors into the encoder sub-model, and using the encoder sub-model to concatenate the image feature vectors, the text element vectors, and the image feature category vectors to obtain a vector matrix; and selecting the last target vector in the last layer of the vector matrix, converting the target vector to a target dimension through a fully connected layer of the encoder sub-model, obtaining the prediction probability of each category among the preset classification categories through a softmax operation, and selecting the category with the highest prediction probability as the classified first answer, wherein the target dimension is the preset number of classification categories, a natural number greater than 1.
- The method according to claim 2, wherein using the decoder sub-model in the visual question answering model in combination with the common-word vocabulary to obtain the generated second answer corresponding to the question data comprises: inputting the target vector from the encoder sub-model into the decoder sub-model, and generating, in combination with the common-word vocabulary, the output answer corresponding to the target vector as the generated second answer; wherein the decoder sub-model is a model pre-trained with the stochastic gradient descent algorithm on the deep-learning neural network pytorch framework.
- The method according to claim 1, wherein calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it, comprises: calculating, with the softmax function, a first prediction probability corresponding to the first answer and a second prediction probability corresponding to the second answer; comparing the first prediction probability with the second prediction probability: if the first prediction probability is greater than the second prediction probability, outputting the first answer as the target answer; if the first prediction probability is smaller than the second prediction probability, outputting the second answer as the target answer; and if the two are equal, outputting both the first answer and the second answer as the target answer simultaneously; or calculating the difference between the first prediction probability and the second prediction probability: if the difference is greater than or equal to a preset value, comparing the two prediction probabilities and outputting the predicted answer with the larger probability as the target answer; if the difference is smaller than the preset value, taking the first answer and the second answer as the target answer and simultaneously outputting the first answer with its first prediction probability and the second answer with its second prediction probability; or outputting the first answer as the target answer; or outputting the second answer as the target answer.
- The method according to claim 2, wherein using the encoder sub-model to concatenate the image feature vectors, the text element vectors, and the image feature category vectors comprises: converting, through fully connected layers in the encoder sub-model, the image feature vectors with the first vector dimension and the text element vectors and image feature category vectors with the second vector dimension into image feature vectors, text element vectors, and image feature category vectors having the same third vector dimension; and concatenating the image feature category vectors, the image feature vectors, and the text element vectors with the third vector dimension in a preset concatenation order, wherein the image feature category vectors and the image feature vectors with the third vector dimension correspond to each other according to the concatenation order.
- The method according to claim 2, wherein performing word segmentation on the question data to obtain text elements comprises: determining whether the question data is in English or Chinese; if the question data is in English, using spaces for word segmentation to obtain English-type text elements; and if the question data is in Chinese, using a word segmentation model for word segmentation, adding a first tag at the beginning of the sentence to indicate the start and a second tag at the end of the sentence to indicate the end, to obtain Chinese-type text elements.
- The method according to claim 3, further comprising: training the decoder sub-model with the stochastic gradient descent algorithm, specifically: calculating the cross-entropy loss of the visual question answering model and minimizing the cross-entropy loss with the stochastic gradient descent algorithm, wherein the cross-entropy loss of the visual question answering model is the sum of the cross-entropy function of the encoder sub-model and the cross-entropy loss of the decoder sub-model, calculated as $L = L_1 + L_2$, where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy function of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model; $K$ is the number of samples, $M_i$ is the prediction probability vector of the $i$-th sample, $Y_i$ is the one-hot code corresponding to the $i$-th sample, and $l$ denotes the $l$-th dimension of a vector; $N$ is the number of characters of the output answer, $M_{ij}$ is the prediction probability vector of the $j$-th character of the output answer of the $i$-th sample, and $Y_{ij}$ is the one-hot code corresponding to the $j$-th character of the output answer of the $i$-th sample.
- A visual question answering apparatus based on a deep learning model, comprising: a visual question answering model building module, configured to build a visual question answering model on the pre-trained language model T5 architecture, wherein the visual question answering model comprises an encoder sub-model and a decoder sub-model; a first answer matching module, configured to acquire image data and question data, input the image data and the question data into the visual question answering model, and match, with the encoder sub-model of the visual question answering model, a classification-type first answer corresponding to the question data from preset classification categories; a second answer generation module, configured to obtain, with the decoder sub-model of the visual question answering model in combination with a common-word vocabulary, a generative second answer corresponding to the question data; and a target answer output module, configured to calculate prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as a target answer corresponding to the question data and output it.
- A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, a visual question answering method based on a deep learning model is implemented, comprising: building a visual question answering model on the pre-trained language model T5 architecture, wherein the visual question answering model comprises an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and matching, with the encoder sub-model of the visual question answering model, a classification-type first answer corresponding to the question data from preset classification categories; obtaining, with the decoder sub-model of the visual question answering model in combination with a common-word vocabulary, a generative second answer corresponding to the question data; and calculating prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as a target answer corresponding to the question data and output it.
- The computer-readable storage medium according to claim 9, wherein when the computer program is executed by the processor, matching, with the encoder sub-model of the visual question answering model, the classification-type first answer corresponding to the question data from the preset classification categories comprises: inputting the image data into a deep-learning-based Faster R-CNN object detection model, extracting the image features and image feature categories corresponding to the image, and converting the image features and the image feature categories into image feature vectors having a first vector dimension and image feature category vectors having a second vector dimension; performing text segmentation on the question data to obtain text elements, and converting the text elements into text element vectors having the second vector dimension with a preset word vector model; inputting the image feature vectors, the text element vectors and the image feature category vectors into the encoder sub-model, and concatenating the image feature vectors, the text element vectors and the image feature category vectors with the encoder sub-model to obtain a vector matrix; and selecting the last target vector of the last layer of the vector matrix, converting the target vector to a target dimension through a fully connected layer of the encoder sub-model, obtaining the prediction probability of each of the preset classification categories through a softmax operation, and selecting the category with the largest prediction probability as the classification-type first answer; wherein the target dimension is the preset number of classification categories, a natural number greater than 1.
- The computer-readable storage medium according to claim 10, wherein when the computer program is executed by the processor, obtaining, with the decoder sub-model of the visual question answering model in combination with the common-word vocabulary, the generative second answer corresponding to the question data comprises: inputting the target vector from the encoder sub-model into the decoder sub-model, and generating, in combination with the common-word vocabulary, an output answer corresponding to the target vector as the generative second answer; wherein the decoder sub-model is a model trained in advance with the stochastic gradient descent algorithm on the deep-learning neural network framework PyTorch.
- The computer-readable storage medium according to claim 9, wherein when the computer program is executed by the processor, calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it, comprises: calculating, with the softmax function, a first prediction probability corresponding to the first answer and a second prediction probability corresponding to the second answer respectively; and comparing the first prediction probability with the second prediction probability: if the first prediction probability is greater than the second prediction probability, outputting the first answer as the target answer; if the first prediction probability is less than the second prediction probability, outputting the second answer as the target answer; and if the first prediction probability equals the second prediction probability, outputting both the first answer and the second answer as target answers; or, calculating the difference between the first prediction probability and the second prediction probability: if the difference is greater than or equal to a preset value, comparing the two prediction probabilities and outputting the answer with the larger prediction probability as the target answer; and if the difference is less than the preset value, taking both the first answer and the second answer as target answers and simultaneously outputting the first answer with its first prediction probability and the second answer with its second prediction probability; or, outputting the first answer as the target answer; or, outputting the second answer as the target answer.
- The computer-readable storage medium according to claim 10, wherein when the computer program is executed by the processor, concatenating the image feature vectors, the text element vectors and the image feature category vectors with the encoder sub-model comprises: converting, through fully connected layers of the encoder sub-model, the image feature vectors having the first vector dimension and the text element vectors and image feature category vectors having the second vector dimension into image feature vectors, text element vectors and image feature category vectors that all have the same third vector dimension; and concatenating the image feature category vectors, the image feature vectors and the text element vectors having the third vector dimension in a preset concatenation order, wherein the image feature category vectors and the image feature vectors having the third vector dimension correspond to each other in the concatenation order.
- The computer-readable storage medium according to claim 11, wherein when the computer program is executed by the processor, the visual question answering method based on the deep learning model further comprises: training the decoder sub-model with the stochastic gradient descent algorithm, which specifically comprises: calculating the cross-entropy loss of the visual question answering model, and minimizing the cross-entropy loss with the stochastic gradient descent algorithm; wherein the cross-entropy loss of the visual question answering model is the sum of the cross-entropy loss of the encoder sub-model and the cross-entropy loss of the decoder sub-model, computed as $L = L_1 + L_2$ with $L_1 = -\sum_{i=1}^{K}\sum_{l} Y_{i,l}\log M_{i,l}$ and $L_2 = -\sum_{i=1}^{K}\sum_{j=1}^{N}\sum_{l} Y_{ij,l}\log M_{ij,l}$, where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy loss of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model; $K$ is the number of samples, $M_i$ is the predicted probability vector of the $i$-th sample, $Y_i$ is the one-hot encoding corresponding to the $i$-th sample, and $l$ indexes the $l$-th dimension of a vector; $N$ is the number of characters in the output answer, $M_{ij}$ is the predicted probability vector of the $j$-th character of the output answer of the $i$-th sample, and $Y_{ij}$ is the one-hot encoding of the $j$-th character of the output answer of the $i$-th sample.
- A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, a visual question answering method based on a deep learning model is implemented, comprising: building a visual question answering model on the pre-trained language model T5 architecture, wherein the visual question answering model comprises an encoder sub-model and a decoder sub-model; acquiring image data and question data; inputting the image data and the question data into the visual question answering model, and matching, with the encoder sub-model of the visual question answering model, a classification-type first answer corresponding to the question data from preset classification categories; obtaining, with the decoder sub-model of the visual question answering model in combination with a common-word vocabulary, a generative second answer corresponding to the question data; and calculating prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as a target answer corresponding to the question data and output it.
- The computer device according to claim 15, wherein when the processor executes the computer program, matching, with the encoder sub-model of the visual question answering model, the classification-type first answer corresponding to the question data from the preset classification categories comprises: inputting the image data into a deep-learning-based Faster R-CNN object detection model, extracting the image features and image feature categories corresponding to the image, and converting the image features and the image feature categories into image feature vectors having a first vector dimension and image feature category vectors having a second vector dimension; performing text segmentation on the question data to obtain text elements, and converting the text elements into text element vectors having the second vector dimension with a preset word vector model; inputting the image feature vectors, the text element vectors and the image feature category vectors into the encoder sub-model, and concatenating the image feature vectors, the text element vectors and the image feature category vectors with the encoder sub-model to obtain a vector matrix; and selecting the last target vector of the last layer of the vector matrix, converting the target vector to a target dimension through a fully connected layer of the encoder sub-model, obtaining the prediction probability of each of the preset classification categories through a softmax operation, and selecting the category with the largest prediction probability as the classification-type first answer; wherein the target dimension is the preset number of classification categories, a natural number greater than 1.
- The computer device according to claim 16, wherein when the processor executes the computer program, obtaining, with the decoder sub-model of the visual question answering model in combination with the common-word vocabulary, the generative second answer corresponding to the question data comprises: inputting the target vector from the encoder sub-model into the decoder sub-model, and generating, in combination with the common-word vocabulary, an output answer corresponding to the target vector as the generative second answer; wherein the decoder sub-model is a model trained in advance with the stochastic gradient descent algorithm on the deep-learning neural network framework PyTorch.
- The computer device according to claim 15, wherein when the processor executes the computer program, calculating the prediction probabilities corresponding to the first answer and the second answer respectively, so as to select the first answer and/or the second answer as the target answer corresponding to the question data and output it, comprises: calculating, with the softmax function, a first prediction probability corresponding to the first answer and a second prediction probability corresponding to the second answer respectively; and comparing the first prediction probability with the second prediction probability: if the first prediction probability is greater than the second prediction probability, outputting the first answer as the target answer; if the first prediction probability is less than the second prediction probability, outputting the second answer as the target answer; and if the first prediction probability equals the second prediction probability, outputting both the first answer and the second answer as target answers; or, calculating the difference between the first prediction probability and the second prediction probability: if the difference is greater than or equal to a preset value, comparing the two prediction probabilities and outputting the answer with the larger prediction probability as the target answer; and if the difference is less than the preset value, taking both the first answer and the second answer as target answers and simultaneously outputting the first answer with its first prediction probability and the second answer with its second prediction probability; or, outputting the first answer as the target answer; or, outputting the second answer as the target answer.
- The computer device according to claim 16, wherein when the processor executes the computer program, concatenating the image feature vectors, the text element vectors and the image feature category vectors with the encoder sub-model comprises: converting, through fully connected layers of the encoder sub-model, the image feature vectors having the first vector dimension and the text element vectors and image feature category vectors having the second vector dimension into image feature vectors, text element vectors and image feature category vectors that all have the same third vector dimension; and concatenating the image feature category vectors, the image feature vectors and the text element vectors having the third vector dimension in a preset concatenation order, wherein the image feature category vectors and the image feature vectors having the third vector dimension correspond to each other in the concatenation order.
- The computer device according to claim 17, wherein when the processor executes the computer program, training the decoder sub-model with the stochastic gradient descent algorithm is implemented, which specifically comprises: calculating the cross-entropy loss of the visual question answering model, and minimizing the cross-entropy loss with the stochastic gradient descent algorithm; wherein the cross-entropy loss of the visual question answering model is the sum of the cross-entropy loss of the encoder sub-model and the cross-entropy loss of the decoder sub-model, computed as $L = L_1 + L_2$ with $L_1 = -\sum_{i=1}^{K}\sum_{l} Y_{i,l}\log M_{i,l}$ and $L_2 = -\sum_{i=1}^{K}\sum_{j=1}^{N}\sum_{l} Y_{ij,l}\log M_{ij,l}$, where $L$ is the cross-entropy loss of the visual question answering model, $L_1$ is the cross-entropy loss of the encoder sub-model, and $L_2$ is the cross-entropy loss of the decoder sub-model; $K$ is the number of samples, $M_i$ is the predicted probability vector of the $i$-th sample, $Y_i$ is the one-hot encoding corresponding to the $i$-th sample, and $l$ indexes the $l$-th dimension of a vector; $N$ is the number of characters in the output answer, $M_{ij}$ is the predicted probability vector of the $j$-th character of the output answer of the $i$-th sample, and $Y_{ij}$ is the one-hot encoding of the $j$-th character of the output answer of the $i$-th sample.
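The sketches below illustrate the individual steps recited in claims 2 to 7. All of them are expository Python/PyTorch sketches under stated assumptions, not the claimed implementation. First, the encoder-side fusion and classification of claim 2: region features and their category labels are assumed to have been extracted beforehand by a Faster R-CNN detector (for example torchvision's `fasterrcnn_resnet50_fpn`); random tensors stand in for them here, and the dimensions `d1`, `d2`, `d3` and the category count `C` are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder dimensions: d1 = Faster R-CNN feature size, d2 = word-vector
# size, d3 = encoder size, C = number of preset categories (all assumed).
d1, d2, d3, C = 2048, 300, 512, 1000

proj_img = nn.Linear(d1, d3)   # image feature vectors (first dimension)
proj_cat = nn.Linear(d2, d3)   # image feature category vectors (second dim)
proj_txt = nn.Linear(d2, d3)   # text element vectors (second dimension)

img_feats = torch.randn(36, d1)   # 36 detected regions (placeholders for
cat_vecs  = torch.randn(36, d2)   # Faster R-CNN outputs and their labels)
txt_vecs  = torch.randn(12, d2)   # word vectors of the segmented question

# Concatenate everything into one vector matrix for the encoder sub-model.
fused = torch.cat([proj_cat(cat_vecs), proj_img(img_feats),
                   proj_txt(txt_vecs)], dim=0).unsqueeze(0)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d3, nhead=8, batch_first=True), num_layers=2)
enc_out = encoder(fused)              # last layer of the vector matrix
target = enc_out[:, -1, :]            # last target vector
logits = nn.Linear(d3, C)(target)     # fully connected layer -> target dim
probs = F.softmax(logits, dim=-1)     # probability of each preset category
first_answer = probs.argmax(dim=-1)   # classification-type first answer
```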
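Next, the generative decoding of claim 3: a sketch of a decoder that conditions on the encoder's target vector and emits one entry of a common-word vocabulary per step. The vocabulary size, the BOS/EOS token ids and the greedy (rather than beam) search are assumptions made for illustration.

```python
import torch
import torch.nn as nn

VOCAB, d3 = 5000, 512                  # assumed common-word vocabulary size
emb = nn.Embedding(VOCAB, d3)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d3, nhead=8, batch_first=True), num_layers=2)
lm_head = nn.Linear(d3, VOCAB)

def generate(target_vec, bos=0, eos=1, max_len=16):
    """Greedily decode an output answer conditioned on the encoder's
    target vector; each step picks the most likely common word."""
    memory = target_vec.unsqueeze(1)           # (batch, 1, d3)
    tokens = torch.tensor([[bos]])
    for _ in range(max_len):
        h = decoder(emb(tokens), memory)
        next_tok = lm_head(h[:, -1]).argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if next_tok.item() == eos:
            break
    return tokens[0, 1:]                       # generative second answer ids

second_answer = generate(torch.randn(1, d3))
```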
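The answer-selection rules of claim 4 reduce to a small pure-Python function. Reading the claim's "difference" as an absolute difference, and representing the preset value by a `margin` parameter, are interpretive assumptions.

```python
def select_target_answer(first, p1, second, p2, margin=None):
    """Answer selection per claim 4: direct comparison of the two softmax
    probabilities, or a difference threshold (`margin`) below which both
    answers are output together with their probabilities."""
    if margin is not None:
        if abs(p1 - p2) >= margin:
            return [(first, p1)] if p1 > p2 else [(second, p2)]
        return [(first, p1), (second, p2)]    # too close to call: output both
    if p1 > p2:
        return [(first, p1)]
    if p1 < p2:
        return [(second, p2)]
    return [(first, p1), (second, p2)]        # equal: output both

print(select_target_answer("cat", 0.72, "a black cat", 0.55))
print(select_target_answer("cat", 0.52, "a black cat", 0.50, margin=0.05))
```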
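For the dimension alignment and ordered concatenation of claim 5, one possible reading is sketched below: fully connected layers project everything to a shared third dimension, and each category vector is interleaved immediately before the region feature it describes so the two sequences stay in correspondence. The interleaved order itself is an assumption; the claim only fixes that a preset order exists and that category vectors and feature vectors correspond within it.

```python
import torch
import torch.nn as nn

d1, d2, d3 = 2048, 300, 512           # placeholder first/second/third dims
to_d3_img = nn.Linear(d1, d3)
to_d3_cat = nn.Linear(d2, d3)
to_d3_txt = nn.Linear(d2, d3)

img = to_d3_img(torch.randn(3, d1))   # three region feature vectors
cat = to_d3_cat(torch.randn(3, d2))   # their three category vectors
txt = to_d3_txt(torch.randn(5, d2))   # five question-word vectors

# Assumed preset order: category k immediately before feature k, so the
# category and feature sequences stay aligned, followed by text elements.
pairs = torch.stack([cat, img], dim=1).reshape(-1, d3)   # c1,f1,c2,f2,c3,f3
sequence = torch.cat([pairs, txt], dim=0)
print(sequence.shape)   # (11, 512): 2*3 paired vectors + 5 text vectors
```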
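The language-dependent segmentation of claim 6 can be sketched as follows. The CJK-range check, the jieba segmenter, the marker strings and the per-character fallback are all illustrative choices; the claim only requires a segmentation model plus start and end markers for Chinese, and space-splitting for English.

```python
import re

def tokenize_question(question, start="[CLS]", end="[SEP]"):
    """Segmentation rule of claim 6: English is split on spaces; Chinese
    goes through a word-segmentation model and is wrapped in start/end
    markers. Marker names and the per-character fallback (used when no
    segmenter is installed) are illustrative assumptions."""
    if re.search(r"[\u4e00-\u9fff]", question):       # contains CJK ideographs
        try:
            import jieba                              # one common segmenter
            words = list(jieba.cut(question))
        except ImportError:
            words = list(question.replace(" ", ""))   # naive fallback
        return [start] + words + [end]
    return question.split(" ")                        # English: split on spaces

print(tokenize_question("what color is the cat"))
print(tokenize_question("猫是什么颜色"))
```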
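Finally, the joint loss of claims 7, 14 and 20. PyTorch's `cross_entropy` computes exactly the $-\sum_l Y_l \log M_l$ summands of $L_1$ and $L_2$, so the combined loss and one stochastic-gradient-descent step can be sketched on fake logits; every shape and the learning rate are placeholders.

```python
import torch
import torch.nn.functional as F

# Fake batch: K samples over C preset categories (encoder head) and
# N answer characters over a vocabulary V (decoder head). All assumed.
K, C, N, V = 4, 10, 6, 50
cls_logits = torch.randn(K, C, requires_grad=True)
gen_logits = torch.randn(K, N, V, requires_grad=True)
cls_labels = torch.randint(0, C, (K,))          # class indices, i.e. Y_i
gen_labels = torch.randint(0, V, (K, N))        # character indices, i.e. Y_ij

# L = L1 + L2: cross_entropy applies log-softmax plus negative
# log-likelihood, matching the reconstructed cross-entropy formulas.
L1 = F.cross_entropy(cls_logits, cls_labels, reduction="sum")
L2 = F.cross_entropy(gen_logits.reshape(K * N, V),
                     gen_labels.reshape(K * N), reduction="sum")
loss = L1 + L2

opt = torch.optim.SGD([cls_logits, gen_logits], lr=0.01)  # SGD minimization
opt.zero_grad()
loss.backward()
opt.step()
```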
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110980645.6 | 2021-08-25 | ||
CN202110980645.6A CN113656570B (zh) | 2021-08-25 | 2021-08-25 | Visual question answering method and apparatus, medium and device based on deep learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023024412A1 (zh) | 2023-03-02 |
Family
ID=78492810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/071428 WO2023024412A1 (zh) | Visual question answering method and apparatus, medium and device based on deep learning model | 2021-08-25 | 2022-01-11 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113656570B (zh) |
WO (1) | WO2023024412A1 (zh) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115952270A (zh) * | 2023-03-03 | 2023-04-11 | Ocean University of China | Intelligent question answering method and apparatus for a refrigerator, and storage medium |
CN116862000A (zh) * | 2023-09-01 | 2023-10-10 | Inspur Electronic Information Industry Co., Ltd. | Causal chain-of-thought generation method, apparatus and device for generative artificial intelligence |
CN116991459A (zh) * | 2023-08-18 | 2023-11-03 | Central South University | Software multi-defect information prediction method and system |
CN117033609A (zh) * | 2023-10-09 | 2023-11-10 | Tencent Technology (Shenzhen) Co., Ltd. | Text-based visual question answering method and apparatus, computer device and storage medium |
CN117273151A (zh) * | 2023-11-21 | 2023-12-22 | Hangzhou Hikvision Digital Technology Co., Ltd. | Scientific instrument usage analysis method, apparatus and system based on a large language model |
CN117726990A (zh) * | 2023-12-27 | 2024-03-19 | Zhejiang Hengyi Petrochemical Co., Ltd. | Detection method and apparatus for a spinning workshop, electronic device and storage medium |
CN117972044A (zh) * | 2023-12-29 | 2024-05-03 | Institute of Automation, Chinese Academy of Sciences | Knowledge-enhanced visual question answering method and platform |
CN118467709A (zh) * | 2024-07-11 | 2024-08-09 | Inspur Electronic Information Industry Co., Ltd. | Evaluation method, device, medium and computer program product for visual question answering tasks |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113656570B (zh) * | 2021-08-25 | 2024-05-10 | Ping An Technology (Shenzhen) Co., Ltd. | Visual question answering method and apparatus, medium and device based on deep learning model |
CN113672716A (zh) * | 2021-08-25 | 2021-11-19 | Sun Yat-sen University, Shenzhen | Geometry problem solving method and model based on deep learning and multimodal numerical reasoning |
CN114416914B (zh) * | 2022-03-30 | 2022-07-08 | China Construction E-Commerce Co., Ltd. | Processing method based on picture question answering |
CN114707017B (zh) * | 2022-04-20 | 2023-05-23 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Visual question answering method and apparatus, electronic device and storage medium |
CN114913341A (zh) * | 2022-05-26 | 2022-08-16 | Huazhong University of Science and Technology | Visual question answering method based on declarative-sentence prompt fine-tuning |
CN114972944B (zh) * | 2022-06-16 | 2023-10-27 | China Telecom Co., Ltd. | Training method and apparatus for a visual question answering model, question answering method, medium and device |
CN115510193B (zh) * | 2022-10-10 | 2024-04-16 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Query result vectorization method, query result determination method and related apparatus |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902166A (zh) * | 2019-03-12 | 2019-06-18 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Visual question answering model, electronic device and storage medium |
WO2019148315A1 (en) * | 2018-01-30 | 2019-08-08 | Intel Corporation | Visual question answering using visual knowledge bases |
CN110377710A (zh) * | 2019-06-17 | 2019-10-25 | Hangzhou Dianzi University | Visual question answering fusion enhancement method based on multimodal fusion |
CN110516059A (zh) * | 2019-08-30 | 2019-11-29 | Tencent Technology (Shenzhen) Co., Ltd. | Machine-learning-based question answering method, and question answering model training method and apparatus |
CN113010656A (zh) * | 2021-03-18 | 2021-06-22 | Guangdong University of Technology | Visual question answering method based on multimodal fusion and structural control |
CN113012822A (zh) * | 2021-03-23 | 2021-06-22 | Tongji University | Medical question answering system based on generative dialogue technology |
US20210216862A1 (en) * | 2020-01-15 | 2021-07-15 | Beijing Jingdong Shangke Information Technology Co., Ltd. | System and method for semantic analysis of multimedia data using attention-based fusion network |
CN113282721A (zh) * | 2021-04-28 | 2021-08-20 | Nanjing University | Visual question answering method based on network architecture search |
CN113656570A (zh) * | 2021-08-25 | 2021-11-16 | Ping An Technology (Shenzhen) Co., Ltd. | Visual question answering method and apparatus, medium and device based on deep learning model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110019729B (zh) * | 2017-12-25 | 2024-03-15 | Shanghai Zhizhen Intelligent Network Technology Co., Ltd. | Intelligent question answering method, storage medium and terminal |
CN109800294B (zh) * | 2019-01-08 | 2020-10-13 | Institute of Automation, Chinese Academy of Sciences | Autonomously evolving intelligent dialogue method, system and apparatus based on gaming in a physical environment |
CN110163299B (zh) * | 2019-05-31 | 2022-09-06 | Hefei University of Technology | Visual question answering method based on a bottom-up attention mechanism and a memory network |
CN110348462B (zh) * | 2019-07-09 | 2022-03-04 | Beijing Kingsoft Digital Entertainment Co., Ltd. | Image feature determination and visual question answering method, apparatus, device and medium |
CN112364150A (zh) * | 2021-01-12 | 2021-02-12 | Nanjing Yunchuang Big Data Technology Co., Ltd. | Intelligent question answering method and system combining retrieval and generation |
- 2021-08-25: CN application CN202110980645.6A, patent CN113656570B (zh), status: Active
- 2022-01-11: WO application PCT/CN2022/071428, publication WO2023024412A1 (zh), status: Application Filing
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115952270A (zh) * | 2023-03-03 | 2023-04-11 | Ocean University of China | Intelligent question answering method and apparatus for a refrigerator, and storage medium |
CN116991459A (zh) * | 2023-08-18 | 2023-11-03 | Central South University | Software multi-defect information prediction method and system |
CN116991459B (zh) * | 2023-08-18 | 2024-04-26 | Central South University | Software multi-defect information prediction method and system |
CN116862000B (zh) * | 2023-09-01 | 2024-01-23 | Inspur Electronic Information Industry Co., Ltd. | Causal chain-of-thought generation method, apparatus and device for generative artificial intelligence |
CN116862000A (zh) * | 2023-09-01 | 2023-10-10 | Inspur Electronic Information Industry Co., Ltd. | Causal chain-of-thought generation method, apparatus and device for generative artificial intelligence |
CN117033609A (zh) * | 2023-10-09 | 2023-11-10 | Tencent Technology (Shenzhen) Co., Ltd. | Text-based visual question answering method and apparatus, computer device and storage medium |
CN117033609B (zh) * | 2023-10-09 | 2024-02-02 | Tencent Technology (Shenzhen) Co., Ltd. | Text-based visual question answering method and apparatus, computer device and storage medium |
CN117273151B (zh) * | 2023-11-21 | 2024-03-15 | Hangzhou Hikvision Digital Technology Co., Ltd. | Scientific instrument usage analysis method, apparatus and system based on a large language model |
CN117273151A (zh) * | 2023-11-21 | 2023-12-22 | Hangzhou Hikvision Digital Technology Co., Ltd. | Scientific instrument usage analysis method, apparatus and system based on a large language model |
CN117726990A (zh) * | 2023-12-27 | 2024-03-19 | Zhejiang Hengyi Petrochemical Co., Ltd. | Detection method and apparatus for a spinning workshop, electronic device and storage medium |
CN117726990B (zh) * | 2023-12-27 | 2024-05-03 | Zhejiang Hengyi Petrochemical Co., Ltd. | Detection method and apparatus for a spinning workshop, electronic device and storage medium |
CN117972044A (zh) * | 2023-12-29 | 2024-05-03 | Institute of Automation, Chinese Academy of Sciences | Knowledge-enhanced visual question answering method and platform |
CN118467709A (zh) * | 2024-07-11 | 2024-08-09 | Inspur Electronic Information Industry Co., Ltd. | Evaluation method, device, medium and computer program product for visual question answering tasks |
Also Published As
Publication number | Publication date |
---|---|
CN113656570A (zh) | 2021-11-16 |
CN113656570B (zh) | 2024-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023024412A1 (zh) | | Visual question answering method and apparatus, medium and device based on deep learning model |
CN110609891B (zh) | | Visual dialogue generation method based on a context-aware graph neural network |
CN110334354B (zh) | | Chinese relation extraction method |
CN108984526B (zh) | | Document topic vector extraction method based on deep learning |
CN111985239A (zh) | | Entity recognition method and apparatus, electronic device and storage medium |
CN109214006B (zh) | | Natural language inference method using image-enhanced hierarchical semantic representations |
CN114090780B (zh) | | Fast image classification method based on prompt learning |
CN110555084B (zh) | | Distantly supervised relation classification method based on PCNN and multi-layer attention |
CN111274800A (zh) | | Inference-type reading comprehension method based on a relational graph convolutional network |
CN112069811A (zh) | | Electronic text event extraction method enhanced by multi-task interaction |
CN114092707A (zh) | | Image-text visual question answering method, system and storage medium |
CN111339281A (zh) | | Answer selection method for multiple-choice reading comprehension questions with multi-view fusion |
CN115221846A (zh) | | Data processing method and related device |
CN114841151B (zh) | | Joint extraction method for medical-text entity relations based on a decomposition-recombination strategy |
CN114648016A (zh) | | Event argument extraction method based on event element interaction and label semantic enhancement |
CN113742733A (zh) | | Method and apparatus for extracting vulnerability-event trigger words and identifying vulnerability types in reading comprehension |
US20240037335A1 (en) | | Methods, systems, and media for bi-modal generation of natural languages and neural architectures |
CN118312600B (zh) | | Intelligent customer-service question answering method based on a knowledge graph and a large language model |
CN116049387A (zh) | | Short text classification method, apparatus and medium based on graph convolution |
CN115510230A (zh) | | Mongolian sentiment analysis method based on multi-dimensional feature fusion and a comparison-enhanced learning mechanism |
CN114780723A (zh) | | Portrait generation method, system and medium based on guide-network text classification |
CN108875024B (zh) | | Text classification method and system, readable storage medium and electronic device |
CN111259673A (zh) | | Legal judgment prediction method and system based on feedback-sequence multi-task learning |
CN115936073A (zh) | | Language-guided convolutional neural network and visual question answering method |
CN116662924A (zh) | | Aspect-level multimodal sentiment analysis method based on dual channels and an attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22859773; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22859773; Country of ref document: EP; Kind code of ref document: A1 |