CN111738016B - Multi-intention recognition method and related equipment - Google Patents
- Publication number: CN111738016B
- Application number: CN202010600015.7A
- Authority
- CN
- China
- Prior art keywords: sentence; intent; identified; sub; intention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F40/30 — Semantic analysis (G — Physics; G06 — Computing; G06F — Electric digital data processing; G06F40 — Handling natural language data)
- G06F16/35 — Clustering; Classification (G06F16 — Information retrieval; G06F16/30 — of unstructured textual data)
- G06F40/211 — Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars (G06F40/20 — Natural language analysis; G06F40/205 — Parsing)
- G06F40/279 — Recognition of textual entities
- G06F40/289 — Phrasal analysis, e.g. finite state techniques or chunking
- G06N3/045 — Combinations of networks (G06N — Computing arrangements based on specific computational models; G06N3 — based on biological models; G06N3/02 — Neural networks; G06N3/04 — Architecture, e.g. interconnection topology)
- G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D — Climate change mitigation technologies in ICT)
Landscapes: Engineering & Computer Science; Theoretical Computer Science; Physics & Mathematics; General Engineering & Computer Science; General Physics & Mathematics; Health & Medical Sciences; Artificial Intelligence; Computational Linguistics; General Health & Medical Sciences; Audiology, Speech & Language Pathology; Data Mining & Analysis; Biomedical Technology; Computing Systems; Molecular Biology; Evolutionary Computation; Mathematical Physics; Software Systems; Biophysics; Life Sciences & Earth Sciences; Databases & Information Systems; Machine Translation
Abstract
The invention relates to artificial intelligence and provides a multi-intent recognition method and related equipment. The multi-intent recognition method generates sentence samples based on single-intent sentences; trains a sentence-breaking model with those sentence samples; acquires a sentence to be recognized; uses the sentence-breaking model to label a plurality of sub-sentences of the sentence to be recognized so that each sub-sentence corresponds to a single intent; and recognizes the intent of each sub-sentence respectively with a trained intent recognition model. The invention improves the accuracy and efficiency of multi-intent recognition.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multi-intention identification method, a multi-intention identification device, computer equipment and a computer readable storage medium.
Background
Intent recognition is an important component of natural language understanding: it reflects what the client actually wants, and in an intelligent conversational robot it helps the robot understand the client's ideas before making a decision. The accuracy of intent recognition directly influences the robot's next decision, so improving it improves the interaction experience; the efficiency of intent recognition likewise affects how quickly the robot can react correctly.
However, in actual interactions a client's sentence often contains multiple intents, so the robot should recognize all of them before making its next decision. How to accurately and efficiently identify multiple intents from a sentence to be recognized is therefore the problem to be solved.
Disclosure of Invention
In view of the foregoing, there is a need for a multi-intent recognition method, apparatus, computer device, and computer-readable storage medium that can recognize the multiple intents of a sentence to be recognized while improving the accuracy and efficiency of multi-intent recognition.
A first aspect of the present application provides a multi-intent recognition method including:
generating sentence samples based on the single-intent sentences;
training a sentence-breaking model by using the sentence sample;
acquiring a statement to be identified;
labeling a plurality of sub-sentences to be identified of the sentence to be identified by using the sentence-breaking model, so that each sub-sentence to be identified corresponds to a single intention;
and respectively identifying the intentions of the plurality of sub-sentences to be identified by using the trained intent identification model.
In another possible implementation manner, the generating the sentence sample based on the single intent sentence includes:
obtaining a plurality of single-intent sentences from a single-intent sentence set;
combining the plurality of single intent statements into an intermediate statement;
acquiring a first vector of each word in the intermediate sentence from a preset word encoding table;
generating a second vector for each word in the intermediate sentence according to the part of speech;
acquiring a third vector of each word in the intermediate sentence in a preset knowledge graph;
splicing a first vector, a second vector and a third vector of each word in the intermediate sentence to obtain a word vector of each word in the intermediate sentence;
and combining the word vectors of each word in the intermediate sentence according to the word sequence to obtain the sentence sample, wherein each word vector in the sentence sample corresponds to one sentence breaking label.
In another possible implementation, before the obtaining the plurality of single intent statements from the single intent statement set, the method further includes:
converting the single-intention voice information into text information;
acquiring each verb and each noun in the text information;
querying near-synonyms of each verb and near-synonyms of each noun;
replacing each verb with its near-synonyms, and replacing each noun with its near-synonyms, to obtain a plurality of target sentences;
adding the target sentences into the single-intention sentence set.
In another possible implementation manner, the sentence-breaking model includes a model based on BERT, a bidirectional long short-term memory (BiLSTM) network, and a conditional random field, and the training of the sentence-breaking model with the sentence sample includes:
pre-training the BERT to obtain a pre-trained BERT model;
calculating a first output according to the sentence sample by using the pre-trained BERT model;
calculating a second output from the first output using the two-way long and short term memory network;
calculating a third output from the second output using a conditional random field;
and optimizing parameters of the two-way long-short-term memory network and the conditional random field according to the difference value between the third output and the label of the sentence sample.
In another possible implementation, the intent recognition model may include:
a support vector machine or a deep neural network, used for identifying the intention of each sub-sentence to be identified according to the vector representation of each sub-sentence to be identified.
In another possible implementation manner, the multi-intention recognition method further includes:
when the intention of each sub-sentence to be identified is identified, acquiring an intention vector of each sub-sentence to be identified;
using the trained domain recognition model to take the intention vector of the sub-sentence to be recognized as input, and outputting the domain vector of the sub-sentence to be recognized;
splicing the domain vector onto the word vector of each word in the sub-sentence to be identified, respectively, to obtain the spliced word vector of each word in the sub-sentence to be identified;
combining the spliced word vectors of each word in the sub-sentence to be recognized according to the word sequence to obtain an intermediate vector representation of the sub-sentence to be recognized;
inputting the intermediate vector representation of the sub-sentence to be identified into a trained slot position identification model to obtain the slot position of the sub-sentence to be identified;
determining whether each intention in the plurality of intents is executable according to the plurality of intents and the slots of the statement to be identified.
In another possible implementation manner, the determining whether each intention of the plurality of intents is executable according to the plurality of intents and the slot of the sentence to be identified includes:
for a given intent of the plurality of intents, obtaining an executable path set and a preset number of slots for the given intent;
determining that the given intent is not executable when the given intent does not exist in the executable path set, or the number of slots of the given intent is smaller than the preset number of slots, or a target intent contradicting the given intent exists in the plurality of intents;
when the given intention exists in the executable path set, the number of slots of the given intention is equal to the preset number of slots, and no target intention contradicting the given intention exists in the plurality of intents, determining that the given intention is executable.
A second aspect of the present application provides a multi-intention recognition apparatus including:
the generation module is used for generating statement samples based on the single-intention statements;
the training module is used for training the sentence-breaking model by using the sentence sample;
the acquisition module is used for acquiring the statement to be identified;
the labeling module is used for labeling a plurality of sub-sentences to be identified of the sentences to be identified by the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
and the recognition module is used for respectively recognizing the intentions of the plurality of sub-sentences to be recognized by using the trained intention recognition model.
A third aspect of the application provides a computer device comprising a processor for implementing the multi-intent recognition method when executing a computer program stored in a memory.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-intent recognition method.
According to the above scheme, a plurality of sub-sentences to be identified of the sentence to be identified are labeled with the sentence-breaking model, so that each sub-sentence to be identified corresponds to a single intention, and the intentions of the plurality of sub-sentences are then identified respectively with the trained intent recognition model. Labeling the sentence to be identified as a plurality of single-intent sub-sentences improves the accuracy of multi-intent recognition; recognizing the intents of the sub-sentences with the trained intent recognition model improves its efficiency. The method and the device therefore recognize the multiple intents of a sentence to be recognized while improving both the accuracy and the efficiency of multi-intent recognition.
Drawings
Fig. 1 is a flowchart of a multi-intention recognition method according to an embodiment of the present application.
Fig. 2 is a block diagram of a multi-intention recognition apparatus according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, and the described embodiments are merely some, rather than all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the multi-intent recognition method of the present invention is applied in one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook computer, a handheld computer, a cloud server, or other computing device. The computer device can perform human-machine interaction with a user through a keyboard, a mouse, a remote control, a touch pad, a voice-control device, or the like.
Example 1
Fig. 1 is a flowchart of a multi-intent recognition method according to an embodiment of the present invention. The multi-intention recognition method is applied to computer equipment and is used for recognizing a plurality of intents of a sentence to be recognized.
As shown in fig. 1, the multi-intention recognition method includes:
101, generating sentence samples based on the single-intention sentences.
In a specific embodiment, the generating the sentence sample based on the single intent sentence includes:
obtaining a plurality of single-intent sentences from a single-intent sentence set;
combining the plurality of single intent statements into an intermediate statement;
acquiring a first vector of each word in the intermediate sentence from a preset word encoding table;
generating a second vector for each word in the intermediate sentence according to the part of speech;
acquiring a third vector of each word in the intermediate sentence in a preset knowledge graph;
splicing a first vector, a second vector and a third vector of each word in the intermediate sentence to obtain a word vector of each word in the intermediate sentence;
and combining the word vectors of each word in the intermediate sentence according to the word sequence to obtain the sentence sample, wherein each word vector in the sentence sample corresponds to one sentence-breaking label.
For example, 3 single-intent sentences are obtained from the single-intent sentence set, namely 'the three-risks premium is too high', 'remove the three risks', and 'add wading insurance'. The 3 single-intent sentences are combined into one intermediate sentence: 'the three-risks premium is too high, remove the three risks, add wading insurance'. A first vector for each word in the intermediate sentence is obtained from the preset word encoding table. A second vector is generated for each word according to its part of speech (e.g., nouns and verbs correspond to different vectors). A third vector of each word in the preset knowledge graph is obtained (the third vector is an explanatory representation of the word; for example, the explanation of the word 'three' is 'number', and the vector representation of 'number' is determined as the third vector of the word 'three'). The first, second and third vectors of each word in the intermediate sentence are spliced to obtain the word vector of each word, and the word vectors are combined in word order to obtain the sentence sample, where each word vector in the sentence sample corresponds to one sentence-breaking label. For example, the sentence-breaking labels of 'the three-risks premium is too high, removed' are 'B_INTENT I_INTENT O B_INTENT I_INTENT', respectively.
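The per-word splicing of the first, second and third vectors can be sketched as follows. This is a minimal illustration: the lookup tables below are invented toy stand-ins for the patent's preset word encoding table, part-of-speech vectors, and knowledge-graph vectors, and the vector values are made up.

```python
# Toy lookups standing in for the preset word encoding table,
# part-of-speech vectors, and knowledge-graph vectors (all invented here).
char_table = {"三": [0.1, 0.2], "者": [0.3, 0.1]}
pos_table = {"noun": [1.0, 0.0], "verb": [0.0, 1.0]}
kg_table = {"三": [0.5], "者": [0.7]}

def word_vector(word, pos):
    """Splice the first (encoding), second (part of speech) and third
    (knowledge graph) vectors into one word vector."""
    return char_table[word] + pos_table[pos] + kg_table[word]

def sentence_sample(words_with_pos):
    """Combine the word vectors in word order to form the sentence sample."""
    return [word_vector(w, p) for w, p in words_with_pos]

sample = sentence_sample([("三", "noun"), ("者", "noun")])
```

In a real implementation each spliced word vector would then be paired with its sentence-breaking label (B_INTENT / I_INTENT / O) to form a training example.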
In a specific embodiment, before the obtaining the plurality of single intent statements from the single intent statement set, the method further includes:
converting the single-intention voice information into text information;
generalizing the text information to obtain a plurality of target sentences;
adding the target sentences into the single-intention sentence set.
Specifically, the generalizing the text information includes:
acquiring each verb and each noun in the text information; querying the paraphrasing of each verb and the paraphrasing of each noun; replacing each verb with its close meaning word, and replacing each noun with its close meaning word to obtain the target sentences.
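The generalization step above can be sketched as follows. The synonym dictionary here is an invented toy stand-in for whatever near-synonym resource the implementation actually queries; the words are illustrative only.

```python
# Invented toy near-synonym dictionary (a stand-in for a real resource).
SYNONYMS = {
    "buy": ["purchase", "acquire"],       # verb
    "insurance": ["coverage", "policy"],  # noun
}

def generalize(tokens):
    """Replace each verb/noun that has near-synonyms with each of its
    near-synonyms, yielding one target sentence per replacement."""
    targets = []
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            targets.append(tokens[:i] + [syn] + tokens[i + 1:])
    return targets

sentences = generalize(["buy", "car", "insurance"])
```

Each target sentence keeps the original single intent, so all of them can be added to the single-intent sentence set.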
102, training a sentence-breaking model by using the sentence sample.
In a specific embodiment, the sentence-breaking model includes:
models based on BERT, two-way long and short term memory networks, and conditional random fields; or (b)
Models based on BERT, biglu and conditional random fields.
The BERT can capture the relation between words, and the conditional random field can capture the relation between labels, so that the recognition accuracy is improved.
BERT (Bidirectional Encoder Representations from Transformers) is a model released by Google whose purpose is to pre-train deep bidirectional representations by jointly conditioning on context in all layers. Through pre-training and fine-tuning, the BERT model achieved good results on 11 NLP tasks; the Transformer it uses also makes the whole model more efficient and able to capture long-distance dependencies.
The bidirectional long short-term memory network takes the output of the BERT model as its input. At each position, the hidden state output by the forward LSTM and the hidden state output by the backward LSTM are spliced to obtain the complete hidden state sequence; after dropout is applied, the hidden state vectors are passed through a linear layer for dimensionality reduction, giving the sentence sample's vector representation.
The sentence sample's vector representation is then input to a conditional random field to obtain the sequence labeling result. The conditional random field performs sentence-level sequence labeling: the CRF layer uses label transition probabilities to make sentence-level label predictions, so the labeling process does not classify each word independently.
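The sentence-level label prediction that the CRF layer performs can be illustrated with a bare-bones Viterbi decode. This is a sketch with invented emission and transition scores, not the trained model; it only shows how transition scores couple adjacent labels so that tags are not chosen independently.

```python
def viterbi(emissions, transitions, tags):
    """Decode the best tag sequence given per-position emission scores
    and tag-to-tag transition scores, as a CRF layer's prediction does."""
    # best maps each tag to (best score so far, best path so far).
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        nxt = {}
        for cur in tags:
            score, path = max(
                (best[prev][0] + transitions[(prev, cur)] + em[cur],
                 best[prev][1] + [cur])
                for prev in tags)
            nxt[cur] = (score, path)
        best = nxt
    return max(best.values())[1]

# Invented toy scores over the two sentence-breaking tags.
TAGS = ["B_INTENT", "I_INTENT"]
TRANS = {("B_INTENT", "B_INTENT"): -1.0, ("B_INTENT", "I_INTENT"): 1.0,
         ("I_INTENT", "B_INTENT"): 0.0, ("I_INTENT", "I_INTENT"): 0.5}
EMITS = [{"B_INTENT": 2.0, "I_INTENT": 0.0},
         {"B_INTENT": 0.5, "I_INTENT": 0.4}]
best_tags = viterbi(EMITS, TRANS, TAGS)
```

Here the transition score rewards I_INTENT following B_INTENT, so the decoder prefers a coherent B-then-I labeling even though I_INTENT's own emission score at the second position is slightly lower.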
When the sentence-breaking model comprises a BERT, two-way long-short term memory network and conditional random field based model, the training the sentence-breaking model with the sentence sample comprises:
pre-training the BERT to obtain a pre-trained BERT model;
calculating a first output according to the sentence sample by using the pre-trained BERT model;
calculating a second output from the first output using the two-way long and short term memory network;
calculating a third output from the second output using a conditional random field;
and optimizing parameters of the two-way long-short-term memory network and the conditional random field according to the difference value between the third output and the label of the sentence sample.
103, acquiring a statement to be recognized.
When a user transacts business or requests help through voice, the voice information recorded by the user can be acquired and recognized as the sentence to be recognized through speech recognition technology.
When a user has a question about a descriptive paragraph, the user can photograph or screenshot the paragraph to obtain a picture; after the picture is acquired, the sentence to be identified is recognized from the picture through optical character recognition.
A sentence to be recognized that is directly input by the user can also be received.
104, labeling a plurality of sub-sentences to be identified of the sentence to be identified by using the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention.
For example, the sentence to be identified is 'wading insurance is unnecessary, cancel the wading insurance, add car-damage insurance'. The sentence is labeled with the sentence-breaking model to obtain its tag sequence, in which each sub-sentence begins with a B_INTENT tag and continues with I_INTENT tags. Three sub-sentences to be identified are thus obtained: 'wading insurance is unnecessary', 'cancel the wading insurance', and 'add car-damage insurance'. Each sub-sentence to be identified corresponds to one intent.
The sentence-breaking model can find the starting position (labeled B_INTENT) and ending position (the last I_INTENT after each B_INTENT) of each sub-sentence to be identified. The multi-intent sentence is thus broken into single-intent pieces.
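Parsing the B_INTENT/I_INTENT tag sequence back into sub-sentences can be sketched as follows (a minimal sketch; any tag other than B_INTENT/I_INTENT, such as O, is treated as falling outside every sub-sentence):

```python
def split_by_tags(chars, tags):
    """Cut a multi-intent sentence into single-intent sub-sentences:
    B_INTENT opens a new sub-sentence, I_INTENT extends the current one,
    and any other tag (e.g. O) closes it."""
    subs, current = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "B_INTENT":
            if current:
                subs.append(current)
            current = ch
        elif tag == "I_INTENT" and current:
            current += ch
        else:
            if current:
                subs.append(current)
            current = ""
    if current:
        subs.append(current)
    return subs

pieces = split_by_tags(
    list("abc,de"),
    ["B_INTENT", "I_INTENT", "I_INTENT", "O", "B_INTENT", "I_INTENT"])
```

Each returned piece is then passed individually to the intent recognition model.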
And 105, respectively identifying the intentions of the plurality of sub-sentences to be identified by using the trained intent identification model.
For example, the trained intent recognition model is used to respectively recognize the intents of 'wading insurance is unnecessary', 'cancel the wading insurance' and 'add car-damage insurance', namely 'negatively evaluate wading insurance', 'purchase wading insurance' and 'purchase car-damage insurance'.
Each intent is a preset label corresponding to an output of the intent recognition model. The preset labels corresponding to the outputs 100, 010 and 001 of the intent recognition model are 'negatively evaluate wading insurance', 'purchase wading insurance' and 'purchase car-damage insurance', respectively.
The input of the intention recognition model is a vector representation of each sub-sentence to be recognized.
In a specific embodiment, the intent recognition model may include:
support vector machines or deep neural networks.
The intent recognition model is a classification model. It identifies the intent of each sub-sentence to be recognized according to the sub-sentence's vector representation, which contains the sub-sentence's semantic (i.e., intent) information; the model extracts the intent features of the sub-sentence to classify its intent.
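The mapping from the classifier's output to a preset intent label (cf. the 100/010/001 example above) can be sketched as follows. The scores are invented, and the label strings are paraphrased from the example in the text; a real model would produce the scores from the sub-sentence's vector representation.

```python
# Preset labels paraphrased from the example above (illustrative only).
INTENT_LABELS = ["negatively evaluate wading insurance",
                 "purchase wading insurance",
                 "purchase car-damage insurance"]

def classify(scores):
    """A stand-in for the intent recognition model's final step: pick the
    highest-scoring class and return its one-hot code and preset label."""
    idx = max(range(len(scores)), key=lambda i: scores[i])
    one_hot = "".join("1" if i == idx else "0" for i in range(len(scores)))
    return one_hot, INTENT_LABELS[idx]

code, label = classify([0.1, 0.7, 0.2])
```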
The multi-intent recognition method of the first embodiment labels a plurality of sub-sentences to be identified of the sentence to be identified with the sentence-breaking model, so that each sub-sentence corresponds to a single intention, and then identifies the intentions of the sub-sentences respectively with the trained intent recognition model. Labeling the sentence to be identified as a plurality of single-intent sub-sentences improves the accuracy of multi-intent recognition; recognizing the sub-sentences' intents with the trained intent recognition model improves its efficiency. The first embodiment thus improves both the accuracy and the efficiency of multi-intent recognition.
In another embodiment, the multi-intent recognition method further includes:
when the intention of each sub-sentence to be identified is identified, acquiring an intention vector of each sub-sentence to be identified;
using the trained field recognition model to take the intention vector of the sub-sentence to be recognized as input, and outputting the field vector of the sub-sentence to be recognized;
respectively splicing the field vectors by using the word vector of each word in the sub-sentence to be identified to obtain the spliced word vector of each word in the sub-sentence to be identified;
combining the spliced word vectors of each word in the sub-sentence to be recognized according to the word sequence to obtain an intermediate vector representation of the sub-sentence to be recognized;
Inputting the intermediate vector representation of the sub-sentence to be identified into a trained slot position identification model to obtain the slot position of the sub-sentence to be identified;
determining whether each intention in the plurality of intents is executable according to the plurality of intents and the slots of the statement to be identified.
The domain recognition model can recognize the domain to which each sub-sentence to be recognized belongs and output the domain vector of the sub-sentence. For example, for the sub-sentence 'wading insurance is unnecessary', the domain recognition model identifies that its domain is 'wading insurance', with feature vector '0002'. For another example, for the sub-sentence "play Xiao Sun's swimming race", the model identifies that its domain is 'sports', with feature vector '00006'.
The slot recognition model is a sequence labeling model. A slot is an entity word with a specific attribute in the sub-sentence to be identified. For example, for the sub-sentence "play Xiao Sun's swimming race", the slots are 'swimmer = Xiao Sun' and 'race type = swimming race'. The corresponding slot vector of "play Xiao Sun's swimming race" is 000000100100000010010010010, where the two 100s correspond to 'Xiao Sun' and the four 010s correspond to 'swimming race'. When the sentence to be recognized is identified as belonging to the sports domain, sequence labeling only needs to consider slots in the sports domain (the input of the slot recognition model includes the domain vector), which narrows the candidate range of slots and improves recognition accuracy and efficiency.
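The splicing of the domain vector onto each word vector, followed by recombining in word order, can be sketched as follows (toy vectors; in the real pipeline the word vectors come from the sub-sentence and the domain vector from the domain recognition model):

```python
def splice_domain(word_vectors, domain_vector):
    """Append the sub-sentence's domain vector to every word vector, then
    keep word order to form the intermediate vector representation that
    is fed to the slot recognition model."""
    return [wv + domain_vector for wv in word_vectors]

# Two toy word vectors and a toy domain vector (all invented).
intermediate = splice_domain([[0.1, 0.2], [0.3, 0.4]], [1.0, 0.0])
```

Because every position carries the domain vector, the slot recognition model can restrict its sequence labeling to slots of that domain.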
In another embodiment, the determining whether each intent of the plurality of intents is executable according to the plurality of intents and the slot of the statement to be identified comprises:
for a given intent of the plurality of intents, obtaining an executable path set and a preset number of slots for the given intent;
determining that the given intent is not executable when the given intent does not exist in the executable path set, or the number of slots of the given intent is smaller than the preset number of slots, or a target intent contradicting the given intent exists in the plurality of intents;
when the given intention exists in the executable path set, the number of slots of the given intention is equal to the preset number of slots, and no target intention contradicting the given intention exists in the plurality of intents, determining that the given intention is executable.
For example, the sub-sentence to be identified is 'play Xiao Sun', the intent is to play a video, and the preset slot number is 2. The intent's slot is 'swimmer = Xiao Sun', so its slot number is 1; since 1 is smaller than the preset slot number 2, the intent is not executable.
A selected intent is optionally chosen from the plurality of intents; when the verb set of the selected intent is semantically opposite to the verb set of the given intent, the selected intent is determined to be the target intent.
For example, a selected intent is 'play video' and the given intent is 'stop playing video'; the verb set in the selected intent is semantically opposite to the verb set in the given intent, so the selected intent is determined to be the target intent.
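The three executability conditions above can be collected into one predicate, sketched below. The intent names, slot names, and the contradiction test via an opposite-verb table are illustrative assumptions, not the patent's actual data.

```python
# Invented opposite-verb table standing in for a real semantic resource.
OPPOSITE_VERBS = {("play", "stop")}

def contradicts(a, b):
    """Two intents contradict when their leading verbs are semantically
    opposite (a toy criterion for this sketch)."""
    va, vb = a.split()[0], b.split()[0]
    return (va, vb) in OPPOSITE_VERBS or (vb, va) in OPPOSITE_VERBS

def executable(given, intents, slots, executable_paths, preset_slot_count):
    """A given intent is executable only if it is in the executable path
    set, its slot count reaches the preset number, and no other intent
    in the sentence contradicts it."""
    if given not in executable_paths:
        return False
    if len(slots) < preset_slot_count:
        return False
    if any(contradicts(given, other) for other in intents if other != given):
        return False
    return True

ok = executable("play video", ["play video"],
                {"swimmer": "Xiao Sun", "race type": "swimming race"},
                {"play video"}, 2)
```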
Example two
Fig. 2 is a block diagram of a multi-intent recognition device according to a second embodiment of the present invention. The multi-intent recognition device 20 is applied to a computer apparatus and is used for recognizing the multiple intents of a sentence to be recognized.
As shown in fig. 2, the multi-intent recognition device 20 may include a generation module 201, a training module 202, an acquisition module 203, a labeling module 204, and a recognition module 205.
A generating module 201, configured to generate a sentence sample based on the single-intent sentence.
In a specific embodiment, the generating the sentence sample based on the single intent sentence includes:
obtaining a plurality of single-intent sentences from a single-intent sentence set;
combining the plurality of single intent statements into an intermediate statement;
acquiring a first vector of each word in the intermediate sentence from a preset word encoding table;
generating a second vector for each word in the intermediate sentence according to the part of speech;
acquiring a third vector of each word in the intermediate sentence in a preset knowledge graph;
Splicing a first vector, a second vector and a third vector of each word in the intermediate sentence to obtain a word vector of each word in the intermediate sentence;
and combining the word vectors of each word in the intermediate sentence according to the word sequence to obtain the sentence sample, wherein each word vector in the sentence sample corresponds to one sentence breaking label.
For example, 3 single-intent sentences are obtained from the single-intent sentence set: "the third-party liability premium is too high", "remove the third-party liability insurance", and "add wading insurance". The 3 single-intent sentences are combined into one intermediate sentence: "the third-party liability premium is too high, remove the third-party liability insurance, add wading insurance". A first vector of each word in the intermediate sentence is acquired from the preset word encoding table. A second vector of each word is generated according to its part of speech (e.g., nouns and verbs correspond to different vectors). A third vector of each word is acquired from the preset knowledge graph (the third vector is an explanatory expression of the word; for example, the explanatory expression of the word "three" is "number", and the vector representation of "number" is determined as the third vector of the word "three"). The first vector, second vector, and third vector of each word in the intermediate sentence are spliced to obtain the word vector of each word. The word vectors of the words in the intermediate sentence are combined in word order to obtain the sentence sample, where each word vector in the sentence sample corresponds to one sentence-breaking label, such as the labels "B_INTENT I_INTENT O B_INTENT I_INTENT" for the fragment "the third-party liability premium is too high, remove it".
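The sample-construction steps above can be sketched as follows. This is an illustrative sketch only: the lookup tables and vector dimensions are toy assumptions, not the patent's actual encodings.

```python
import numpy as np

# Toy lookup tables (illustrative assumptions):
CHAR_TABLE = {"a": [0.1, 0.2], "b": [0.3, 0.4]}       # first vector: char encoding
POS_TABLE = {"noun": [1.0, 0.0], "verb": [0.0, 1.0]}  # second vector: part of speech
KG_TABLE = {"a": [0.5], "b": [0.6]}                   # third vector: knowledge-graph gloss

def build_sample(chars, pos_tags):
    """Splice the three vectors per character, then stack the rows in word order."""
    rows = []
    for ch, pos in zip(chars, pos_tags):
        rows.append(np.concatenate([CHAR_TABLE[ch], POS_TABLE[pos], KG_TABLE[ch]]))
    return np.stack(rows)  # shape: (sentence length, 2 + 2 + 1)

sample = build_sample(["a", "b"], ["noun", "verb"])
```

Each row of `sample` is the spliced word vector of one character; the stacked matrix is the sentence sample to which the sentence-breaking labels are aligned.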
In a specific embodiment, before the acquiring the plurality of single-intent sentences from the single-intent sentence set, the multi-intent recognition device further includes a joining module for converting single-intent speech information into text information;
generalizing the text information to obtain a plurality of target sentences;
adding the target sentences into the single-intention sentence set.
Specifically, the generalizing the text information includes:
acquiring each verb and each noun in the text information; querying the near-synonyms of each verb and each noun; and replacing each verb and each noun with its near-synonym to obtain the plurality of target sentences.
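The generalization step can be sketched as follows, assuming a hypothetical near-synonym dictionary `SYNONYMS`; the dictionary contents are illustrative stand-ins for whatever lexicon the system actually queries.

```python
# Illustrative near-synonym dictionary (an assumption, not the patent's lexicon).
SYNONYMS = {"buy": ["purchase"], "insurance": ["coverage", "policy"]}

def generalize(text):
    """Yield target sentences with each verb/noun replaced by a near-synonym."""
    tokens = text.split()
    results = []
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            variant = tokens[:i] + [syn] + tokens[i + 1:]
            results.append(" ".join(variant))
    return results

targets = generalize("buy wading insurance")
```

Each replacement produces one target sentence, so a single spoken utterance yields several training sentences for the single-intent sentence set.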
A training module 202, configured to train a sentence-breaking model with the sentence sample.
In a specific embodiment, the sentence-breaking model includes:
models based on BERT, two-way long and short term memory networks, and conditional random fields; or (b)
Models based on BERT, biglu and conditional random fields.
BERT captures the relations between words, and the conditional random field captures the relations between labels, which improves recognition accuracy.
The BERT (Bidirectional Encoder Representations from Transformers) model, released by Google, is designed to pre-train deep bidirectional representations by jointly conditioning on context in all layers. Through pre-training and fine-tuning, the BERT model achieves good results on 11 NLP tasks, and the Transformer it uses makes the whole model more efficient and able to capture long-distance dependencies.
The bidirectional long short-term memory network takes the output of the BERT model as its input. At each position, the hidden state sequence output by the forward LSTM and the hidden state output by the backward LSTM are spliced position-wise to obtain a complete hidden state sequence; after dropout is applied, the hidden state vectors are reduced in dimension through a linear layer to obtain the sentence sample vector representation of the sentence sample.
The sentence sample vector representation is input into the conditional random field to obtain the sequence labeling result. The conditional random field model performs sentence-level sequence labeling: the CRF layer (i.e., the conditional random field) uses label transition probabilities to make sentence-level label predictions, so that the labeling process does not classify each word independently.
When the sentence-breaking model comprises a model based on BERT, a bidirectional long short-term memory network, and a conditional random field, the training of the sentence-breaking model with the sentence sample comprises:
pre-training the BERT to obtain a pre-trained BERT model;
calculating a first output from the sentence sample using the pre-trained BERT model;
calculating a second output from the first output using the bidirectional long short-term memory network;
calculating a third output from the second output using the conditional random field;
and optimizing the parameters of the bidirectional long short-term memory network and the conditional random field according to the difference between the third output and the labels of the sentence sample.
And the obtaining module 203 is configured to obtain the statement to be identified.
When a user transacts business or requests help through voice, the voice information recorded by the user can be acquired, and the voice information is recognized as the sentence to be identified through speech recognition technology.
When a user has a question about a paragraph of a description, the user can photograph or screenshot the paragraph to obtain a picture; after the picture is acquired, the sentence to be identified is recognized from the picture through optical character recognition technology.
The sentence to be identified directly input by the user can also be received.
The labeling module 204 is configured to label a plurality of sub-sentences to be identified of the sentence to be identified with the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention.
For example, the sentence to be identified is "wading insurance is unnecessary, cancel the wading insurance, add new vehicle damage insurance". The sentence to be identified is labeled with the sentence-breaking model to obtain its tag sequence, which breaks it into the sub-sentences "wading insurance is unnecessary", "cancel the wading insurance", and "add new vehicle damage insurance". The 3 sub-sentences to be identified are thereby obtained, and each sub-sentence to be identified corresponds to one intention.
The sentence-breaking model can find the starting position (labeled B_INTENT) and the ending position (the last I_INTENT following each B_INTENT) of each sub-sentence to be identified, so that the multi-intention sentence is broken according to single intentions.
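Recovering the sub-sentences from the B_INTENT/I_INTENT tag sequence can be sketched as follows; the helper name is hypothetical, but the logic mirrors the span rule just described: each B_INTENT opens a span, trailing I_INTENT tags extend it, and O tags fall outside every span.

```python
def split_by_intent(chars, tags):
    """Split a character sequence into sub-sentences using B/I/O intent tags."""
    spans, current = [], []
    for ch, tag in zip(chars, tags):
        if tag == "B_INTENT":
            if current:                      # a new span closes the previous one
                spans.append("".join(current))
            current = [ch]
        elif tag == "I_INTENT" and current:  # extend the open span
            current.append(ch)
        else:                                # "O" closes any open span
            if current:
                spans.append("".join(current))
            current = []
    if current:
        spans.append("".join(current))
    return spans

subs = split_by_intent(list("abcdef"),
                       ["B_INTENT", "I_INTENT", "O",
                        "B_INTENT", "I_INTENT", "I_INTENT"])
```

Here the six placeholder characters are broken into the spans "ab" and "def", each corresponding to one single-intent sub-sentence.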
The recognition module 205 is configured to recognize the intentions of the plurality of sub-sentences to be recognized by using the trained intent recognition model, respectively.
For example, the trained intention recognition model is used to respectively identify the intentions of "wading insurance is unnecessary", "cancel the wading insurance", and "add new vehicle damage insurance", namely "negatively evaluate wading insurance", "purchase wading insurance", and "purchase vehicle damage insurance".
Each intention is a preset label corresponding to an output of the intention recognition model. The preset labels corresponding to the outputs 100, 010, and 001 of the intention recognition model are "negatively evaluate wading insurance", "purchase wading insurance", and "purchase vehicle damage insurance", respectively.
The input of the intention recognition model is a vector representation of each sub-sentence to be recognized.
In a specific embodiment, the intent recognition model may include:
support vector machines or deep neural networks.
The intent recognition model is a classification model. The intention recognition model is used for recognizing the intention of each sub-sentence to be recognized according to the vector representation of each sub-sentence to be recognized. The vector representation of each sub-sentence to be identified contains semantic information (i.e., intention information) of the sub-sentence to be identified, and the intention recognition model extracts the intention characteristics of the sub-sentence to be identified to classify the intention of the sub-sentence to be identified.
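The classification step can be illustrated with a minimal linear scorer. The labels and the one-hot output convention follow the example above, while the weights and input values are illustrative assumptions (a real model would be a trained support vector machine or deep neural network).

```python
# Minimal sketch: a linear scorer maps a sub-sentence vector to a preset label;
# the highest-scoring class selects the one-hot output (100 / 010 / 001).
LABELS = ["negatively evaluate wading insurance",
          "purchase wading insurance",
          "purchase vehicle damage insurance"]
WEIGHTS = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # one weight row per class (toy values)

def classify(vec):
    scores = [sum(w * x for w, x in zip(row, vec)) for row in WEIGHTS]
    best = scores.index(max(scores))
    one_hot = "".join("1" if i == best else "0" for i in range(len(LABELS)))
    return one_hot, LABELS[best]

out, label = classify([2.0, 0.1])
```

The returned one-hot string corresponds to a preset label exactly as in the 100/010/001 mapping described above.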
The multi-intention recognition device 20 of the second embodiment labels the plurality of sub-sentences to be identified of the sentence to be identified with the sentence-breaking model, so that each sub-sentence to be identified corresponds to a single intention, and respectively identifies the intentions of the plurality of sub-sentences to be identified with the trained intention recognition model. Breaking the sentence to be identified into a plurality of sub-sentences to be identified enhances the accuracy of multi-intention recognition, and identifying their intentions with the trained intention recognition model improves the efficiency of multi-intention recognition. The second embodiment therefore improves both the accuracy and the efficiency of multi-intention recognition.
In another embodiment, the multi-intention recognition device further includes a determination module, configured to acquire an intention vector of each sub-sentence to be identified when identifying the intention of each sub-sentence to be identified;
input the intention vector of the sub-sentence to be identified into the trained domain recognition model, and output the domain vector of the sub-sentence to be identified;
splice the domain vector onto the word vector of each word in the sub-sentence to be identified to obtain the spliced word vector of each word in the sub-sentence to be identified;
combine the spliced word vectors of the words in the sub-sentence to be identified in word order to obtain an intermediate vector representation of the sub-sentence to be identified;
input the intermediate vector representation of the sub-sentence to be identified into the trained slot recognition model to obtain the slots of the sub-sentence to be identified;
and determine, according to the plurality of intentions and the slots of the sentence to be identified, whether each intention of the plurality of intentions is executable.
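The splicing and combining steps can be sketched as follows; the vector dimensions are illustrative assumptions. The domain vector is appended to every word vector, and the rows are stacked in word order to form the intermediate representation fed to the slot recognition model.

```python
import numpy as np

def splice_domain(word_vectors, domain_vector):
    """Append the sub-sentence's domain vector to each word vector, then stack."""
    rows = [np.concatenate([wv, domain_vector]) for wv in word_vectors]
    return np.stack(rows)  # shape: (num words, word dim + domain dim)

inter = splice_domain([np.array([0.1, 0.2]), np.array([0.3, 0.4])],
                      np.array([0.0, 0.0, 1.0]))
```

Every row of `inter` thus carries both the word's own features and the sub-sentence's domain, which is what lets the slot recognition model restrict its labeling to that domain.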
The domain recognition model recognizes the domain to which each sub-sentence to be identified belongs and outputs the domain vector of the sub-sentence. For example, for the sub-sentence to be identified "wading insurance is unnecessary", the domain recognition model identifies the domain as "wading insurance", with feature vector "0002". For another example, for the sub-sentence to be identified "play Xiao Sun's swimming race", the domain recognition model identifies the domain as "motion", with feature vector "00006".
The slot recognition model is a sequence labeling model. A slot is an entity word with a specific attribute in the sub-sentence to be identified. For example, for the sub-sentence to be identified "play Xiao Sun's swimming race", the slots are "swimmer = Xiao Sun" and "race type = swimming race". The corresponding slot vector of "play Xiao Sun's swimming race" is "000 000 100 100 000 010 010 010 010", where the two 100s correspond to "Xiao Sun" and the four 010s correspond to "swimming race". When the sentence to be identified is recognized as belonging to the motion domain, sequence labeling only needs to consider the slots of the motion domain (the input of the slot recognition model includes the domain vector), which narrows the candidate range of slots and improves recognition accuracy and efficiency.
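Restricting the slot candidates by domain can be sketched as follows, assuming a hypothetical domain-to-slot-label inventory; the inventory contents are illustrative, not part of the disclosure.

```python
# Hypothetical slot-label inventory per domain (illustrative assumption).
DOMAIN_SLOTS = {
    "motion": ["swimmer", "race_type"],
    "insurance": ["insurance_type", "action"],
}

def candidate_slots(domain):
    """Return the slot labels the sequence labeler may emit for this domain."""
    return DOMAIN_SLOTS.get(domain, [])

slots = candidate_slots("motion")
```

When the domain model outputs "motion", only the motion-domain slot labels remain as labeling candidates, which is the narrowing effect described above.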
In another embodiment, the determining whether each intent of the plurality of intents is executable according to the plurality of intents and the slot of the statement to be identified comprises:
for a given intent of the plurality of intents, obtaining an executable path set and a preset number of slots for the given intent;
determining that the given intent is not executable when the given intent does not exist in the executable path set, or the number of slots of the given intent is smaller than the preset number of slots, or a target intent contradicting the given intent exists in the plurality of intents;
when the given intention exists in the executable path set, the number of slots of the given intention is equal to the preset number of slots, and no target intention contradicting the given intention exists in the plurality of intents, determining that the given intention is executable.
For example, the sub-sentence to be identified is "play Xiao Sun", the intention of which is to play a video, and the preset slot number is 2. The slot of the intention is "swimmer = Xiao Sun", so the slot number is 1. Because the slot number 1 is smaller than the preset slot number 2, the intention is not executable.
A selected intention is optionally chosen from the plurality of intentions; when the verb set in the selected intention is semantically opposite to the verb set in the given intention, the selected intention is determined to be the target intention.
For example, the selected intention is "play video" and the given intention is "stop playing video". The verb set in the selected intention is semantically opposite to the verb set in the given intention, so the selected intention is determined to be the target intention.
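The executability decision described above can be sketched as follows; the function signature is an assumption, but the three conditions mirror the rule just stated: an intention is executable only when it lies in the executable path set, its slot count reaches the preset number, and no contradicting target intention is present.

```python
def is_executable(intent, executable_paths, slot_count, preset_slot_count,
                  has_target_intent):
    """Apply the three executability conditions from the method."""
    if intent not in executable_paths:        # not in the executable path set
        return False
    if slot_count < preset_slot_count:        # too few slots filled
        return False
    if has_target_intent:                     # a contradicting intent exists
        return False
    return True

# "play video" with only 1 of 2 required slots filled is not executable.
ok = is_executable("play video", {"play video"}, 1, 2, False)
```

This reproduces the worked example: with only the "swimmer" slot filled, the play-video intention fails the slot-count condition.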
Example III
The present embodiment provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the multi-intent recognition method embodiment described above, such as steps 101-105 shown in fig. 1:
101, generating sentence samples based on single-intention sentences;
102, training a sentence-breaking model by using the sentence sample;
103, acquiring a statement to be identified;
104, marking a plurality of sub-sentences to be identified of the sentences to be identified by using the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
and 105, respectively identifying the intentions of the plurality of sub-sentences to be identified by using the trained intent identification model.
Alternatively, the computer program, when executed by a processor, performs the functions of the modules in the above apparatus embodiments, for example, the modules 201-205 in fig. 2:
a generation module 201, configured to generate a sentence sample based on a single-intent sentence;
a training module 202, configured to train a sentence-breaking model with the sentence sample;
An obtaining module 203, configured to obtain a sentence to be identified;
the labeling module 204 is configured to label a plurality of sub-sentences to be identified of the sentence to be identified with the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
the recognition module 205 is configured to recognize the intentions of the plurality of sub-sentences to be recognized by using the trained intent recognition model, respectively.
Example IV
Fig. 3 is a schematic diagram of a computer device according to the fourth embodiment of the present invention. The computer device 30 comprises a memory 301, a processor 302, and a computer program 303, such as a multi-intention recognition program, stored in the memory 301 and executable on the processor 302. The processor 302, when executing the computer program 303, implements the steps of the multi-intention recognition method embodiments described above, such as steps 101-105 shown in fig. 1:
101, generating sentence samples based on single-intention sentences;
102, training a sentence-breaking model by using the sentence sample;
103, acquiring a statement to be identified;
104, marking a plurality of sub-sentences to be identified of the sentences to be identified by using the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
and 105, respectively identifying the intentions of the plurality of sub-sentences to be identified by using the trained intent identification model.
Alternatively, the computer program, when executed by a processor, performs the functions of the modules in the above apparatus embodiments, for example, the modules 201-205 in fig. 2:
a generation module 201, configured to generate a sentence sample based on a single-intent sentence;
a training module 202, configured to train a sentence-breaking model with the sentence sample;
an obtaining module 203, configured to obtain a sentence to be identified;
the labeling module 204 is configured to label a plurality of sub-sentences to be identified of the sentence to be identified with the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
the recognition module 205 is configured to recognize the intentions of the plurality of sub-sentences to be recognized by using the trained intent recognition model, respectively.
Illustratively, the computer program 303 may be partitioned into one or more modules, which are stored in the memory 301 and executed by the processor 302 to perform the method. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program 303 in the computer device 30. For example, the computer program 303 may be divided into a generating module 201, a training module 202, an obtaining module 203, a labeling module 204, and an identifying module 205 in fig. 2, where the specific functions of each module are referred to in embodiment two.
Those skilled in the art will appreciate that Fig. 3 is merely an example of the computer device 30 and does not constitute a limitation of the computer device 30; the computer device may include more or fewer components than shown, combine certain components, or have different components. For example, the computer device 30 may also include input and output devices, network access devices, buses, and the like.
The processor 302 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor 302 may be any conventional processor or the like, the processor 302 being the control center of the computer device 30, with various interfaces and lines connecting the various parts of the overall computer device 30.
The memory 301 may be used to store the computer program 303, and the processor 302 may implement various functions of the computer device 30 by running or executing the computer program or module stored in the memory 301 and invoking data stored in the memory 301. The memory 301 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the computer device 30, or the like. In addition, the memory 301 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), at least one disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The modules integrated in the computer device 30, if implemented in the form of software functional modules and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments through a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in hardware plus software functional modules.
The integrated modules, which are implemented in the form of software functional modules, may be stored in a computer readable storage medium. The software functional module is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute some of the steps of the multi-purpose recognition method according to the embodiments of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other modules or steps, and that the singular does not exclude a plurality. A plurality of modules or means recited in the system claims can also be implemented by means of one module or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (8)
1. A multi-intent recognition method, the multi-intent recognition method comprising:
generating sentence samples based on the single-intent sentences;
training a sentence-breaking model by using the sentence sample;
acquiring a statement to be identified;
labeling a plurality of sub-sentences to be identified of the sentences to be identified by using the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
respectively identifying the intentions of the plurality of sub-sentences to be identified by using the trained intention identification model;
the multi-intention recognition method further includes: when the intention of each sub-sentence to be identified is identified, acquiring an intention vector of each sub-sentence to be identified; using the trained field recognition model to take the intention vector of the sub-sentence to be recognized as input, and outputting the field vector of the sub-sentence to be recognized; respectively splicing the field vectors by using the word vector of each word in the sub-sentence to be identified to obtain the spliced word vector of each word in the sub-sentence to be identified; combining the spliced word vectors of each word in the sub-sentence to be recognized according to the word sequence to obtain an intermediate vector representation of the sub-sentence to be recognized; inputting the intermediate vector representation of the sub-sentence to be identified into a trained slot position identification model to obtain the slot position of the sub-sentence to be identified; determining whether each intention in the plurality of intents is executable according to the plurality of intents and the slots of the statement to be identified;
Wherein the determining whether each intent of the plurality of intents is executable according to the plurality of intents and the slot of the sentence to be identified comprises: for a given intent of the plurality of intents, obtaining an executable path set and a preset number of slots for the given intent; determining that the given intent is not executable when the given intent does not exist in the executable path set, or the number of slots of the given intent is smaller than the preset number of slots, or a target intent contradicting the given intent exists in the plurality of intents; when the given intention exists in the executable path set, the number of slots of the given intention is equal to the preset number of slots, and no target intention contradicting the given intention exists in the plurality of intents, determining that the given intention is executable.
2. The multi-intent recognition method of claim 1, wherein generating sentence samples based on single-intent sentences comprises:
obtaining a plurality of single-intent sentences from a single-intent sentence set;
combining the plurality of single intent statements into an intermediate statement;
acquiring a first vector of each word in the intermediate sentence from a preset word encoding table;
Generating a second vector for each word in the intermediate sentence according to the part of speech;
acquiring a third vector of each word in the intermediate sentence in a preset knowledge graph;
splicing a first vector, a second vector and a third vector of each word in the intermediate sentence to obtain a word vector of each word in the intermediate sentence;
and combining the word vectors of each word in the intermediate sentence according to the word sequence to obtain the sentence sample, wherein each word vector in the sentence sample corresponds to one sentence breaking label.
3. The multi-intent recognition method of claim 2, wherein prior to said obtaining a plurality of single intent statements from a set of single intent statements, the method further comprises:
converting the single-intention voice information into text information;
acquiring each verb and each noun in the text information;
inquiring the paraphrasing of each verb and the paraphrasing of each noun;
replacing each verb with a near meaning word of each verb, and replacing each noun with a near meaning word of each noun to obtain a plurality of target sentences;
adding the target sentences into the single-intention sentence set.
4. The multi-intent recognition method of claim 1, wherein the sentence-breaking model comprises a BERT, two-way long and short term memory network, and conditional random field based model, the training the sentence-breaking model with the sentence sample comprising:
Pre-training the BERT to obtain a pre-trained BERT model;
calculating a first output according to the sentence sample by using the pre-trained BERT model;
calculating a second output from the first output using the two-way long and short term memory network;
calculating a third output from the second output using a conditional random field;
and optimizing parameters of the two-way long-short-term memory network and the conditional random field according to the difference value between the third output and the label of the sentence sample.
5. The multi-intent recognition method as claimed in claim 1, wherein said intent recognition model comprises:
and the support vector machine or the deep neural network is used for identifying the intention of each sub-sentence to be identified according to the vector representation of each sub-sentence to be identified.
6. A multi-intent recognition device, characterized in that the device comprises means for implementing a multi-intent recognition method as claimed in any of the claims 1-5, the multi-intent recognition device comprising:
the generation module is used for generating statement samples based on the single-intention statements;
the training module is used for training the sentence-breaking model by using the sentence sample;
the acquisition module is used for acquiring the statement to be identified;
the labeling module is used for labeling a plurality of sub-sentences to be identified of the sentences to be identified by the sentence breaking model, so that each sub-sentence to be identified corresponds to a single intention;
And the recognition module is used for respectively recognizing the intentions of the plurality of sub-sentences to be recognized by using the trained intention recognition model.
7. A computer device, characterized in that it comprises a processor for executing a computer program stored in a memory for implementing the multi-purpose recognition method according to any one of claims 1-5.
8. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the multi-intent recognition method as claimed in any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010600015.7A CN111738016B (en) | 2020-06-28 | 2020-06-28 | Multi-intention recognition method and related equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010600015.7A CN111738016B (en) | 2020-06-28 | 2020-06-28 | Multi-intention recognition method and related equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111738016A CN111738016A (en) | 2020-10-02 |
CN111738016B true CN111738016B (en) | 2023-09-05 |
Family
ID=72651489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010600015.7A Active CN111738016B (en) | 2020-06-28 | 2020-06-28 | Multi-intention recognition method and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111738016B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112214588B (en) * | 2020-10-16 | 2024-04-02 | 深圳赛安特技术服务有限公司 | Multi-intention recognition method, device, electronic equipment and storage medium |
CN112580368B (en) * | 2020-12-25 | 2023-09-26 | 网易(杭州)网络有限公司 | Method, device, equipment and storage medium for identifying intention sequence of conversation text |
CN112818996A (en) * | 2021-01-29 | 2021-05-18 | 青岛海尔科技有限公司 | Instruction identification method and device, storage medium and electronic equipment |
CN113158692B (en) * | 2021-04-22 | 2023-09-12 | 中国平安财产保险股份有限公司 | Semantic recognition-based multi-intention processing method, system, equipment and storage medium |
CN113064997B (en) * | 2021-04-22 | 2024-05-07 | 中国平安财产保险股份有限公司 | Intention analysis method, device, equipment and medium based on BERT model |
CN113343677B (en) * | 2021-05-28 | 2023-04-07 | 中国平安人寿保险股份有限公司 | Intention identification method and device, electronic equipment and storage medium |
CN115440200B (en) * | 2021-06-02 | 2024-03-12 | 上海擎感智能科技有限公司 | Control method and control system of vehicle-mounted system |
CN113515611B (en) * | 2021-06-22 | 2022-04-26 | 镁佳(北京)科技有限公司 | Intention recognition method and recognition system for task type multi-intention conversation |
CN113850078B (en) * | 2021-09-29 | 2024-06-18 | 平安科技(深圳)有限公司 | Multi-intention recognition method, equipment and readable storage medium based on machine learning |
CN114943306A (en) * | 2022-06-24 | 2022-08-26 | 平安普惠企业管理有限公司 | Intention classification method, device, equipment and storage medium |
CN117524215A (en) * | 2023-09-26 | 2024-02-06 | 镁佳(北京)科技有限公司 | Voice intention recognition method, device, computer equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2014201250A1 (en) * | 2011-04-21 | 2014-03-27 | Accenture Global Services Limited | Analysis system for test artifact generation |
CN109597993A (en) * | 2018-11-30 | 2019-04-09 | 深圳前海微众银行股份有限公司 | Sentence analysis processing method, device, equipment and computer readable storage medium |
CN109918680A (en) * | 2019-03-28 | 2019-06-21 | 腾讯科技(上海)有限公司 | Entity recognition method, device and computer equipment |
CN110502608A (en) * | 2019-07-05 | 2019-11-26 | 平安科技(深圳)有限公司 | The interactive method and human-computer dialogue device of knowledge based map |
CN110659366A (en) * | 2019-09-24 | 2020-01-07 | Oppo广东移动通信有限公司 | Semantic analysis method and device, electronic equipment and storage medium |
CN110674314A (en) * | 2019-09-27 | 2020-01-10 | 北京百度网讯科技有限公司 | Sentence recognition method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10303768B2 (en) * | 2015-05-04 | 2019-05-28 | Sri International | Exploiting multi-modal affect and semantics to assess the persuasiveness of a video |
2020-06-28: application CN202010600015.7A filed in China (CN); granted as patent CN111738016B, status active.
Also Published As
Publication number | Publication date |
---|---|
CN111738016A (en) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111738016B (en) | Multi-intention recognition method and related equipment | |
Uc-Cetina et al. | Survey on reinforcement learning for language processing | |
CN110717339B (en) | Semantic representation model processing method and device, electronic equipment and storage medium | |
KR102577514B1 (en) | Method, apparatus for text generation, device and storage medium | |
CN112487182B (en) | Training method of text processing model, text processing method and device | |
US11093707B2 (en) | Adversarial training data augmentation data for text classifiers | |
US20210256390A1 (en) | Computationally efficient neural network architecture search | |
CN111831813B (en) | Dialog generation method, dialog generation device, electronic equipment and medium | |
WO2022188584A1 (en) | Similar sentence generation method and apparatus based on pre-trained language model | |
CN112541060B (en) | End-to-end task type dialogue learning framework and method based on confrontation training | |
CN111739520B (en) | Speech recognition model training method, speech recognition method and device | |
CA3207902C (en) | Auditing citations in a textual document | |
CN110674260B (en) | Training method and device of semantic similarity model, electronic equipment and storage medium | |
US20220414463A1 (en) | Automated troubleshooter | |
CN111144093B (en) | Intelligent text processing method and device, electronic equipment and storage medium | |
Saha et al. | Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning | |
Li et al. | Intention understanding in human–robot interaction based on visual-NLP semantics | |
CN114722832A (en) | Abstract extraction method, device, equipment and storage medium | |
CN113705207A (en) | Grammar error recognition method and device | |
Yan et al. | Empowering conversational AI is a trip to Mars: Progress and future of open domain human-computer dialogues | |
CN118132687A (en) | Sentence processing and category model training method, sentence processing and category model training device, sentence processing equipment and category model training medium | |
CN117235205A (en) | Named entity recognition method, named entity recognition device and computer readable storage medium | |
CN114357964A (en) | Subjective question scoring method, model training method, computer device, and storage medium | |
CN114298032A (en) | Text punctuation detection method, computer device and storage medium | |
KR102731307B1 (en) | Method, apparatus and recording medium storing instructions of transformation of natural language by using neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||