CN113157892B - User intention processing method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN113157892B CN113157892B CN202110567377.5A CN202110567377A CN113157892B CN 113157892 B CN113157892 B CN 113157892B CN 202110567377 A CN202110567377 A CN 202110567377A CN 113157892 B CN113157892 B CN 113157892B
- Authority
- CN
- China
- Prior art keywords
- intention
- vector
- sentence
- factor
- vectors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/316—Indexing structures
- G06F16/322—Trees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/194—Calculation of difference between files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application discloses a user intention processing method, device, computer equipment and storage medium. The method includes: acquiring a target sentence to be recognized; encoding the target sentence with a preset encoder to generate a first sentence vector of the target sentence; performing word segmentation on the target sentence according to a preset word-segmentation rule and, from the segmentation result and the first sentence vector, generating at least one candidate intention vector that characterizes the intention of the target sentence; inputting the first sentence vector and the first intention factor vectors into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors; generating an intention score for each intention vector from the second sentence vector and the intention vector; and determining at least one true intention of the target sentence from the intention vectors according to the intention scores. The intention score computed this way characterizes the user's true intention more objectively.
Description
Technical Field
The embodiments of the invention relate to the field of natural language processing, and in particular to a user intention processing method and device, computer equipment, and a storage medium.
Background
Natural language processing (NLP), on which user intention processing is built, is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods for effective communication between humans and computers in natural language, integrating linguistics, computer science, and mathematics. User intention processing is mainly applied to machine translation, public-opinion monitoring, automatic summarization, opinion extraction, text classification, question answering, text semantic comparison, speech recognition, Chinese OCR, and the like.
The inventors have found that, in the prior art, user intention processing usually determines the corresponding reply information by keyword matching. Such replies cannot capture the true intent of a natural-language sentence, so the accuracy of natural-language understanding is low.
Disclosure of Invention
The embodiments of the invention provide a user intention processing method, device, computer equipment, and storage medium that can understand the real intention of a user's sentence.
To solve the above technical problem, the embodiments of the invention adopt the following technical solution: a user intention processing method is provided, including:
Acquiring a target sentence to be identified;
Encoding the target sentence according to a preset encoder to generate a first sentence vector of the target sentence;
Performing word segmentation on the target sentence according to a preset word-segmentation rule, and generating, from the segmentation result and the first sentence vector, at least one candidate intention vector characterizing the intention of the target sentence, wherein each intention vector is spliced from a plurality of first intention factor vectors;
Inputting the first sentence vector and each of the first intention factor vectors into a preset first attention model, and generating a second sentence vector and a plurality of second intention factor vectors;
generating an intention score of the intention vector according to the second sentence vector and the intention vector;
At least one true intent of the target sentence is determined from at least one intent vector according to the intent score.
Optionally, inputting the first sentence vector and each of the first intention factor vectors into a preset first attention model, and generating the second sentence vector and the plurality of second intention factor vectors includes:
Calculating a first attention distribution between each of the first intention factor vectors and the first sentence vector;
normalizing each first attention distribution to generate a first parameter value;
Multiplying the first parameter value by a corresponding first intention factor vector to generate a corresponding second intention factor vector;
and splicing the plurality of second intention factor vectors to generate the second statement vector.
Optionally, the generating the intent score of the intent vector according to the second sentence vector and the intent vector includes:
calculating a second attention distribution between each of the intent vectors and the second sentence vector;
Normalizing each second attention distribution to generate a second parameter value;
Multiplying the second parameter value by the corresponding intention vector to generate an attention vector corresponding to each intention vector;
And splicing the attention vectors to generate a third sentence vector, and generating the sentence score of each intention vector from the third sentence vector.
Optionally, after the splicing of the attention vectors to generate the third sentence vector and the generating of the sentence score of each intention vector from the third sentence vector, the method includes:
Converting each second intention factor vector and each intention vector into a corresponding vector matrix;
multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
And adding the sentence score and the factor score to generate the intention score.
Optionally, the determining at least one real intent of the target sentence from at least one intent vector according to the intent score includes:
sorting the candidate intention vectors in descending order, with the intention scores as the sorting condition;
and determining at least one real intention of the target sentence from the at least one intention vector according to a preset screening rule.
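A minimal sketch of the sorting-and-screening step above, assuming a simple top-k cut-off as the preset screening rule (the patent leaves the rule configurable; the candidate names here are illustrative):

```python
# Sort candidate intentions by descending intention score, then keep the
# top-k as the true intentions. The top_k value is an assumed screening rule.

def screen_intents(scored_intents, top_k=2):
    ranked = sorted(scored_intents, key=lambda pair: pair[1], reverse=True)
    return [intent for intent, _score in ranked[:top_k]]

# Hypothetical candidates with their computed intention scores.
candidates = [("ask_price", 0.62), ("greeting", 0.10), ("ask_stock", 0.55)]
true_intents = screen_intents(candidates)
```

With these toy scores, the two highest-scoring candidates survive the screen.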
Optionally, after determining the true intent of the target sentence from at least one intent vector according to the intent score, the method includes:
searching a preset sample database, with the real intention as the retrieval condition, for the semantic relation tree corresponding to the real intention, wherein the semantic relation tree is a semantic-expression topology sample constructed with the user intentions in historical data as root nodes;
extracting sentence parameters from the target sentence according to the semantic relation tree;
and inputting the sentence parameters into the semantic relation tree to generate a graph query statement for the target sentence.
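As a hedged illustration, the retrieve-and-fill step might look like the sketch below. The sample database, the slot-extraction rule, and the Cypher-like query template are all illustrative assumptions, not structures defined by the patent.

```python
# Toy stand-in for the preset sample database, keyed by the recognized
# true intent. Each entry names the slots (sentence parameters) its
# semantic relation tree expects and a query template to fill.
SAMPLE_DB = {
    "ask_price": {
        "slots": ["product"],
        "template": "MATCH (p:Product {name: '{product}'}) RETURN p.price",
    }
}

def build_graph_query(intent, sentence_tokens):
    tree = SAMPLE_DB[intent]
    # Stand-in parameter extraction: take the last token as the slot value.
    params = {slot: sentence_tokens[-1] for slot in tree["slots"]}
    query = tree["template"]
    for slot, value in params.items():
        query = query.replace("{" + slot + "}", value)
    return query

q = build_graph_query("ask_price", ["how", "much", "is", "apple"])
```

A real implementation would extract parameters by walking the semantic relation tree rather than by position.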
Optionally, after the sentence parameters are input into the semantic relation tree to generate the graph query statement of the target sentence, the method includes:
searching a preset reply database for the reply information of the graph query statement;
and sending the reply information to the user terminal that sent the target sentence.
In order to solve the above technical problem, an embodiment of the present invention further provides a user intention processing device, including:
The acquisition module is used for acquiring the target statement to be identified;
the processing module is used for carrying out coding processing on the target sentence according to a preset coder to generate a first sentence vector of the target sentence;
The word segmentation module is used for carrying out word segmentation processing on the target sentence according to a preset word segmentation rule, and generating at least one intention vector to be confirmed for representing the intention of the target sentence according to a word segmentation result and a first sentence vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
the attention module is used for inputting the first sentence vectors and the first intention factor vectors into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors;
the scoring module is used for generating an intention score of the intention vector according to the second sentence vector and the intention vector;
An intent module for determining at least one true intent of the target sentence from at least one intent vector based on the intent score.
Optionally, the user intention processing device further comprises:
a first calculation sub-module for calculating a first attention distribution between each of the first intention factor vectors and the first sentence vectors;
the first processing submodule is used for carrying out normalization processing on each first attention distribution to generate a first parameter value;
The first generation sub-module is used for multiplying the first parameter value with the corresponding first intention factor vector to generate a corresponding second intention factor vector;
And the first splicing sub-module is used for splicing the plurality of second intention factor vectors to generate the second statement vectors.
Optionally, the user intention processing device further comprises:
A second calculation sub-module for calculating a second attention distribution between each of the intent vectors and the second sentence vectors;
the second processing submodule is used for carrying out normalization processing on each second attention distribution to generate a second parameter value;
the second generation submodule is used for multiplying the second parameter value with the corresponding intention vector to generate an attention vector corresponding to each intention vector;
And the second splicing sub-module is used for splicing the attention vectors to generate the third sentence vector, and generating the sentence score of each intention vector from the third sentence vector.
Optionally, the user intention processing device further comprises:
The first conversion submodule is used for converting each second intention factor vector and each intention vector into a corresponding vector matrix;
A third computing sub-module, configured to multiply each of the second intention factor vectors with a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generate a factor score of the factor vector matrix;
and the first intention submodule is used for generating the intention score by adding the statement score and the factor score.
Optionally, the user intention processing device further comprises:
the first sorting sub-module is used for sorting the candidate intention vectors in descending order, with the intention scores as the sorting condition;
and the first screening sub-module is used for determining at least one real intention of the target sentence from the at least one intention vector according to a preset screening rule.
Optionally, the user intention processing device further comprises:
the first retrieval sub-module is used for searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking the user intention in the history data as a root node;
the first extraction submodule is used for extracting statement parameters in the target statement according to the semantic relation tree;
and the third generation sub-module is used for inputting the sentence parameters into the semantic relation tree to generate the graph query statement of the target sentence.
Optionally, the user intention processing device further comprises:
The first query sub-module is used for searching a preset reply database for the reply information of the graph query statement;
and the first reply sub-module is used for sending the reply information to the user terminal sending the target sentence.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer readable instructions, where the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the user intention processing method.
To solve the above technical problem, the embodiments of the present invention further provide a storage medium storing computer readable instructions, where the computer readable instructions when executed by one or more processors cause the one or more processors to perform the steps of the above user intention processing method.
The embodiments of the invention have the following beneficial effects. The target sentence is encoded into a first sentence vector and, after analysis, split into at least one intention vector, each spliced from a plurality of first intention factor vectors. An attention model computes the degree of association between each first intention factor vector and the first sentence vector, and an updated second sentence vector and second intention factor vectors are generated from these associations. The intention score of each intention vector is then generated from the second intention factor vectors, the second sentence vector, and the intention vector, yielding the true intention of the target sentence. Because the associations between the first intention factor vectors and the first sentence vector are computed explicitly, both can be updated accordingly; the updated second intention factor vectors and second sentence vector bias the scoring toward the important information in the intention representation, so the computed intention score characterizes the user's real intention more objectively and the accuracy of user intention processing is improved.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a basic flow diagram of a user intent processing method in accordance with one embodiment of the present application;
FIG. 2 is a flowchart of updating a first intention factor vector and a first sentence vector according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a process for generating sentence scores according to one embodiment of the present application;
FIG. 4 is a flow chart of intent scoring in accordance with one embodiment of the present application;
FIG. 5 is a flowchart of a method for screening real intent according to an embodiment of the present application;
FIG. 6 is a flowchart of generating a graph query statement according to an embodiment of the application;
FIG. 7 is a schematic diagram of an example of a semantic relationship tree according to one embodiment of the present application;
FIG. 8 is a schematic diagram of the transmission flow of reply information according to an embodiment of the application;
FIG. 9 is a schematic diagram of a basic structure of a user intention processing device according to an embodiment of the present application;
FIG. 10 is a basic structural block diagram of a computer device according to an embodiment of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a "terminal" as used herein includes both devices that have only a wireless signal receiver without transmitting capability and devices with receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with a single-line or multi-line display, or without a multi-line display; a PCS (Personal Communications Service) terminal that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio-frequency receiver, pager, Internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio-frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a network-access terminal, or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart TV, a set-top box, or another such device.
Referring to fig. 1, fig. 1 is a basic flow chart of a user intention processing method in the present embodiment. As shown in fig. 1, a user intention processing method includes:
S110, acquiring a target sentence to be identified;
The target sentence in this embodiment refers to the text, words, or sentences converted from the voice information uttered by a user during human-computer interaction. For example, when the user interacts by voice, the user's voice information is converted into corresponding text, and that text is the target sentence. The target sentence can be a single user utterance, or the whole paragraph or full text stated by the user.
In some embodiments, the target sentence can also be other text information generated outside the human-computer interaction process, such as text from literary, legal, or medical works.
S120, carrying out coding processing on the target sentence according to a preset coder, and generating a first sentence vector of the target sentence;
Inputting the target sentence into a preset encoder for encoding generates a first sentence vector that represents the complete meaning of the target sentence.
The encoder in this embodiment is a two-layer convolutional neural network, which generates the first sentence vector after two convolution passes over the target sentence. The encoder is not limited to this composition; in some implementations it can be a single-layer, three-layer, four-layer, or deeper convolutional neural network.
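As a hedged illustration of this encoding step, the sketch below implements a toy two-layer convolutional encoder in plain Python: each layer averages sliding windows of token-embedding vectors, and the result is mean-pooled into one sentence vector. The window size, embedding dimension, and pooling choice are illustrative assumptions, not the patent's actual network.

```python
# Toy two-layer 1-D convolutional encoder over token embeddings.
def conv1d(seq, window=2):
    """One convolution pass: average each sliding window of vectors."""
    dim = len(seq[0])
    out = []
    for i in range(len(seq) - window + 1):
        win = seq[i:i + window]
        out.append([sum(v[d] for v in win) / window for d in range(dim)])
    return out

def encode(token_embeddings):
    """Two stacked convolution passes, then mean-pool into a sentence vector."""
    h = conv1d(conv1d(token_embeddings))
    dim = len(h[0])
    return [sum(v[d] for v in h) / len(h) for d in range(dim)]

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]  # 4 tokens, dim 2
sentence_vec = encode(tokens)  # the "first sentence vector" of the sketch
```

A real encoder would use learned filters and non-linearities; the structure (two convolution passes, one pooled vector) is the point here.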
S130, performing word segmentation processing on the target sentence according to a preset word segmentation rule, and generating at least one intention vector to be confirmed for representing the intention of the target sentence according to a word segmentation result and a first sentence vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
The target sentence is subjected to word segmentation, the purpose of which is to split the target sentence into a plurality of keywords or key phrases, each composed of at least one of a domain (subject), predication (predicate), or target (object). The subject, predicate, or object that makes up a keyword or key phrase is defined as a first intention factor of the target sentence. First intention factors are typically words common in natural language, so the vector of each first intention factor can be obtained by query and comparison against a pre-established vector library.
In some embodiments, the segmentation granularity is further reduced so that each segmentation result is directly a domain (subject), predication (predicate), or target (object); the intention vectors are then generated by combining and splicing these with one another.
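The segmentation-and-lookup step can be sketched as follows; the whitespace segmentation rule and the tiny vector library are toy stand-ins for the preset word-segmentation rule and the pre-established vector library, and the words and vectors are hypothetical.

```python
# Hypothetical pre-established vector library mapping intention factors
# (subject / predicate / object words) to their vectors.
VECTOR_LIBRARY = {
    "user":  [0.2, 0.1],
    "query": [0.7, 0.3],
    "price": [0.4, 0.9],
}

def segment(sentence):
    # Stand-in word-segmentation rule: whitespace split, lowercased.
    return sentence.lower().split()

def build_intention_vector(sentence):
    """Look up each segmented factor and splice the factor vectors."""
    factors = [w for w in segment(sentence) if w in VECTOR_LIBRARY]
    vec = []
    for f in factors:          # splice first-intention-factor vectors
        vec.extend(VECTOR_LIBRARY[f])
    return factors, vec

factors, intention_vec = build_intention_vector("User query price")
```

Real Chinese word segmentation would of course use a proper segmenter rather than whitespace.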
S140, inputting the first sentence vectors and the first intention factor vectors into a preset first attention model, and generating a second sentence vector and a plurality of second intention factor vectors;
After the first sentence vector and the first intention factor vectors are generated, the attention model computes the degree of association between each first intention factor vector and the first sentence vector, in order to identify how strongly each factor relates to the sentence. The association between an intention factor vector and the first sentence vector is computed from the positions of the intention factors, while the associations between different intention factor vectors are computed via a pre-built association database between Chinese characters. From the computed associations, a new vector is constructed for each intention factor; the factor vectors generated by the first attention model are the second intention factor vectors. After the second intention factor vectors are generated, they are spliced in turn, following the order of the factors in the target sentence, to generate the updated second sentence vector.
The first attention model here is a query-factor attention module. It updates the first factor vectors according to the associations among the factor vectors, and then updates the first sentence vector by splicing the second factor vectors, in vector-matrix form, into a second sentence vector. The updated second intention factor vectors and second sentence vector encode the associations among the intention factors, making the vectors of important information in the sentence more prominent and improving the accuracy of later intention understanding.
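A minimal sketch of the query-factor attention update, assuming dot-product scoring (the patent does not fix the similarity function): score each first intention factor vector against the first sentence vector, softmax-normalize the scores into the first parameter values, rescale each factor vector by its weight, and splice the results into the second sentence vector.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def query_factor_attention(sentence_vec, factor_vecs):
    # First attention distribution: one score per factor vector (assumed dot product).
    scores = [dot(sentence_vec, f) for f in factor_vecs]
    # Normalization (softmax) yields the first parameter values.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Second intention factor vectors: each factor rescaled by its weight.
    second_factors = [[w * x for x in f] for w, f in zip(weights, factor_vecs)]
    # Second sentence vector: the rescaled factor vectors spliced in order.
    second_sentence = [x for f in second_factors for x in f]
    return second_sentence, second_factors

s2, f2 = query_factor_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The factor aligned with the sentence vector receives the larger weight, so its entries dominate the spliced second sentence vector.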
S150, generating an intention score of the intention vector according to the second sentence vector and the intention vector;
And after the second intention factor vector and the second sentence vector are updated, calculating the intention score of each intention vector according to the plurality of second intention factor vectors, the second sentence vector and the intention vector.
A second attention distribution between each of the intent vectors and the second sentence vector is calculated, the second attention distribution being calculated by calculating a vector distance between each of the intent vectors and the second sentence vector.
The second attention distribution can be calculated with any of the following distances between the intention vector and the second sentence vector: Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance, cosine of the included angle, Hamming distance, or Jaccard distance.
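Several of the listed measures can be computed in a few lines; a minimal sketch with an assumed helper name `vector_distances` (the cosine entry is the included-angle cosine, a similarity rather than a distance):

```python
import numpy as np

def vector_distances(a, b):
    """A subset of the distance measures listed above, for illustration."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return {
        "euclidean": float(np.linalg.norm(a - b)),
        "manhattan": float(np.abs(a - b).sum()),
        "chebyshev": float(np.abs(a - b).max()),
        # cosine of the included angle (similarity, in [-1, 1])
        "cosine": float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))),
    }
```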
The attention vector of each intention vector is calculated from the second attention distribution as follows: the second attention distribution is multiplied by the intention vectors, and the product is the attention vector corresponding to each intention vector. After the attention vectors are calculated, they are spliced in sequence according to the positions in the target sentence of the intention vectors they correspond to, generating a third sentence vector; the score of the third sentence vector is then calculated through a fully-connected layer to obtain the sentence score of the second sentence vector.
After the sentence score is calculated, the factor score between each second intention factor vector and the intention vector it belongs to is calculated as follows: each second intention factor vector is converted into a vector matrix, each intention vector is converted into a corresponding vector matrix, the two matrices are multiplied to obtain a factor vector matrix, and the factor score of the factor vector matrix is calculated through the fully-connected layer.
The calculated sentence score and factor score are added to generate the intention score of each intention vector. The calculation of the intention score is not limited to this; in some embodiments the intention score can be computed in a weighted manner, for example by counting, from historical data, the weight of each intention vector in different sentence patterns and the weight of each factor vector within its intention vector, and then combining the scores with those weights.
S160, determining at least one real intention of the target sentence from at least one intention vector according to the intention score.
After the intention score of each intention vector is calculated, the intention vectors are screened according to a set screening threshold: the intention vectors whose intention scores are greater than or equal to the screening threshold are selected and determined as the real intentions of the target sentence. The number of real intentions of the target sentence is not limited to one and can be set according to the actual requirements of the specific application scene.
In some embodiments, the screening of the real intentions is accomplished by extracting a predetermined number of real intentions: the intention vectors are arranged in descending order of their intention scores, and the intentions corresponding to the top n intention vectors in the ranked list are extracted as the real intentions of the target sentence.
In some embodiments, in order to identify as far as possible the more obscure meanings entrained in the user's natural language, the screening of the real intentions varies dynamically before the user's real intention is identified. First, emotion recognition is performed on the voice information corresponding to the target sentence, for example through a neural network model trained to convergence. When the user is identified as expressing one stable emotion, the number of real intentions is determined to be 1; when two emotions are identified, the number is 2; when three are identified, the number is 3; and so on, the number of real intentions matches the number of identified emotions. The correspondence between user emotion and the number of screened real intentions is not limited to this and can be set according to the actual needs of the specific application scene.
In the above embodiment, the target sentence is encoded to generate the first sentence vector, and the target sentence is analyzed and split into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors. The association degree between each first intention factor vector and the first sentence vector is calculated through the attention model, and the second sentence vector and second intention factor vectors are generated by updating according to the association degrees. The intention score of each intention vector is generated from the second intention factor vectors, the second sentence vector, and the intention vector, and the real intention of the target sentence is then obtained. Because the association degree between each first intention factor vector and the first sentence vector is calculated and the first sentence vector and first intention factor vectors are updated accordingly, the updated second intention factor vectors and second sentence vector bias the scoring toward important information in the intention representation, so the calculated intention score represents the user's real intention more objectively and the accuracy of user intention processing is improved.
In some implementations, the first intention factor vector and the first sentence vector need to be updated by a similarity between the first intention factor vector and the first sentence vector. Referring to fig. 2, fig. 2 is a flow chart illustrating updating of the first intention factor vector and the first sentence vector according to the present embodiment.
As shown in fig. 2, S140 includes:
S141, calculating first attention distribution between each first intention factor vector and the first statement vector;
Calculating the first attention distribution between each first intention factor vector and the first sentence vector means calculating the similarity between the intention factor vector and the first sentence vector. This can be done from the positions of the intention factors, or the similarity between different intention factor vectors can be calculated from a pre-built association database between Chinese characters. In this way, the first attention distribution between each first intention factor vector and the first sentence vector is obtained.
S142, carrying out normalization processing on each first attention distribution to generate a first parameter value;
After the first attention distribution is calculated, because the distances between vectors differ, the numerical span of the first attention distribution is large, so the calculated first attention distribution needs to be normalized. Either linear (min-max) normalization or 0-mean (z-score) normalization can be used.
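Both normalization options mentioned here are standard; a minimal sketch (the function names are ours):

```python
def min_max_normalize(values):
    """Linear (min-max) normalization: rescale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def zero_mean_normalize(values):
    """0-mean (z-score) normalization: zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]
```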
S143, multiplying the first parameter value by a corresponding first intention factor vector to generate a corresponding second intention factor vector;
The calculated first parameter value of each first attention distribution is multiplied by the corresponding first intention factor vector; the calculated similarity thus acts as a weight that is folded into the second intention factor vector by the multiplication. This highlights the weight of important characters in the target sentence and reduces the weight of ordinary characters, making the feature vectors corresponding to the target sentence clearer and improving the accuracy of intention recognition.
The vector obtained by multiplying the first parameter value and the corresponding first intention factor vector is defined as a second intention factor vector.
S144, splicing the second intention factor vectors to generate the second statement vector.
After the second intention factor vectors are generated, they are spliced in turn according to the order of the corresponding factors in the target sentence, generating the updated second sentence vector.
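Putting S141–S144 together, a hedged end-to-end sketch — cosine similarity for the attention distribution and min-max normalization are assumed choices, not mandated by the text:

```python
import numpy as np

def update_factors(first_factors, first_sentence):
    """Sketch of S141-S144 under assumed cosine-similarity attention.

    first_factors: (n, d) first intention factor vectors.
    first_sentence: (d,) first sentence vector.
    """
    # S141: first attention distribution (cosine similarity to sentence vector)
    sims = first_factors @ first_sentence / (
        np.linalg.norm(first_factors, axis=1) * np.linalg.norm(first_sentence) + 1e-9)
    # S142: min-max normalization -> first parameter values
    params = (sims - sims.min()) / (sims.max() - sims.min() + 1e-9)
    # S143: weight each first factor vector -> second intention factor vectors
    second_factors = params[:, None] * first_factors
    # S144: splice in original order -> second sentence vector
    second_sentence = second_factors.reshape(-1)
    return second_factors, second_sentence
```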
In some implementations, a third sentence vector needs to be generated from the similarity between the intention vectors and the second sentence vector, and the sentence scores are then generated through the third sentence vector. Referring to fig. 3, fig. 3 is a flowchart illustrating the sentence score generation process according to the present embodiment.
As shown in fig. 3, S150 includes:
S151, calculating second attention distribution between each intention vector and the second statement vector;
A second attention distribution between each of the intent vectors and the second sentence vector is calculated, the second attention distribution being calculated by calculating a vector distance between each of the intent vectors and the second sentence vector.
The second attention distribution can be calculated with any of the following distances between the intention vector and the second sentence vector: Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, normalized Euclidean distance, Mahalanobis distance, cosine of the included angle, Hamming distance, or Jaccard distance.
S152, carrying out normalization processing on each second attention distribution to generate a second parameter value;
After the second attention distribution is calculated, because the distances between vectors differ, the numerical span of the second attention distribution is large, so the calculated second attention distribution needs to be normalized. Either linear (min-max) normalization or 0-mean (z-score) normalization can be used.
S153, multiplying the second parameter value by the corresponding intention vector to generate an attention vector corresponding to each intention vector;
The calculated second parameter value of each second attention distribution is multiplied by the corresponding intention vector; the calculated similarity acts as a weight that is folded into the updated intention vector by the multiplication, yielding the relative degree of attention of each intention vector.
The vector obtained by multiplying the second parameter value by its corresponding intention vector is defined as the attention vector.
And S154, splicing the attention vectors to generate a third sentence vector, and generating sentence scores of the intention vectors through the third sentence vector.
After the attention vectors are obtained through calculation, they are spliced in sequence according to the arrangement of the intention vectors they correspond to, generating the third sentence vector.
After the third sentence vector is generated, the feature distance between each intention vector and the third sentence vector is calculated; the calculated feature distances are mapped into a 1×1 fully-connected layer, expanded into a plane, and finally scored through a machine-learning scoring mechanism to obtain the sentence score of each intention vector.
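A hedged sketch of S151–S154, replacing the fully-connected layer with a plain linear scoring head `w`, `b` (assumed learned parameters), and using negative Euclidean distance for the second attention distribution:

```python
import numpy as np

def sentence_score(intent_vecs, second_sentence, w, b=0.0):
    """Sketch of S151-S154.

    intent_vecs: (n, d) intention vectors; second_sentence: (d,).
    w: (n*d,) weights and b: bias of an assumed linear scoring head.
    """
    # S151: second attention distribution from (negative) euclidean distance
    dists = np.linalg.norm(intent_vecs - second_sentence, axis=1)
    # S152: normalize to second parameter values (softmax over -distance)
    params = np.exp(-dists) / np.exp(-dists).sum()
    # S153: attention vector for each intention vector
    att_vecs = params[:, None] * intent_vecs
    # S154: splice into the third sentence vector, then score it
    third_sentence = att_vecs.reshape(-1)
    return float(third_sentence @ w + b)
```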
By scoring each intention vector against the third sentence vector, the importance of each intention vector to the intention representation of the overall target sentence can be measured, so the sentence score of each intention vector is calculated at the macro level.
In some implementations, factor scores within the intention vectors need to be calculated, and the intention score of each intention vector is then computed from the factor score and the sentence score. Referring to fig. 4, fig. 4 is a flowchart illustrating the scoring process according to the present embodiment.
As shown in fig. 4, S154 thereafter includes:
S155, converting each second intention factor vector and each intention vector into a corresponding vector matrix;
Each second intention factor vector is converted into a corresponding vector matrix. Each second intention factor vector consists of the characters corresponding to its intention factor, so the vector matrix of each second intention factor vector in the target sentence is formed by stacking the vectors corresponding to its characters.
Each intention vector is composed of one or more second intention factor vectors, and the second intention factor vectors composing the intention vectors are arranged in sequence to generate a corresponding vector matrix.
In this way, the vector matrix of each intention vector is obtained by conversion, and each second intention factor vector within each intention vector is converted into its corresponding vector matrix.
S156, multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
Each second intention factor vector is multiplied by the vector matrix of the corresponding intention vector to generate a factor vector matrix. The calculated factor vector matrix is mapped into a 1×1 fully-connected layer, expanded into a plane, and the expanded one-dimensional features are finally scored through a machine-learning scoring mechanism to obtain the factor score of each intention vector.
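The matrix multiplication and flattening step can be sketched as follows; `w` and `b` stand in for the learned fully-connected layer and are assumptions:

```python
import numpy as np

def factor_score(second_factor_matrix, intent_matrix, w, b=0.0):
    """Sketch of S155-S156: multiply the second-intention-factor matrix by
    the intention-vector matrix, flatten (the 'plane expansion'), and score
    with an assumed linear head (w, b) standing in for the 1x1 FC layer."""
    factor_matrix = second_factor_matrix @ intent_matrix  # factor vector matrix
    flat = factor_matrix.reshape(-1)                      # plane expansion
    return float(flat @ w + b)                            # scoring head
```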
And S157, adding the sentence score and the factor score to generate the intention score.
And adding the sentence score and the factor score of each calculated intention vector to obtain the intention score of each intention vector, and screening the real intention of the target sentence according to the intention score.
The calculation method of the intention score is not limited to this; in some embodiments the intention score can be calculated in a weighted manner according to the specific application scene. For example, the longer the text of the target sentence, the larger the weight of the sentence score and the smaller the weight of the factor score; conversely, the shorter the text, the smaller the sentence score weight and the larger the factor score weight. In one embodiment, the text character length of the target sentence is compared with a preset length threshold: when the ratio of the two is greater than 1, the sentence score weight is greater than 0.5; when the ratio is less than 1, the factor score weight is greater than 0.5. The specific mapping between the weight and the length ratio can be set according to the actual scene requirements.
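One possible (assumed) mapping from the length ratio to the weights that satisfies the constraint above — weight exactly 0.5 at the threshold, above 0.5 for longer sentences — is `ratio / (1 + ratio)`:

```python
def weighted_intent_score(sentence_score, factor_score, text_len, len_threshold):
    """Sketch of the length-weighted variant; the ratio/(1+ratio) mapping
    is an illustrative assumption, not mandated by the embodiment."""
    ratio = text_len / len_threshold
    w_sentence = ratio / (1.0 + ratio)  # > 0.5 iff text_len > len_threshold
    w_factor = 1.0 - w_sentence
    return w_sentence * sentence_score + w_factor * factor_score
```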
The factor score measures each intention vector from the micro level; combining the macro-level sentence score of each intention vector with its micro-level factor score yields the comprehensive score of the intention vector: the intention score. Because the intention score mixes the micro and macro scores of the intention vector, each intention score is more objective, so the true intention of the target sentence can be screened out.
In some embodiments, after the intention score of each intention vector is obtained, the intention scores need to be screened so that the intention vectors representing the real intentions of the target sentence can be identified. Referring to fig. 5, fig. 5 is a flowchart illustrating the real intention screening process according to the present embodiment.
As shown in fig. 5, S160 includes:
S161, arranging the at least one real intention in a descending order by taking the intention scores as ordering conditions;
After the intention score of each intention vector is calculated, the identified intention vectors are sorted in descending order of intention score.
S162, determining at least one real intention of the target sentence from the at least one intention vector according to a preset screening rule.
The real intentions of the target sentence are screened from the sorted list according to a preset screening rule. Specifically, the screening rule is: select the intention vectors ranked in the top three of the sorted list, and determine the intention characters corresponding to the selected intention vectors as the real intentions of the target sentence. The number of real intentions selected by the screening rule can be set according to specific needs, including (but not limited to) top 1, top 2, top 4 or more.
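The top-k screening rule of S161–S162 can be sketched in a few lines; the dict-based interface is an assumption:

```python
def screen_real_intents(intent_scores, top_k=3):
    """Sketch of S161-S162: sort candidate intents by intention score in
    descending order and keep the top_k (top-3 in the example above).
    intent_scores: dict mapping intent label -> intention score."""
    ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:top_k]]
```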
In some embodiments, after the real intention of the target sentence is determined, a map query statement needs to be generated according to the real intention. Referring to fig. 6, fig. 6 is a flowchart of map query statement generation according to the present embodiment.
As shown in fig. 6, after S160, the method further includes:
S171, searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking the user intention in historical data as a root node;
After the real intention of the target sentence is obtained through screening, the intention characters representing the real intention are used as the retrieval condition, and the semantic relation tree corresponding to the real intention is searched for in a preset sample database.
The semantic relation tree is a data structure pre-built in the sample database to identify various types of user intention. Specifically, each user intention collected from historical data is taken as a root node, and the user intention is split into the following word categories: predicate, operator, type, and attribute. The splitting of the user intention is not limited to these four categories; in practical application it can be adjusted to the actual requirements of the application scene, for example by removing the type category or adding categories such as subject.
When the user intention is split, it is split into a tree topology. Referring to fig. 7, fig. 7 is a schematic diagram of an example semantic relation tree according to the present embodiment. As shown in fig. 7, the user intention is taken as the root node, and predicates, operators, types, and attributes are taken as child nodes of the root node. Predicates record the main intent of the user intention, for example: applying, refunding, claiming, or purchasing information. Operators identify logical relations between predicates in the intention representation, for example: juxtaposition, order, condition, greater than, equal to, belonging to, or containing. Types refer to the conditions under which a predicate in the user intention is executed, such as: monetary amount, age, product, occupation, or disease. Attributes refer to the type of the user intention, for example: subject, object, age, occupation, weight, and payment period.
The semantic relation tree is constructed through a Biaffine model: the Biaffine model segments the various user intentions collected from historical data, then directly predicts the probability of a dependency relation between two words with a neural network, and finally classifies the different words according to the probabilities of the dependency relations between words in the user intention, forming the semantic relation tree.
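A biaffine dependency score of the kind used by Biaffine parsers is typically `s = h_head^T U h_dep + W [h_head; h_dep] + b`; a minimal sketch with assumed learned parameters `U`, `W`, `b`:

```python
import numpy as np

def biaffine_score(h_head, h_dep, U, W, b):
    """Sketch of a biaffine dependency score between two word
    representations h_head, h_dep (each shape (d,)).
    U: (d, d) bilinear term; W: (2d,) linear term; b: scalar bias.
    All three are assumed learned parameters."""
    bilinear = h_head @ U @ h_dep                     # h_head^T U h_dep
    linear = W @ np.concatenate([h_head, h_dep])      # W [h_head; h_dep]
    return float(bilinear + linear + b)
```

In a full parser these scores are computed for every head/dependent pair and normalized into dependency probabilities; this block only shows the scoring function itself.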
S172, extracting statement parameters in the target statement according to the semantic relation tree;
After the semantic relation tree of the real intention is matched, the sentence parameters corresponding to the semantic relation tree are collected according to the words in its leaf nodes. For example, the target sentence is: "I want to buy health insurance for two years", where "buying health insurance" is identified as the user's real intention. "Health insurance" becomes a leaf node of the semantic relation tree corresponding to the real intention; since this leaf node contains only the noun "health insurance" with no attached modifier, "two years" is extracted from the target sentence as the sentence parameter matching "health insurance". When there are multiple user intentions, the sentence parameters are extracted by separately extracting the parameter value corresponding to each leaf node.
S173, inputting the sentence parameters into the semantic relation tree to generate a map query sentence of the target sentence.
The extracted sentence parameters are respectively input into the corresponding leaf nodes of the semantic relation tree to generate the map query statement of the target sentence. The map query statement links the leaf nodes of the semantic relation tree in topological form, generating a cascade map query statement.
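One way the leaf nodes and extracted parameters might be linked into a cascade query; the data layout and output format here are purely illustrative stand-ins, not a real graph-query language:

```python
def build_cascade_query(relation_tree, params):
    """Hypothetical sketch: fill each leaf node of a semantic relation tree
    with its extracted sentence parameter and link the stages in order.
    relation_tree: {"leaves": [leaf_name, ...]} in leaf link order.
    params: dict mapping leaf_name -> extracted sentence parameter."""
    return [(leaf, params.get(leaf)) for leaf in relation_tree["leaves"]]
```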
In some embodiments, after retrieving the reply message, the reply message needs to be returned to the user terminal. Referring to fig. 8, fig. 8 is a schematic diagram illustrating a transmission flow of reply messages according to the present embodiment.
S181, searching reply information of the map query statement in a preset reply database according to the map query statement;
During the search, answers are queried in turn according to the link order of the leaf nodes, continuously narrowing the search range of matching answers until the retrieved answers satisfy the matching requirements of all leaf nodes, and those answers are returned as the reply information. For example, when the semantic relation tree has two leaf nodes, the two leaf nodes form a cascade map query statement: the first-stage leaf node retrieves, say, 20 groups of suitable answers, and the second-stage leaf node then matches only within those 20 recalled groups. This search mode avoids the global search matching that traditional multi-intention recognition requires for each user intention, improves the accuracy of the reply information, improves search efficiency, and saves computing resources.
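The stage-by-stage narrowing can be sketched as a chain of filters, each stage searching only within the candidates recalled by the previous stage; field names and the answer-record layout are assumptions:

```python
def cascade_search(answer_db, stages):
    """Sketch of the cascade search described above.
    answer_db: list of answer records (dicts).
    stages: ordered list of (field, value) filters, one per leaf node;
    each stage filters only within the previous stage's candidates."""
    candidates = answer_db
    for field, value in stages:
        candidates = [a for a in candidates if a.get(field) == value]
    return candidates
```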
S182, the reply information is sent to the user terminal which sends the target sentence.
After the reply information is retrieved, it is sent to the user terminal that sent the target sentence. In some embodiments, after a user terminal obtains the reply information for a target sentence, the server stores the target sentence in association with that user terminal. When the server receives the same target sentence again, it performs terminal adaptation: it sends query information to the user terminal that previously obtained the reply information, asking whether that user is willing to act as a consultant for the question (in some cases as a paid consultation). After the permission of the prior user terminal is obtained, a point-to-point communication channel is established between the terminal that repeated the question and that user terminal, and the user terminal then sends the corresponding reply information to the repeating terminal. This builds a communication bridge between existing users and consulting users, so that new users obtain the most pertinent guidance and advice while customer service resources are saved.
Referring specifically to fig. 9, fig. 9 is a schematic diagram illustrating a basic structure of a user intention processing device according to the present embodiment.
As shown in fig. 9, a user intention processing device includes: an acquisition module 1100, a processing module 1200, a word segmentation module 1300, an attention module 1400, a scoring module 1500, and an intent module 1600. The acquisition module is used for acquiring a target sentence to be identified; the processing module is used for carrying out coding processing on the target sentence according to a preset coder, and generating a first sentence vector of the target sentence; the word segmentation module is used for carrying out word segmentation processing on the target sentence according to a preset word segmentation rule, and generating at least one intention vector to be confirmed for representing the intention of the target sentence according to a word segmentation result and a first sentence vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors; the attention module is used for inputting the first sentence vectors and the first intention factor vectors into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors; the scoring module is used for generating an intention score of the intention vector according to the second sentence vector and the intention vector; an intent module is to determine at least one true intent of the target sentence from at least one intent vector based on the intent score.
The user intention processing device encodes the target sentence to generate the first sentence vector, analyzes the target sentence and splits it into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors; it calculates the association degree between each first intention factor vector and the first sentence vector through the attention model, and updates and generates the second sentence vector and second intention factor vectors according to the association degrees. The intention score of each intention vector is generated from the second intention factor vectors, the second sentence vector, and the intention vector, and the real intention of the target sentence is then obtained. Because the association degree between each first intention factor vector and the first sentence vector is calculated and the first sentence vector and first intention factor vectors are updated accordingly, the updated second intention factor vectors and second sentence vector bias the scoring toward important information in the intention representation, so the calculated intention score represents the user's real intention more objectively and the accuracy of user intention processing is improved.
Optionally, the user intention processing device further comprises: the system comprises a first computing sub-module, a first processing sub-module, a first generating sub-module and a first splicing sub-module. Wherein, the first calculation submodule is used for calculating first attention distribution between each first intention factor vector and the first statement vector; the first processing sub-module is used for carrying out normalization processing on each first attention distribution to generate a first parameter value; the first generation sub-module is used for multiplying the first parameter value with the corresponding first intention factor vector to generate a corresponding second intention factor vector; the first splicing sub-module is used for splicing the plurality of second intention factor vectors to generate the second statement vectors.
Optionally, the user intention processing device further comprises: a second calculation sub-module, a second processing sub-module, a second generation sub-module, and a second splicing sub-module. The second calculation sub-module is used for calculating a second attention distribution between each intention vector and the second sentence vector; the second processing sub-module is used for normalizing each second attention distribution to generate a second parameter value; the second generation sub-module is used for multiplying the second parameter value by the corresponding intention vector to generate the attention vector corresponding to each intention vector; the second splicing sub-module is used for splicing the attention vectors to generate the third sentence vector, and generating the sentence score of each intention vector through the third sentence vector.
Optionally, the user intention processing device further comprises: the device comprises a first conversion sub-module, a third calculation sub-module and a first intention sub-module. The first conversion submodule is used for converting each second intention factor vector and each intention vector into a corresponding vector matrix; the third calculation sub-module is used for multiplying each second intention factor vector by a vector matrix of the corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix; the first intention submodule is used for generating the intention score by adding the statement score and the factor score.
Optionally, the user intention processing device further comprises: the system comprises a first sequencing sub-module and a first screening sub-module. The first sorting sub-module is used for descending order of the at least one real intention by taking the intention score as a sorting condition; the first screening submodule is used for determining at least one real intention of the target sentence from the at least one intention vector according to preset screening rules.
Optionally, the user intention processing device further comprises: a first retrieval sub-module, a first extraction sub-module, and a third generation sub-module. The first retrieval sub-module is used for searching, with the real intention as the retrieval condition, a preset sample database for the semantic relation tree corresponding to the real intention, wherein the semantic relation tree is a semantic expression topology sample constructed with the user intention in historical data as the root node; the first extraction sub-module is used for extracting the sentence parameters in the target sentence according to the semantic relation tree; and the third generation sub-module is used for inputting the sentence parameters into the semantic relation tree to generate the map query statement of the target sentence.
Optionally, the user intention processing device further comprises a first query sub-module and a first reply sub-module. The first query sub-module is used for searching a preset reply database for reply information of the map query sentence according to the map query sentence; the first reply sub-module is used for sending the reply information to the user terminal that sent the target sentence.
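The query and reply sub-modules reduce to a lookup followed by a send. In this sketch the preset reply database is modelled as a plain dict and the transport as a callback, both assumptions, since the patent does not constrain either:

```python
def reply_to_user(graph_query, reply_db, send):
    """Look up the map query sentence in the preset reply database and
    send the reply information back to the user terminal (via `send`)."""
    reply = reply_db.get(graph_query, "Sorry, no answer found.")
    send(reply)
    return reply
```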
To solve the above technical problems, an embodiment of the present invention further provides a computer device. Referring specifically to fig. 10, fig. 10 is a basic structural block diagram of the computer device according to this embodiment.
As shown in fig. 10, the internal structure of the computer device is illustrated schematically. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium stores an operating system, a database, and computer readable instructions; the database may store a control information sequence, and the computer readable instructions, when executed by the processor, may cause the processor to implement a user intention processing method. The processor provides the computing and control capabilities that support the operation of the entire computer device. The memory may store computer readable instructions that, when executed by the processor, cause the processor to perform the user intention processing method. The network interface is used for communicating with a terminal over a network connection. Those skilled in the art will appreciate that the structure shown in fig. 10 is merely a block diagram of some of the structures related to the present arrangement and does not limit the computer device to which the present arrangement may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to perform the specific functions of the obtaining module 1100, the processing module 1200, the word segmentation module 1300, the attention module 1400, the scoring module 1500, and the intention module 1600 in fig. 9, and the memory stores the program codes and the various types of data required for executing these modules. The network interface is used for data transmission with the user terminal or the server. The memory in this embodiment stores the program codes and data required for executing all the sub-modules of the user intention processing device, and the server can call these program codes and data to execute the functions of all the sub-modules.
The computer device encodes a target sentence to generate a first sentence vector, analyzes the target sentence and splits it into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors, calculates the association degree between each first intention factor vector and the first sentence vector through an attention model, and updates the vectors according to the association degree to generate a second sentence vector and second intention factor vectors. Intention scores of the intention vectors are then generated from the second intention factor vectors, the second sentence vector and the intention vectors, and the real intention of the target sentence is obtained from the scores. Because the association degree between each first intention factor vector and the first sentence vector is calculated, the first sentence vector and the first intention factor vectors can be updated according to that association degree, and the updated second intention factor vectors and second sentence vector bias the scoring toward the important information or intention representations, so that the calculated intention score reflects the real intention of the user more objectively and the accuracy of user intention processing is improved.
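The attention update described above can be sketched end to end. The patent specifies an attention distribution, a normalization into parameter values, a weighting of the factor vectors, and a splice into the second sentence vector, but not the scoring function; the dot-product scoring and softmax normalization below are assumptions, and all names are illustrative:

```python
import numpy as np

def update_with_attention(sentence_vec, factor_vecs):
    """Score each first intention factor vector against the first
    sentence vector (dot product), normalize the scores into parameter
    values (softmax), weight each factor vector by its parameter value
    to obtain the second intention factor vectors, and splice
    (concatenate) them into the second sentence vector."""
    scores = np.array([np.dot(f, sentence_vec) for f in factor_vecs])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()  # normalization: parameter values sum to 1
    second_factors = [w * f for w, f in zip(weights, factor_vecs)]
    second_sentence = np.concatenate(second_factors)
    return second_sentence, second_factors
```

Factor vectors that align more closely with the sentence vector receive larger parameter values, which is how the update biases the subsequent scoring toward the important intention information.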
The present invention also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the user intention processing method in any of the embodiments described above.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored in a computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
Claims (7)
1. A user intention processing method, comprising:
Acquiring a target sentence to be identified;
Encoding the target sentence according to a preset encoder to generate a first sentence vector of the target sentence;
Performing word segmentation processing on the target sentence according to a preset word segmentation rule, and generating at least one intention vector to be confirmed for representing the intention of the target sentence according to a word segmentation result and a first sentence vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
Inputting the first sentence vector and each first intention factor vector into a preset first attention model, and generating a second sentence vector and a plurality of second intention factor vectors;
generating an intention score of the intention vector according to the second sentence vector and the intention vector;
determining at least one real intention of the target sentence from at least one intention vector according to the intention score;
wherein the inputting the first sentence vector and each first intention factor vector into a preset first attention model, and generating a second sentence vector and a plurality of second intention factor vectors comprises:
calculating a first attention distribution between each first intention factor vector and the first sentence vector;
normalizing each first attention distribution to generate a first parameter value;
multiplying the first parameter value by the corresponding first intention factor vector to generate a corresponding second intention factor vector;
splicing the plurality of second intention factor vectors to generate the second sentence vector;
The generating an intent score for the intent vector from the second sentence vector and the intent vector comprises:
calculating a second attention distribution between each of the intent vectors and the second sentence vector;
Normalizing each second attention distribution to generate a second parameter value;
Multiplying the second parameter value by the corresponding intention vector to generate an attention vector corresponding to each intention vector;
splicing the attention vectors to generate a third sentence vector, and generating a sentence score of each intention vector through the third sentence vector;
wherein the splicing the attention vectors to generate a third sentence vector, and generating a sentence score of each intention vector through the third sentence vector comprises:
Converting each second intention factor vector and each intention vector into a corresponding vector matrix;
multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
And adding the sentence score and the factor score to generate the intention score.
2. The user intent processing method as claimed in claim 1, wherein the determining at least one real intent of the target sentence from at least one intent vector according to the intent score comprises:
arranging the at least one real intention in descending order with the intention score as the sorting condition;
and determining at least one real intention of the target sentence from the at least one intention vector according to a preset screening rule.
3. The method according to claim 1, wherein after the determining at least one real intention of the target sentence from at least one intention vector according to the intention score, the method further comprises:
searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking the user intention in historical data as a root node;
Extracting statement parameters in the target statement according to the semantic relation tree;
and inputting the sentence parameters into the semantic relation tree to generate a map query sentence of the target sentence.
4. The user intention processing method as claimed in claim 3, wherein after the inputting the sentence parameters into the semantic relation tree to generate the map query sentence of the target sentence, the method further comprises:
searching reply information of the map query statement in a preset reply database according to the map query statement;
And sending the reply information to the user terminal sending the target sentence.
5. A user intention processing device, comprising:
The acquisition module is used for acquiring the target statement to be identified;
the processing module is used for carrying out coding processing on the target sentence according to a preset coder to generate a first sentence vector of the target sentence;
The word segmentation module is used for carrying out word segmentation processing on the target sentence according to a preset word segmentation rule, and generating at least one intention vector to be confirmed for representing the intention of the target sentence according to a word segmentation result and a first sentence vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
the attention module is used for inputting the first sentence vector and each first intention factor vector into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors;
the scoring module is used for generating an intention score of the intention vector according to the second sentence vector and the intention vector;
the intention module is used for determining at least one real intention of the target sentence from at least one intention vector according to the intention score;
the user intention processing device further includes:
a first calculation sub-module for calculating a first attention distribution between each first intention factor vector and the first sentence vector;
the first processing submodule is used for carrying out normalization processing on each first attention distribution to generate a first parameter value;
The first generation sub-module is used for multiplying the first parameter value with the corresponding first intention factor vector to generate a corresponding second intention factor vector;
the first splicing sub-module is used for splicing the plurality of second intention factor vectors to generate the second sentence vector;
the user intention processing device further includes:
a second calculation sub-module for calculating a second attention distribution between each intention vector and the second sentence vector;
the second processing submodule is used for carrying out normalization processing on each second attention distribution to generate a second parameter value;
the second generation submodule is used for multiplying the second parameter value with the corresponding intention vector to generate an attention vector corresponding to each intention vector;
the second splicing sub-module is used for splicing the attention vectors to generate a third sentence vector, and generating a sentence score of each intention vector through the third sentence vector;
the user intention processing device further includes:
The first conversion submodule is used for converting each second intention factor vector and each intention vector into a corresponding vector matrix;
A third computing sub-module, configured to multiply each of the second intention factor vectors with a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generate a factor score of the factor vector matrix;
and the first intention submodule is used for generating the intention score by adding the statement score and the factor score.
6. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the user intended processing method of any of claims 1 to 4.
7. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the user intent processing method as claimed in any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110567377.5A CN113157892B (en) | 2021-05-24 | 2021-05-24 | User intention processing method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113157892A CN113157892A (en) | 2021-07-23 |
CN113157892B true CN113157892B (en) | 2024-09-06 |
Family
ID=76877820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110567377.5A Active CN113157892B (en) | 2021-05-24 | 2021-05-24 | User intention processing method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113157892B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113722457B (en) * | 2021-08-11 | 2024-08-06 | 北京零秒科技有限公司 | Intention recognition method and device, storage medium and electronic device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109388793A (en) * | 2017-08-03 | 2019-02-26 | 阿里巴巴集团控股有限公司 | Entity mask method, intension recognizing method and corresponding intrument, computer storage medium |
CN109815492A (en) * | 2019-01-04 | 2019-05-28 | 平安科技(深圳)有限公司 | A kind of intension recognizing method based on identification model, identification equipment and medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109815314B (en) * | 2019-01-04 | 2023-08-08 | 平安科技(深圳)有限公司 | Intent recognition method, recognition device and computer readable storage medium |
CN110825949B (en) * | 2019-09-19 | 2024-09-13 | 平安科技(深圳)有限公司 | Information retrieval method based on convolutional neural network and related equipment thereof |
CN111125331B (en) * | 2019-12-20 | 2023-10-31 | 京东方科技集团股份有限公司 | Semantic recognition method, semantic recognition device, electronic equipment and computer readable storage medium |
CN111782965B (en) * | 2020-06-29 | 2023-08-11 | 北京百度网讯科技有限公司 | Intention recommendation method, device, equipment and storage medium |
CN112699679B (en) * | 2021-03-25 | 2021-06-29 | 北京沃丰时代数据科技有限公司 | Emotion recognition method and device, electronic equipment and storage medium |
- 2021-05-24 CN CN202110567377.5A patent/CN113157892B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109388793A (en) * | 2017-08-03 | 2019-02-26 | 阿里巴巴集团控股有限公司 | Entity mask method, intension recognizing method and corresponding intrument, computer storage medium |
CN109815492A (en) * | 2019-01-04 | 2019-05-28 | 平安科技(深圳)有限公司 | A kind of intension recognizing method based on identification model, identification equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN113157892A (en) | 2021-07-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||