CN108460111B - Personal character modeling and generating method and device based on conversation - Google Patents
- Publication number
- CN108460111B (application CN201810130189.4A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- user
- transfer
- session data
- conversation
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Machine Translation (AREA)
Abstract
The invention provides a dialog-based personal character modeling and generating method and device. The method comprises the following steps: acquiring a plurality of pieces of session data of a specified user; determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data; and continuously adjusting the responses of the conversation according to the conversation input of the specified user, based on the actual emotion transfer sequences, so as to stimulate the emotion of the specified user toward the expected emotion. In this embodiment, the actual emotion transfer sequence of the specified user is obtained and the conversational responses are adjusted during the conversation, so that the emotion of the specified user is steered toward the expected emotion, improving the user experience. In addition, in this embodiment the dialog can be guided according to the user's emotion, improving the accuracy and robustness of the conversation.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a personal character modeling and generating method and device based on a dialogue.
Background
In the related art, to obtain an emotion model of a specified user, the human emotion process is regarded as a two-layer stochastic process based on the specified user's dialogue, and psychological models with different character traits are constructed by adjusting the initial parameters of the model; the model also serves as an emotion engine that predicts the final result of the emotion process probabilistically. However, this method is implemented on the basis of statistical model rules and can only classify the emotion of the specified user into sadness, calm and happiness, which fits the actual life of the specified user poorly.
Disclosure of Invention
The embodiment of the invention provides a personal character modeling and generating method and device based on a conversation, and aims to overcome the defects in the related technology.
In a first aspect, the present invention provides a dialog-based personal personality modeling and generating method, comprising:
acquiring a plurality of pieces of session data of a specified user;
determining a plurality of actual emotion transfer sequences of the designated user based on the reference emotion transfer sequence of the designated user and the plurality of pieces of session data;
continuously adjusting the response of the conversation according to the conversation input of the specified user based on the actual emotion transfer sequence of the specified user so as to stimulate the emotion of the specified user to adjust to the expected emotion.
Optionally, before the step of determining a plurality of actual emotion transfer sequences of the designated user based on the reference emotion transfer sequence of the designated user and the plurality of pieces of session data, the method further includes:
acquiring a first preset number of pieces of session data of a user;
carrying out emotion recognition on the first preset number of pieces of session data by using a Support Vector Machine (SVM) to obtain a second preset number of emotions;
respectively marking the session data of the second preset number of emotions;
carrying out probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor of the specified user;
sampling the emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence which accords with the emotion characteristics of the specified user; the emotion transition probability distribution is corresponding data in the emotion transition probability table.
Optionally, the conversation data includes television drama lines, movie lines, or dialogues from conversation websites in different languages; the second preset number of emotions includes neutral, happy, surprised, sad and angry.
Optionally, the emotion transfer distribution tensor is expressed in the form of:
{ initial emotion, stimulated emotion, generated emotion, transition probability }.
Optionally, the preset sampling algorithm is a Markov chain Monte Carlo (MCMC) sampling method.
Optionally, sampling the emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence conforming to the emotion characteristics of the specified user includes:
based on the initial quantity of the sampling session length, the session length is adjusted according to a preset step length, and the emotion transfer probability distribution is sampled according to a preset sampling algorithm, so that a reference emotion transfer sequence which accords with the emotion characteristics of the specified user is obtained.
In a second aspect, the present invention provides a dialog-based personal personality modeling and generating apparatus, the apparatus comprising:
the session data acquisition module is used for acquiring a plurality of pieces of session data of a specified user;
the actual emotion sequence determination module is used for determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data;
and the specified user emotion adjusting module is used for continuously adjusting the responses of the conversation according to the conversation input of the specified user based on the actual emotion transfer sequence of the specified user, so as to stimulate the emotion of the specified user toward the expected emotion.
Optionally, the apparatus further comprises:
the session data acquisition module is further used for acquiring a first preset number of pieces of session data of the user;
the emotion recognition module is used for carrying out emotion recognition on the first preset number of pieces of session data by using a Support Vector Machine (SVM) to obtain a second preset number of emotions;
the session data marking module is used for respectively marking the session data of the second preset number of emotions;
the emotion transfer probability statistics module is used for carrying out probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor of the specified user;
the emotion transfer distribution sampling module is used for sampling emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence which accords with the emotion characteristics of the specified user; the emotion transition probability distribution is corresponding data in the emotion transition probability table.
Optionally, the emotion transfer distribution sampling module includes:
based on the initial quantity of the sampling session length, the session length is adjusted according to a preset step length, and the emotion transfer probability distribution is sampled according to a preset sampling algorithm, so that a reference emotion transfer sequence which accords with the emotion characteristics of the specified user is obtained.
According to the technical scheme, the plurality of actual emotion transfer sequences of the specified user are determined according to the plurality of pieces of session data and the reference emotion transfer sequence of the specified user; the responses of the conversation are then continuously adjusted according to the conversation input of the specified user, based on the actual emotion transfer sequences, so as to stimulate the emotion of the specified user toward the expected emotion. Thus, in this embodiment, the actual emotion transfer sequence of the specified user is obtained and the conversational responses are adjusted during the conversation, so that the emotion of the specified user is steered toward the expected emotion, improving the user experience of the specified user. In addition, in this embodiment the dialog can be guided according to the emotion of the specified user, improving the accuracy and robustness of the conversation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for dialogue-based personal character modeling and generation according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for obtaining a reference emotion transfer sequence according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a process of classifying emotion in session data by using SVM in an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating MCMC sampling principles according to an embodiment of the present invention;
FIG. 5 is a sampled emotion transfer sequence diagram for the movie Before Sunset;
FIG. 6 is a block diagram of a personal personality modeling and generating device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a method for modeling and generating a personal character based on a dialog according to an embodiment of the present invention. Referring to fig. 1, the method includes:
101, a plurality of pieces of session data of a specified user are acquired.
102, determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of session data.
103, continuously adjusting the response of the conversation according to the conversation input of the specified user based on the actual emotion transfer sequence of the specified user so as to stimulate the emotion of the specified user to adjust to the expected emotion.
According to the technical scheme, the actual emotion transfer sequence of the specified user is obtained, and then the response of the conversation is adjusted in the conversation process, so that the emotion of the specified user is stimulated to be adjusted towards the expected emotion, and the use experience of the specified user is improved. In addition, in the embodiment, the dialog can be guided according to the emotion of the specified user, and the accuracy and the robustness of the conversation are improved.
The steps of the emotion transfer distribution modeling method provided by the embodiment of the present invention are described in detail below with reference to the accompanying drawings and embodiments.
First, step 101, acquiring a plurality of pieces of session data of a specified user, is introduced.
The number of pieces of session data may be 1, 10, 100, 1000, and so on; those skilled in the art may set it according to the specific scenario, which is not limited herein.
The session data includes a question and an answer. For example, the question "have you eaten" and the answer "I have eaten" together constitute one piece of session data.
In the embodiment of the invention, crawler technology from the related art is used to obtain a plurality of pieces of session data from sources such as television and social media websites. The conversation data can be television drama lines, movie lines, dialogues from conversation websites in different languages, and the like.
Next, step 102, determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data, is introduced.
It should be noted that, in this embodiment, the reference emotion transfer sequence and the actual emotion transfer sequence can be represented in the following form: { initial emotion, stimulated emotion, generated emotion, transition probability }. The options for the initial emotion, the stimulated emotion and the generated emotion are: neutral, happy, surprised, sad and angry, with the corresponding labels "0, 1, 2, 3, 4". The meaning of the emotion transfer distribution tensor is: given that the initial emotion of the specified user is a certain emotion, when the specified user is stimulated by a certain emotion from the external interface, the probability value that the emotion of the specified user transfers to the generated emotion.
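As an illustration, the labeled emotions and one entry of the representation above can be sketched in a few lines of Python; the entry's probability value (0.35) is invented for illustration and is not from the patent:

```python
# Emotion labels "0, 1, 2, 3, 4" as described in the text.
EMOTIONS = ["neutral", "happy", "surprised", "sad", "angry"]

# One entry of the form {initial emotion, stimulated emotion,
# generated emotion, transition probability}; the 0.35 is made up.
entry = {"initial": 0, "stimulus": 1, "generated": 1, "probability": 0.35}

def describe(e):
    """Spell out the meaning of a transfer-tensor entry in words."""
    return (f"P({EMOTIONS[e['generated']]} | start={EMOTIONS[e['initial']]}, "
            f"stimulus={EMOTIONS[e['stimulus']]}) = {e['probability']}")

print(describe(entry))  # → P(happy | start=neutral, stimulus=happy) = 0.35
```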
In this embodiment, a session input of the specified user is obtained, and a plurality of actual emotion transfer sequences of the specified user are then determined based on the reference emotion transfer sequence and the session input. For example, if the specified user inputs "have you eaten", the reply content "I have eaten" is determined based on the reference emotion transfer sequence.
It can be understood that the actual emotion transfer sequence reflects the actual emotion of the specified user in the current conversation situation. The current emotion of the specified user is thus acquired in real time, which helps improve the user experience.
Finally, step 103, continuously adjusting the responses of the conversation according to the conversation input of the specified user based on the actual emotion transfer sequence of the specified user so as to stimulate the emotion of the specified user toward the expected emotion, is introduced.
In this embodiment, the conversation input of the specified user is continuously acquired, and then the answer corresponding to the conversation input is adjusted based on the actual emotion transfer sequence of the specified user. Therefore, in the embodiment, the answer of the conversation is adjusted according to the conversation input and the actual emotion transfer sequence of the specified user, and the emotion of the specified user can be stimulated to be adjusted towards the expected emotion, so that the conversation experience of the specified user is improved.
It can be understood that steps 102 and 103 can be performed one or more times. During execution, the actual emotion transfer sequence is continuously updated, and the responses of the conversation are then continuously adjusted according to the new session data and the actual emotion transfer sequence, so that the direction of the conversation is gradually steered to stimulate the emotion of the specified user toward the expected emotion.
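The repeated execution of steps 102 and 103 can be sketched as a simple loop. The classifier and reply chooser below are toy stand-ins invented for illustration, not the patent's actual components:

```python
def dialogue_loop(utterances, classify, choose_reply, target=1):
    """Per turn: re-estimate the user's emotion (step 102), then pick a
    reply intended to steer the emotion toward the target label
    (step 103; here 1 = happy)."""
    history, replies = [], []
    for utterance in utterances:
        emotion = classify(utterance)              # actual emotion this turn
        history.append(emotion)
        replies.append(choose_reply(emotion, target))
    return history, replies

# toy stand-ins for illustration only (3 = sad, 0 = neutral)
classify = lambda text: 3 if "sad" in text else 0
choose_reply = lambda emo, target: "cheer up!" if emo == 3 else "tell me more"

hist, reps = dialogue_loop(["i feel sad today", "ok, thanks"],
                           classify, choose_reply)
```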
It should be noted that, in the embodiment of the present invention, before the dialog-based personal character modeling and generating method is executed, the reference emotion transfer sequence needs to be generated. As shown in FIG. 2, the step of obtaining the reference emotion transfer sequence comprises:
first, introduction 201 is a step of obtaining a first preset number of pieces of session data in social media.
The first preset number may be 1000, 10000, 100000, etc., and may be set by a person skilled in the art according to a specific scenario, which is not limited herein. In one embodiment, the first predetermined number is 22172.
The session data includes a question and an answer. For example, the question "have you eaten" and the answer "I have eaten" together constitute one piece of session data.
In the embodiment of the invention, crawler technology from the related art is used to obtain a first preset number of pieces of session data from sources such as television and social media websites. The conversation data can be television drama lines, movie lines, dialogues from conversation websites in different languages, and the like.
The session data acquired by crawler technology are therefore random, and conversations or lines from social media are not specific to any particular user. The acquired session data thus introduce a representation of the uncertainty of human emotion transfer into the method and are convenient for machine learning, avoiding the need for computational linguistics experts to design or write conversation schemes and improving the maintainability of the system.
Next, step 202 is introduced: performing emotion recognition on the first preset number of pieces of session data using a Support Vector Machine (SVM) to obtain a second preset number of emotions.
The second preset number may be 3, 4, 5 or more, and may be set by those skilled in the art according to the specific situation. In one embodiment, the second predetermined number is 5, i.e. the second predetermined number of emotions may be neutral, happy, surprised, sad and angry.
In an embodiment of the present invention, a Support Vector Machine (SVM) is used to perform emotion recognition on the first preset number of pieces of session data, that is, each piece of session data is assigned one of neutral, happy, surprised, sad or angry. The specific process of SVM emotion classification on the session data is shown in FIG. 3: the session data with the 5 types of labels are vectorized as text, features are selected, the TF-IDF weight of each feature is calculated, and finally model training and prediction are performed to obtain the classification result of the session data.
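A minimal sketch of the described TF-IDF + SVM classification, using scikit-learn as an assumed library (the patent names no implementation); the five training lines and their labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set; a real run would use the first preset
# number (e.g. 22172) of labeled session lines.
texts = ["i am so glad to see you", "this is terrible news",
         "what a surprise", "i have no opinion about it",
         "how dare you say that"]
labels = [1, 3, 2, 0, 4]  # 0 neutral, 1 happy, 2 surprised, 3 sad, 4 angry

# Text vectorization + TF-IDF weighting + linear SVM, as in the text.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
prediction = int(clf.predict(["i am glad you came"])[0])
```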
Therefore, the emotion classification is carried out from the session data through the support vector machine SVM, and the emotion classification accuracy can be ensured.
Then, step 203, marking the session data with the second preset number of emotions, is introduced.
To facilitate subsequent quantitative calculation, in one embodiment of the invention the 5 emotions are replaced by the labels "0, 1, 2, 3 and 4", i.e., "neutral, happy, surprised, sad and angry" are marked as "0, 1, 2, 3 and 4" respectively.
Then, step 204 is introduced: performing probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor.
The emotion transfer distribution tensor is expressed in the following form: { initial emotion, stimulated emotion, generated emotion, transition probability }. The options for the initial emotion, the stimulated emotion and the generated emotion are: neutral, happy, surprised, sad and angry, with the corresponding labels "0, 1, 2, 3, 4". The meaning of the emotion transfer distribution tensor is: given that the initial emotion of the specified user is a certain emotion, when the specified user is stimulated by a certain emotion from the external interface, the probability value that the emotion of the specified user transfers to the generated emotion.
In an embodiment, probability statistics are performed on the first preset number of pieces of session data, with the emotion of each piece of session data taken in turn as the initial emotion, the stimulated emotion and the generated emotion, so that an emotion transition probability table of size (second preset number) x (second preset number) can be obtained. In one embodiment, if the second preset number is 5, the size of the emotion transfer probability table is 5 x 5.
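The probability statistics can be sketched by counting adjacent emotion labels in labeled conversations and normalizing each row. This is a simplified pairwise version of the statistics described (the full tensor also conditions on the stimulated emotion), with invented toy data:

```python
def transition_table(label_sequences, n=5):
    """Build an n-by-n emotion transition probability table by counting
    adjacent label pairs and normalizing each row."""
    counts = [[0] * n for _ in range(n)]
    for seq in label_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    table = []
    for row in counts:
        total = sum(row)
        # rows with no observations fall back to a uniform distribution
        table.append([c / total if total else 1.0 / n for c in row])
    return table

# toy labeled conversations using labels 0-4 as in the text
seqs = [[1, 1, 0, 3], [3, 3, 4], [0, 1, 2, 1]]
table = transition_table(seqs)
```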
Finally, step 205 is introduced: sampling the emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence conforming to the emotion characteristics of the specified user.
In this embodiment, the preset sampling algorithm may be the Markov chain Monte Carlo (MCMC) sampling method. Its sampling process is as follows:
Here 0, ..., t, t+1 is the time series at sampling, x0, x1, ... is the sample series, and α(i, j) denotes the acceptance rate from state i to state j; its physical meaning is that, when jumping from state i to state j with probability q(i, j) on the original Markov chain, this transition is accepted with probability α(i, j).
As shown in FIG. 4, the algorithm of the Markov chain Monte Carlo sampling method is as follows:
Assume there is a transition matrix Q (corresponding to q(i, j)); the initial state of the Markov chain is a probability distribution, e.g., p = (0.2, 0.1, 0.3, 0.2, 0.2); sampling then proceeds according to the following algorithm:
First, randomly select an initial state xt, e.g., state 1 (representing the happy emotion). At the next time t+1, sample a candidate y, whose state is determined by multinomial sampling according to the transition probabilities. It must then be decided whether to accept the sample y: a value u (between 0 and 1) is drawn from the uniform distribution and compared with the acceptance rate alpha, where alpha = p(y)·q(xt | y). If u is less than alpha, the transition from xt to y is accepted, i.e., the value of y is assigned to xt+1; otherwise the transition is not accepted and xt+1 remains the state corresponding to xt. In the experimental process, a convergence limit L can be set: a sample is accepted only when the sampling result no longer changes over L iterations, which strengthens the representativeness of the sampled sequence. This yields a sample sequence, e.g., 1, 2, 0, 0, 1, 1, 0, 1 (assuming a sampling length of 8), which is an emotion transfer sequence representing happy -> surprised -> neutral -> ...
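The sampling step described above can be sketched as follows. The acceptance rate alpha = p(y)·q(y, x) follows the basic form in the text; note that the Metropolis-Hastings variant instead uses min(1, p(y)q(y,x)/(p(x)q(x,y))) for a higher acceptance rate. The uniform proposal matrix and target distribution are example values:

```python
import random

def mcmc_sample(Q, p, length, seed=42):
    """Sample an emotion-label chain: propose the next state from row
    Q[x], accept the move with probability alpha = p[y] * Q[y][x]."""
    rng = random.Random(seed)
    n = len(p)
    x = rng.randrange(n)                            # random initial state
    chain = [x]
    while len(chain) < length:
        y = rng.choices(range(n), weights=Q[x])[0]  # propose from Q[x]
        alpha = p[y] * Q[y][x]                      # acceptance rate
        if rng.random() < alpha:                    # accept the transition
            x = y
        chain.append(x)                             # record current state
    return chain

# uniform proposal over the 5 emotion labels, and the example target
# distribution p = (0.2, 0.1, 0.3, 0.2, 0.2) from the text
Q = [[0.2] * 5 for _ in range(5)]
p = [0.2, 0.1, 0.3, 0.2, 0.2]
seq = mcmc_sample(Q, p, length=8)  # an 8-step emotion transfer sequence
```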
In this embodiment, the Markov chain Monte Carlo sampling method is used to sample the emotion transfer probability distribution, so that a reference emotion transfer sequence conforming to the emotion characteristics of the specified user can be obtained. In one embodiment, obtaining the emotion transfer sequence may further comprise:
Starting from an initial sampling session length, the session length is adjusted according to a preset step, and the emotion transfer probability distribution is sampled according to the preset sampling algorithm, so as to obtain a reference emotion transfer sequence conforming to the emotion characteristics of the specified user.
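Adjusting the sampled session length from an initial value by a preset step can be sketched as follows; the sampler here is a random stand-in for the MCMC procedure, used only to show the length sweep:

```python
import random

def sample_sequence(length, seed=0):
    """Stand-in for the MCMC sampler: draw `length` emotion labels."""
    rng = random.Random(seed)
    return [rng.randrange(5) for _ in range(length)]

# start from an initial session length and grow by a preset step;
# lengths 2, 4, 6, 8, 10 mirror the experiment in the embodiment section
initial, step, groups = 2, 2, 5
sequences = {initial + i * step: sample_sequence(initial + i * step)
             for i in range(groups)}
```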
Thus, in this embodiment, by acquiring dialogs of unspecified users, the uncertainty of human emotion transfer can be introduced and the over-dependence on a conversation database in the related art can be overcome, improving the maintainability of the system and making the method better suited to machine learning. In addition, in this embodiment, through a statistical method, the emotion distribution model of the specified user can be determined, that is, the emotion transfer sequence can be generated, so that the dialog can be guided according to the emotion of the specified user, improving the accuracy and robustness of the conversation.
Example one
In this embodiment, the emotion transfer distribution is generated using English session data. First, 22172 sentences of session data are collected from the movie "Before Sunset", where ∞ represents the end of a session; the session data is shown in Table 1. The conversation involves no other characters and the data has low noise.
TABLE 1 Session data
Then, the SVM is used to perform emotion classification on the session data, and the emotion category of each piece of session data, "neutral, happy, surprised, sad or angry", is labeled with "0, 1, 2, 3 and 4" respectively. The classification results are shown in Table 2.
Then, the emotion transfer probabilities of the session data are calculated: the emotion transfers are counted by a statistical method according to the emotion transfer distribution tensor, with the statistical results shown in Tables 2(a)-(e).
TABLE 2 Emotion transition probabilities of the session data of the movie Before Sunset
Finally, in this embodiment, different sampling session lengths are set: 2, 4, 6, 8 and 10 respectively. The initial emotion is selected as [sad], the stimulated emotion is [neutral, happy, surprised, sad, angry] respectively, five groups of emotion transfer probability distributions are obtained, and the emotion transfer sequences of the movie Before Sunset obtained using the MCMC sampling method are shown in Table 3.
TABLE 3 MCMC sampling results
In this embodiment, the initial emotion is selected as sad, with the five groups of probability values in Table 2(d); each group is sampled with 5 different convergence limits (2, 4, 6, 8, 10), yielding the emotion transfer sequences shown in FIGS. 5(a)-(e) (in FIG. 5, the abscissa is the conversation sequence, the ordinate is the emotion label, and lines of different thickness indicate different convergence limits).
In this embodiment, the result of random sampling can be changed by setting different convergence values. It can be understood that the larger the convergence value and the longer the sampling time, the better the result conforms to the emotional characteristics of the specified user. On this basis, human-machine or robot conversation can be simulated in this embodiment: by sampling a number of different emotion transfer sequences, an appropriate sampled sequence is selected based on the actual emotion and the expected emotion of the specified user during the conversation; that is, the emotion represented by the selected sequence is used to purposefully control the robot's emotion transfer and guide the emotion toward the expected emotion.
For example, the designated user a and the robot B initially chat in several sentences of conversation, and based on the several sentences of conversation, the robot B can obtain the current emotion of the designated user a, and when the robot B finds that the emotion of the designated user a is a negative emotion such as a feeling of oppression, it is desirable that the designated user a is happy. At this time, according to various results of the actual emotion transfer sequences, the actual emotion transfer sequences corresponding to the happy emotion are selected, thereby stimulating emotion transfer of the specified user a.
Fig. 6 is a block diagram of a personal character modeling and generating apparatus based on dialog according to an embodiment of the present invention. Referring to fig. 6, the apparatus includes:
a session data obtaining module 601, configured to obtain multiple pieces of session data of a specified user;
an actual emotion sequence determination module 602, configured to determine a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data;
and the designated user emotion adjusting module 603 is configured to continuously adjust the answer of the conversation according to the conversation input of the designated user based on the actual emotion transfer sequence of the designated user, so as to stimulate the emotion of the designated user to adjust to the expected emotion.
Optionally, the apparatus further comprises:
the session data acquisition module is further used for acquiring a first preset number of pieces of session data of the user;
the emotion recognition module is used for carrying out emotion recognition on the first preset number of pieces of session data by using a Support Vector Machine (SVM) to obtain a second preset number of emotions;
the session data marking module is used for respectively marking the session data of the second preset number of emotions;
the emotion transfer probability statistics module is used for carrying out probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor of the specified user;
the emotion transfer distribution sampling module is used for sampling emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence which accords with the emotion characteristics of the specified user; the emotion transition probability distribution is corresponding data in the emotion transition probability table.
Optionally, the emotion transfer distribution sampling module includes:
based on the initial quantity of the sampling session length, the session length is adjusted according to a preset step length, and the emotion transfer probability distribution is sampled according to a preset sampling algorithm, so that a reference emotion transfer sequence which accords with the emotion characteristics of the specified user is obtained.
It should be noted that the dialog-based personal character modeling and generating device provided by the embodiment of the present invention corresponds one-to-one with the above method; the implementation details of the above method are also applicable to the device and are therefore not repeated here.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention, and should be construed as falling within the scope of the claims of the present invention.
Claims (7)
1. A dialog-based personal character modeling and generating method, the method comprising:
acquiring a plurality of pieces of session data of a specified user;
determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data, wherein the reference emotion transfer sequence and the actual emotion transfer sequences are represented in the following form: { initial emotion, stimulated emotion, generated emotion, transition probability };
continuously adjusting the responses of the conversation according to the conversation input of the specified user, based on the actual emotion transfer sequences of the specified user, so as to stimulate the emotion of the specified user to shift toward the expected emotion;
wherein before the step of determining a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data, the method further comprises:
acquiring a first preset number of pieces of session data of a user;
carrying out emotion recognition on the first preset number of pieces of session data by using a Support Vector Machine (SVM) to obtain a second preset number of emotions;
respectively marking the session data of the second preset number of emotions;
performing probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor of the specified user, wherein the meaning of the emotion transfer distribution tensor is as follows: when the initial emotion of the specified user is a certain emotion and the emotion of the specified user is stimulated by a certain external emotion, the probability value that the emotion of the specified user transfers to the generated emotion;
sampling the emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence which conforms to the emotion characteristics of the specified user; the emotion transfer probability distribution is the corresponding data in the emotion transfer probability table.
2. The personal character modeling and generating method of claim 1, wherein the conversation data includes lines from television dramas, lines from movies, or dialogues from conversation websites in different languages; and the second preset number of emotions includes neutral, happy, surprised, sad and angry.
3. The personal character modeling and generating method of claim 1, wherein the emotion transfer distribution tensor is represented in the following form:
{ initial emotion, stimulated emotion, generated emotion, transition probability }.
4. The personal character modeling and generating method of claim 1, wherein the preset sampling algorithm is a Markov chain Monte Carlo (MCMC) sampling method.
5. The personal character modeling and generating method of claim 1, wherein sampling the emotion transition probability distribution according to a preset sampling algorithm to obtain a reference emotion transition sequence conforming to the emotion characteristics of the specified user comprises:
starting from an initial value of the sampling session length, the session length is adjusted according to a preset step size, and the emotion transfer probability distribution is sampled according to the preset sampling algorithm, so as to obtain a reference emotion transfer sequence that conforms to the emotion characteristics of the specified user.
6. A dialog-based personal character modeling and generating apparatus, the apparatus comprising:
the session data acquisition module is used for acquiring a plurality of pieces of session data of a specified user;
an actual emotion sequence determination module, configured to determine a plurality of actual emotion transfer sequences of the specified user based on the reference emotion transfer sequence of the specified user and the plurality of pieces of session data, wherein the reference emotion transfer sequence and the actual emotion transfer sequences are represented in the following form: { initial emotion, stimulated emotion, generated emotion, transition probability };
a specified user emotion adjusting module, configured to continuously adjust the responses of the conversation according to the conversation input of the specified user, based on the actual emotion transfer sequences of the specified user, so as to stimulate the emotion of the specified user to shift toward the expected emotion;
the device further comprises:
the session data acquisition module is further used for acquiring a first preset number of pieces of session data of the user;
the emotion recognition module is used for carrying out emotion recognition on the first preset number of pieces of session data by using a Support Vector Machine (SVM) to obtain a second preset number of emotions;
the session data marking module is used for respectively marking the session data of the second preset number of emotions;
an emotion transfer probability statistics module, configured to perform probability statistics on the first preset number of pieces of session data to obtain an emotion transfer probability table and an emotion transfer distribution tensor of the specified user, wherein the meaning of the emotion transfer distribution tensor is as follows: when the initial emotion of the specified user is a certain emotion and the emotion of the specified user is stimulated by a certain external emotion, the probability value that the emotion of the specified user transfers to the generated emotion;
an emotion transfer distribution sampling module, configured to sample the emotion transfer probability distribution according to a preset sampling algorithm to obtain a reference emotion transfer sequence which conforms to the emotion characteristics of the specified user; the emotion transfer probability distribution is the corresponding data in the emotion transfer probability table.
7. The personal character modeling and generating device of claim 6, wherein the emotion transfer distribution sampling module comprises:
starting from an initial value of the sampling session length, the session length is adjusted according to a preset step size, and the emotion transfer probability distribution is sampled according to the preset sampling algorithm, so as to obtain a reference emotion transfer sequence that conforms to the emotion characteristics of the specified user.
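The SVM-based emotion recognition step recited in claim 1 might be realized, at a sketch level, as follows. The tiny training corpus, the choice of TF-IDF features, and the use of scikit-learn are illustrative assumptions; the patent only specifies that a Support Vector Machine is used.

```python
# Sketch of the SVM emotion-recognition step: classify each piece of
# session data into one of the preset emotions. Corpus and labels are
# toy assumptions for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I am so glad today", "this is wonderful news",
    "I feel terrible and alone", "everything went wrong today",
    "ok, see you later", "the meeting is at noon",
]
labels = ["happy", "happy", "sad", "sad", "neutral", "neutral"]

# Train a linear SVM on TF-IDF features of the labeled session data.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

# Classify a new piece of session data into one of the preset emotions.
predicted = clf.predict(["such wonderful and glad news"])[0]
```

In the patented pipeline, these predicted labels would then feed the labeling and probability-statistics steps that build the emotion transfer probability table.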
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810130189.4A CN108460111B (en) | 2018-02-08 | 2018-02-08 | Personal character modeling and generating method and device based on conversation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810130189.4A CN108460111B (en) | 2018-02-08 | 2018-02-08 | Personal character modeling and generating method and device based on conversation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108460111A CN108460111A (en) | 2018-08-28 |
CN108460111B true CN108460111B (en) | 2020-10-16 |
Family
ID=63238981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810130189.4A Active CN108460111B (en) | 2018-02-08 | 2018-02-08 | Personal character modeling and generating method and device based on conversation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108460111B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615077A (en) * | 2018-10-17 | 2019-04-12 | 合肥工业大学 | Affective state modeling and feeling shifting method and device based on dialogue |
CN110706785B (en) * | 2019-08-29 | 2022-03-15 | 合肥工业大学 | Emotion adjusting method and system based on conversation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200705224A (en) * | 2005-07-29 | 2007-02-01 | Shu-Zhun Zhuang | Intelligent information processing device and method |
CN104504032A (en) * | 2014-12-12 | 2015-04-08 | 北京智谷睿拓技术服务有限公司 | Method and equipment for providing service upon user emotion tendencies |
CN106055662A (en) * | 2016-06-02 | 2016-10-26 | 竹间智能科技(上海)有限公司 | Emotion-based intelligent conversation method and system |
CN106649704A (en) * | 2016-12-20 | 2017-05-10 | 竹间智能科技(上海)有限公司 | Intelligent dialogue control method and intelligent dialogue control system |
CN106683672A (en) * | 2016-12-21 | 2017-05-17 | 竹间智能科技(上海)有限公司 | Intelligent dialogue method and system based on emotion and semantics |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6757362B1 (en) * | 2000-03-06 | 2004-06-29 | Avaya Technology Corp. | Personal virtual assistant |
- 2018-02-08: CN CN201810130189.4A patent CN108460111B (Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200705224A (en) * | 2005-07-29 | 2007-02-01 | Shu-Zhun Zhuang | Intelligent information processing device and method |
CN104504032A (en) * | 2014-12-12 | 2015-04-08 | 北京智谷睿拓技术服务有限公司 | Method and equipment for providing service upon user emotion tendencies |
CN106055662A (en) * | 2016-06-02 | 2016-10-26 | 竹间智能科技(上海)有限公司 | Emotion-based intelligent conversation method and system |
CN106649704A (en) * | 2016-12-20 | 2017-05-10 | 竹间智能科技(上海)有限公司 | Intelligent dialogue control method and intelligent dialogue control system |
CN106683672A (en) * | 2016-12-21 | 2017-05-17 | 竹间智能科技(上海)有限公司 | Intelligent dialogue method and system based on emotion and semantics |
Non-Patent Citations (1)
Title |
---|
Research on Affective Virtual Humans (情感虚拟人研究); Wang Qi (王琦); China Master's Theses Full-text Database, Information Science and Technology Edition; 2009-01-15; pp. 39-40, 49-51 *
Also Published As
Publication number | Publication date |
---|---|
CN108460111A (en) | 2018-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109241255B (en) | Intention identification method based on deep learning | |
CN108287820B (en) | Text representation generation method and device | |
CN110096698B (en) | Topic-considered machine reading understanding model generation method and system | |
CN109948152A (en) | A kind of Chinese text grammer error correcting model method based on LSTM | |
US20200257757A1 (en) | Machine Learning Techniques for Generating Document Summaries Targeted to Affective Tone | |
CN111160452A (en) | Multi-modal network rumor detection method based on pre-training language model | |
CN111428010A (en) | Man-machine intelligent question and answer method and device | |
CN107247751B (en) | LDA topic model-based content recommendation method | |
CN112329476B (en) | Text error correction method and device, equipment and storage medium | |
CN107273348B (en) | Topic and emotion combined detection method and device for text | |
CN111666409A (en) | Integrated emotion intelligent classification method for complex comment text based on comprehensive deep capsule network | |
CN112084769B (en) | Dependency syntax model optimization method, apparatus, device and readable storage medium | |
CN111428490A (en) | Reference resolution weak supervised learning method using language model | |
CN108460111B (en) | Personal character modeling and generating method and device based on conversation | |
CN108364066B (en) | Artificial neural network chip and its application method based on N-GRAM and WFST model | |
CN111949762B (en) | Method and system for context-based emotion dialogue and storage medium | |
CN114547293A (en) | Cross-platform false news detection method and system | |
CN111178082A (en) | Sentence vector generation method and device and electronic equipment | |
CN108197274B (en) | Abnormal personality detection method and device based on conversation | |
CN114416969A (en) | LSTM-CNN online comment sentiment classification method and system based on background enhancement | |
CN108197276B (en) | Public emotion transfer distribution modeling method and device based on session | |
CN112115229A (en) | Text intention recognition method, device and system and text classification system | |
CN112863518B (en) | Method and device for recognizing voice data subject | |
CN115796141A (en) | Text data enhancement method and device, electronic equipment and storage medium | |
CN112783324B (en) | Man-machine interaction method and device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||