
CN110134316B - Model training method, emotion recognition method, and related device and equipment - Google Patents

Model training method, emotion recognition method, and related device and equipment

Info

Publication number
CN110134316B
CN110134316B
Authority
CN
China
Prior art keywords
touch
user
emotional state
emotion
emotion recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910309245.5A
Other languages
Chinese (zh)
Other versions
CN110134316A (en)
Inventor
李向东
田艳
王剑平
张艳存
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910309245.5A priority Critical patent/CN110134316B/en
Publication of CN110134316A publication Critical patent/CN110134316A/en
Priority to PCT/CN2020/084216 priority patent/WO2020211701A1/en
Application granted granted Critical
Publication of CN110134316B publication Critical patent/CN110134316B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416 - Control or interface arrangements specially adapted for digitisers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the application disclose a model training method and an emotion recognition method. A machine learning technique is used to train a classification model on the touch manner adopted when a user operates a terminal device and the emotional state corresponding to that touch manner, so as to obtain an emotion recognition model. In practical application, the user's current emotional state can then be determined with the emotion recognition model according to the touch manner adopted when the user operates the terminal device. Because the classification model is trained with the touch manners and corresponding emotional states of the user of a particular terminal device, the resulting emotion recognition model is specifically suited to that device; accordingly, when the terminal device applies the emotion recognition model, it can accurately identify the user's emotional state from the touch manner the user adopts when operating it.

Description

Model training method, emotion recognition method, and related device and equipment
Technical Field
The application relates to the field of computer technology, and in particular to a model training method, an emotion recognition method, and a related apparatus and device.
Background
Terminal devices such as smartphones and tablet computers play an increasingly important role in daily life, and the user experience they deliver has become a key factor by which users judge them. How to provide personalized services and improve the user experience has therefore become a research and development focus for terminal device manufacturers.
At present, some terminal devices provide personalized services according to a recognized emotional state of the user; whether reasonable personalized services can be provided depends mainly on how accurately that emotional state is recognized. A common approach recognizes emotion from facial expressions, but its accuracy varies with factors such as ambient light and the relative position between the user's face and the terminal device. The approach therefore cannot guarantee that the user's facial expression is recognized accurately, and emotional state recognition based on facial expressions is correspondingly unreliable.
In addition, the prior art also determines the user's emotional state from physiological signals: additional measuring equipment captures signals such as heart rate, body temperature and blood pressure, which are then analyzed to infer the emotional state. This requires extra external equipment, and using such equipment is burdensome for users.
Disclosure of Invention
The embodiments of the application provide a model training method, an emotion recognition method, and a related apparatus and device, which can accurately recognize the emotional state of a user based on a trained emotion recognition model, so that the terminal device can provide more reasonable personalized services according to the recognized emotional state.
In view of this, a first aspect of the present application provides a model training method, which may be applied to a terminal device or a server. In the method, the touch manner adopted when a user operates the terminal device is obtained, the emotional state corresponding to the touch manner is labeled, and the touch manner together with its corresponding emotional state is used as a training sample. A machine learning algorithm is then used to train a preset classification model with the training samples, so as to obtain an emotion recognition model applicable to the terminal device; the emotion recognition model can determine the emotional state corresponding to a touch manner according to the touch manner adopted when the user operates the terminal device.
With this model training method, an emotion recognition model specifically suited to a given terminal device can be trained with a machine learning algorithm, based on the touch manners adopted by the user of that device and the emotional states corresponding to those touch manners. Applying the emotion recognition model trained for a terminal device on that same device therefore ensures that the model can accurately determine the user's emotional state from the touch manner the user adopts when operating it.
In a first implementation manner of the first aspect of the embodiments of the present application, when the emotional state corresponding to a touch manner is determined, a reference time interval may be determined according to the trigger time of the touch manner; operation data content generated by the user operating the terminal device within the reference time interval, such as text content or voice content input to the terminal device, is then obtained; and the emotional state corresponding to that operation data content, determined by analyzing it, is taken as the emotional state corresponding to the touch manner.
Determining the emotional state corresponding to a touch manner from the operation data content generated when the user operates the terminal device ensures that the determined emotional state is reasonable and accurate, and therefore that the resulting correspondence between touch manner and emotional state is reasonable and accurate.
In a second implementation manner of the first aspect of the present application, when the emotional state corresponding to a touch manner is determined, a preset emotional state mapping table, which records the correspondence between touch manners and emotional states, may be called, and the emotional state corresponding to the touch manner is looked up in that table.
Research into the correspondence between the touch manners users adopt when touching a terminal device and their emotional states has produced results that reflect this correspondence. An emotional state mapping table generated from these results can therefore be used to determine the emotional state corresponding to a touch manner, which effectively ensures that the determined emotional state is objective and reasonable.
In a third implementation manner of the first aspect of the embodiments of the present application, when training samples are obtained, touch data generated by the user operating the terminal device may be collected within a preset time period; the touch data are clustered to generate touch data sets, and the touch manner corresponding to each touch data set is determined; the touch data set containing the most touch data is taken as the target touch data set and its touch manner as the target touch manner; the emotional state corresponding to the target touch manner is then labeled, and the target touch manner together with that emotional state is used as a training sample.
In general, a user may adopt several different touch manners when operating a terminal device within a period of time, while the user's emotional state does not change much within that period. This implementation therefore selects, from the touch manners adopted within the period, the one that best represents the user's current emotional state, namely the target touch manner, and uses the target touch manner together with the current emotional state as a training sample, which effectively ensures that the correspondence between touch manner and emotional state is accurate and reasonable.
In a fourth implementation manner of the first aspect of the embodiments of the present application, the touch data mentioned in the third implementation manner include screen capacitance value change data and coordinate value change data. Because the touch screens of most current touch-screen devices are capacitive screens, using screen capacitance value change data and coordinate value change data as touch data allows the method provided in the embodiments of the present application to be widely applied in daily work and life.
In a fifth implementation manner of the first aspect of the embodiments of the present application, after the emotion recognition model is obtained through training, the touch manner adopted when the user subsequently operates the terminal device may be obtained as an optimization touch manner, the emotional state corresponding to it labeled, and the two used together as an optimization training sample; the emotion recognition model is then optimized with the optimization training samples.
As the device is used over time, the touch manner the user adopts may change. To ensure that the emotion recognition model can always accurately recognize the user's emotional state from the touch manner, the touch manners adopted when the user operates the terminal device and the corresponding emotional states can continue to be collected as optimization training samples after the model has been trained; when the model can no longer recognize the user's emotional state accurately, it can be further optimized with these samples, so that it always maintains good model performance.
In a sixth implementation manner of the first aspect of the embodiments of the present application, feedback information of the user on the emotion recognition model may be obtained, and when the feedback information indicates that the performance of the emotion recognition model does not meet the user's requirements, the model is optimized with the optimization training samples obtained in the fifth implementation manner.
Because the emotion recognition model in this application serves the user of the terminal device, the user's experience is one of the most important measures of its performance. When the user feeds back that the performance of the model does not meet their requirements, that is, when the user considers the emotional state recognized by the model insufficiently accurate, the model can be optimized with the optimization training samples obtained in the fifth implementation manner, so as to meet the user's requirements and improve the user experience.
In a seventh implementation manner of the first aspect of the embodiments of the present application, the terminal device may optimize the emotion recognition model with the optimization training samples obtained in the fifth implementation manner when any one or more of the following three conditions are met: the terminal device is charging, its remaining battery level is higher than a preset level, or it has been idle for longer than a preset duration.
Because optimization training generally consumes a large amount of power and may affect other functions of the terminal device, the device optimizes the model only when one or more of these conditions are met, so that the user's normal use of the device is not affected and the user experience is preserved.
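As an illustration only, the following minimal sketch combines the feedback condition from the sixth implementation manner with the device-state conditions above before retraining on the optimization training samples. The parameter names, threshold values and retraining call are assumptions for illustration, not interfaces defined by this application.

```python
# Hypothetical sketch of the optimization-training trigger described above;
# on a real device the state values would come from the OS power/idle services.
from typing import List, Tuple

PRESET_BATTERY_LEVEL = 0.5      # assumed "preset power" threshold
PRESET_IDLE_SECONDS = 30 * 60   # assumed "preset duration" of the idle state

def may_run_optimization(charging: bool, battery_level: float, idle_seconds: int) -> bool:
    """Any one or more of the three conditions permits optimization training."""
    return (charging
            or battery_level > PRESET_BATTERY_LEVEL
            or idle_seconds > PRESET_IDLE_SECONDS)

def maybe_optimize(model, samples: List[Tuple[list, str]],
                   feedback_ok: bool, charging: bool,
                   battery_level: float, idle_seconds: int):
    """Retrain only when feedback says performance is lacking (sixth
    implementation manner) and the device can afford it (seventh)."""
    if feedback_ok or not may_run_optimization(charging, battery_level, idle_seconds):
        return model
    features = [f for f, _ in samples]   # touch-manner feature vectors
    labels = [e for _, e in samples]     # corresponding emotional states
    model.fit(features, labels)          # further optimization training
    return model
```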
A second aspect of the present application provides an emotion recognition method. In the method, the terminal device obtains the touch manner adopted when the user operates it and determines the emotional state corresponding to that touch manner, as the user's current emotional state, with an emotion recognition model running on the terminal device; the emotion recognition model is trained by the model training method provided in the first aspect.
The emotion recognition method uses the emotion recognition model to determine the user's emotional state specifically from the touch manner adopted when the user operates the terminal device, which ensures the accuracy of the determined emotional state; moreover, determining the emotional state in this way requires no additional external equipment, so the goal of improving the user experience is genuinely achieved.
In a first implementation manner of the second aspect of the embodiments of the present application, when the terminal device is displaying its desktop interface, it may switch the display style of the desktop interface according to the user's current emotional state as recognized by the emotion recognition model. By changing the display style of its desktop interface, the terminal device directly changes the user's visual experience and adjusts to, or matches, the user's emotional state visually, thereby improving the user experience.
In a second implementation manner of the second aspect of the embodiments of the present application, when an application is running on the terminal device, the device may recommend related content through that application, for example related music, video or text content, according to the user's current emotional state as recognized by the emotion recognition model. Recommending related content to the user through the corresponding application in combination with the user's emotional state adjusts the user's emotional state in real time from multiple angles and improves the user experience.
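A minimal sketch of how a recognized emotional state might drive these two kinds of personalization follows; the emotion labels, theme names and recommendation categories are illustrative assumptions rather than values defined by this application.

```python
# Illustrative only: map a recognized emotional state to a desktop display
# style and to recommended content categories. All entries are assumptions.

THEME_BY_EMOTION = {
    "happy": "bright",
    "calm": "default",
    "angry": "soft_low_contrast",
    "sad": "warm",
}

RECOMMENDATION_BY_EMOTION = {
    "happy": ["upbeat music", "comedy videos"],
    "angry": ["relaxing music", "breathing exercises"],
    "sad": ["uplifting articles", "soothing playlists"],
    "calm": ["news digest"],
}

def personalize(emotion: str, on_desktop: bool, app_running: bool) -> dict:
    actions = {}
    if on_desktop:
        # First implementation manner: switch the desktop display style.
        actions["theme"] = THEME_BY_EMOTION.get(emotion, "default")
    if app_running:
        # Second implementation manner: recommend related content in the app.
        actions["recommendations"] = RECOMMENDATION_BY_EMOTION.get(emotion, [])
    return actions

print(personalize("sad", on_desktop=True, app_running=True))
```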
A third aspect of the present application provides a model training apparatus, the apparatus comprising:
the training sample acquisition module is used for acquiring a touch mode when a user operates the terminal equipment and marking an emotional state corresponding to the touch mode; taking the touch control mode and the emotion state corresponding to the touch control mode as training samples;
the model training module is used for training the classification model by using the training sample by adopting a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes a touch control mode when a user operates the terminal device as input and takes an emotion state corresponding to the touch control mode as output.
In a first implementation manner of the third aspect of the embodiment of the present application, the training sample obtaining module is specifically configured to:
determining a reference time interval according to the trigger time corresponding to the touch control mode;
acquiring operation data content generated by the user operating the terminal equipment in the reference time interval;
and determining the emotional state of the user according to the operation data content, and using the emotional state as the emotional state corresponding to the touch mode.
In a second implementation manner of the third aspect of the embodiment of the present application, the training sample obtaining module is specifically configured to:
calling a preset emotional state mapping relation table; the emotional state mapping relation table records the corresponding relation between the touch mode and the emotional state;
and searching the emotional state mapping relation table, and determining the emotional state corresponding to the touch mode.
In a third implementation manner of the third aspect of the embodiment of the present application, the training sample obtaining module is specifically configured to:
acquiring touch data generated by a user for controlling the terminal equipment within a preset time period;
clustering the touch data to generate a touch data set, and determining a touch mode corresponding to the touch data set;
taking a touch data set with the most touch data as a target touch data set, and taking a touch mode corresponding to the target touch data set as a target touch mode; marking the emotional state corresponding to the target touch manner;
and taking the target touch mode and the emotion state corresponding to the target touch mode as training samples.
In a fourth implementation manner of the third aspect of the embodiments of the present application, the touch data include: screen capacitance value change data and coordinate value change data.
In a fifth implementation manner of the third aspect of the embodiment of the present application, the apparatus further includes:
the optimized training sample acquisition module is used for acquiring a touch mode when a user operates the terminal equipment as an optimized touch mode; marking the emotional state corresponding to the optimized touch control mode; taking the optimized touch control mode and the emotion state corresponding to the optimized touch control mode as an optimized training sample; the optimized training sample is used for performing optimized training on the emotion recognition model.
In a sixth implementation manner of the third aspect of the embodiment of the present application, the apparatus further includes:
the feedback information acquisition module is used for acquiring feedback information of the user aiming at the emotion recognition model; the feedback information is used for representing whether the performance of the emotion recognition model meets the requirements of a user;
and the first optimization training module is used for performing optimization training on the emotion recognition model by using the optimization training sample when the feedback information represents that the performance of the emotion recognition model does not meet the requirements of a user.
In a seventh implementation manner of the third aspect of the embodiment of the present application, the apparatus further includes:
and the second optimization training module is used for optimizing the emotion recognition model with the optimization training samples when the terminal device is charging, and/or its remaining battery level is higher than a preset level, and/or it has been idle for longer than a preset duration.
A fourth aspect of the present application provides an emotion recognition apparatus, the apparatus including:
the touch control mode acquisition module is used for acquiring a touch control mode when a user operates the terminal equipment;
the emotion state identification module is used for determining an emotion state corresponding to the touch mode by using an emotion identification model as the current emotion state of the user; the emotion recognition model is obtained by performing the model training method according to the first aspect.
In a first implementation manner of the fourth aspect of the embodiment of the present application, the apparatus further includes:
and the display style switching module is used for switching the display style of the desktop interface according to the current emotional state of the user under the condition that the terminal equipment displays the desktop interface.
In a second implementation manner of the fourth aspect of the embodiment of the present application, the apparatus further includes:
and the content recommending module is used for recommending related content through the application program according to the current emotional state of the user under the condition that the terminal equipment starts the application program.
A fifth aspect of the present application provides a server comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the model training method according to the first aspect according to instructions in the program code.
A sixth aspect of the present application provides a terminal device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the model training method according to the first aspect and/or to perform the emotion recognition method according to the second aspect, according to instructions in the program code.
A seventh aspect of the present application provides a computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the model training method of the first aspect and/or perform the emotion recognition method of the second aspect.
Drawings
Fig. 1 is a schematic view of an application scenario of a model training method and an emotion recognition method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a model training method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of an emotion recognition method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a model training apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an emotion recognition apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to further improve the user experience that the terminal device brings to the user and to provide more considerate, personalized services, some terminal device manufacturers have developed functions for recognizing the user's emotional state. Three methods are currently in common use for emotional state recognition on terminal devices:
In the facial expression recognition method, the camera on the terminal device captures the user's facial expression, which is then analyzed to determine the user's emotional state. Because lighting differs across scenes and the relative position between the user's face and the terminal device is unstable, this method cannot guarantee that the facial expression is recognized accurately under all conditions; accordingly, when facial expression recognition accuracy is low, the accuracy of the recognized emotional state cannot be guaranteed either.
In the voice recognition method, the terminal device collects voice content input by the user and determines the user's emotional state by analyzing that content. The method requires the user to actively input voice content that expresses an emotional state before the terminal device can determine it; in most cases users do not actively tell the terminal device how they feel, so the method has little value in practical application.
In the physiological signal recognition method, the terminal device collects physiological signals such as heart rate, body temperature and blood pressure through an additional measuring device or sensor, then analyzes the collected signals to determine the user's emotional state. This requires extra external equipment, which is usually burdensome for the user; it degrades the user experience from another direction and does not genuinely achieve the goal of improving it.
To enable the terminal device to recognize the user's emotional state accurately and to genuinely improve the user experience it provides, this application takes a new approach. Given that screens occupy an ever larger proportion of current terminal devices and that users interact with them frequently through the touch screen (TP), and exploiting the fact that the touch manner a user adopts under the same emotional state follows similar patterns, a model is trained that determines the user's current emotional state from the touch manner adopted when the user operates the terminal device; the terminal device can then recognize the user's emotional state with this model and provide reasonable personalized services.
Specifically, in the model training method provided by the embodiments of the application, the terminal device obtains the touch manner adopted when the user operates it, labels the emotional state corresponding to that touch manner, and uses the touch manner together with the emotional state as a training sample; a machine learning algorithm then trains a classification model with the training samples to obtain an emotion recognition model. Correspondingly, in the emotion recognition method provided by the embodiments of the application, the terminal device obtains the touch manner adopted when the user operates it and determines, with the emotion recognition model trained by the model training method, the emotional state corresponding to that touch manner as the user's current emotional state.
It should be noted that the model training method provided in the embodiments of the present application can be used, for each terminal device, to train an emotion recognition model specifically suited to that device with a machine learning algorithm, based on the touch manners adopted by the device's user and the emotional states corresponding to those touch manners; applying the model trained for a device on that same device therefore ensures that it can accurately determine the user's emotional state from the touch manner the user adopts when operating it.
Compared with the emotion recognition methods commonly used in the prior art, the method provided by the embodiments of the application determines the user's emotional state with the emotion recognition model specifically from the touch manner adopted when the user operates the terminal device, which ensures the accuracy of the determined emotional state; moreover, no additional external equipment is needed in the process, so the goal of improving the user experience is genuinely achieved.
It should be understood that the model training method and the emotion recognition method provided by the embodiments of the present application may be applied to a terminal device (also referred to as an electronic device) configured with a touch screen, or to a server; the terminal device may be a smartphone, a tablet computer, a Personal Digital Assistant (PDA) or the like, and the server may be an application server or a Web server.
In order to facilitate understanding of the technical solution provided by the embodiment of the present application, an application scenario of the model training method and the emotion recognition method provided by the embodiment of the present application is described below with a terminal device as an example.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a model training method and an emotion recognition method provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes a terminal device 101, where the terminal device 101 is configured to execute the model training method provided in the embodiment of the present application to train the emotion recognition model, and is also configured to execute the emotion recognition method provided in the embodiment of the present application to recognize the emotion state of the user.
In the model training phase, the terminal device 101 obtains a touch mode when the user operates the terminal device, where the touch mode may specifically include: click operation, sliding operation and the like with different force and/or different frequency; marking the emotion state corresponding to the acquired touch control mode; and then, the acquired touch control mode and the corresponding emotional state are used as training samples. After the terminal device 101 obtains the training samples, training a classification model pre-constructed in the terminal device 101 by using the obtained training samples by using a machine learning algorithm, thereby obtaining an emotion recognition model.
In the model application stage, the terminal device 101 executes the emotion recognition method provided by the embodiment of the application, and recognizes the emotion state of the user by using the emotion recognition model obtained by training in the model training stage; specifically, the terminal device 101 acquires a touch manner when the user operates the terminal device, and determines an emotional state corresponding to the acquired touch manner by using the emotion recognition model as the current emotional state of the user.
It should be noted that, in the model training phase, the terminal device 101 trains the emotion recognition model based on the touch manners adopted by its own user and the emotional states corresponding to those touch manners, so the resulting model is specifically suited to the terminal device 101; accordingly, in the model application phase, when the terminal device 101 determines the user's current emotional state from the touch manner adopted when the user operates it, the accuracy of the determined emotional state can be effectively guaranteed.
It should be understood that the application scenario shown in fig. 1 is only an example, and in practical applications, the model training method and the emotion recognition method provided in the embodiment of the present application may also be applied to other application scenarios, and the application scenarios of the model training method and the emotion recognition method provided in the embodiment of the present application are not specifically limited herein.
The model training method provided by the present application is first described below by way of example.
Referring to fig. 2, fig. 2 is a schematic flowchart of a model training method according to an embodiment of the present disclosure. As shown in fig. 2, the model training method includes the following steps:
step 201: acquiring a touch mode when a user operates a terminal device, and marking an emotional state corresponding to the touch mode; and taking the touch control mode and the emotion state corresponding to the touch control mode as training samples.
When the user operates the terminal device, the touch manner adopted when the user touches the touch screen is obtained. A touch manner can also be understood as a touch operation, which may be a single touch operation initiated on the touch screen, such as a click or slide with a particular force, or a continuous touch operation, such as repeated clicks or slides at a particular frequency.
The terminal device then labels the emotional state corresponding to the obtained touch manner, namely the emotional state of the user when the touch manner was initiated, and uses the touch manner together with that emotional state as a training sample.
It should be understood that in order to ensure that the emotion recognition model trained based on the training samples has good model performance, a large number of training samples are generally required to be obtained; of course, in order to reduce the data processing amount of the terminal device, the number of the acquired training samples may also be reduced according to actual requirements, and the number of the acquired training samples is not specifically limited herein.
It should be noted that the touch manner generally needs to be determined from the touch data generated when the user touches the touch screen. For a capacitive screen, the touch data generally include screen capacitance value change data and screen coordinate value change data. The capacitance value change data reflect the force of a click or slide and the contact area between the user and the touch screen: the greater the force, the larger the change in capacitance value, and the larger the contact area, the more capacitance values change. The coordinate value change data are in fact derived from the capacitance value change data and reflect the position of a click as well as the direction and distance of a slide. When the user touches the touch screen of the terminal device, the underlying driver reports the capacitance value change data and the corresponding position coordinates to the processor through the input subsystem, and the sliding direction and distance can be determined by recording the continuously changing position coordinates.
It should be understood that for other types of touch screens, touching the screen generates other touch data; for example, touching a resistive screen generates screen resistance value change data and screen coordinate value change data, which likewise reflect the user's current touch manner. No limitation is placed on the specific type of touch data.
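Purely as an illustration, the sketch below shows one way the reported capacitance and coordinate changes could be turned into simple gesture features (a force proxy, a contact-area proxy, swipe distance and duration); the event fields and thresholds are assumptions, not the data format of any particular driver.

```python
# Illustrative feature extraction from capacitive-screen touch reports.
# Each report is assumed to carry a capacitance delta, the number of changed
# sensor cells (a contact-area proxy) and the touch coordinates.
from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass
class TouchReport:
    timestamp_ms: int
    cap_delta: float       # screen capacitance value change (force proxy)
    changed_cells: int     # how many capacitance values changed (contact area)
    x: float
    y: float

def extract_features(reports: List[TouchReport]) -> dict:
    """Summarize one touch gesture (a sequence of reports) into features."""
    first, last = reports[0], reports[-1]
    distance = hypot(last.x - first.x, last.y - first.y)
    return {
        "peak_force": max(r.cap_delta for r in reports),
        "mean_contact_area": sum(r.changed_cells for r in reports) / len(reports),
        "swipe_distance": distance,
        "duration_ms": last.timestamp_ms - first.timestamp_ms,
        "is_slide": distance > 10.0,   # assumed threshold separating click/slide
    }
```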
Specifically, when the touch manner is determined and training samples are constructed from touch data, the terminal device may collect the touch data generated by the user operating it within a preset time period; cluster the collected touch data to generate touch data sets and determine the touch manner corresponding to each set; take the set containing the most touch data as the target touch data set and its touch manner as the target touch manner; label the emotional state corresponding to the target touch manner; and finally use the target touch manner together with that emotional state as a training sample.
A user usually operates the terminal device many times within the preset time period, so the terminal device collects many items of touch data. Touch data with similar characteristics are clustered together, for example capacitance value change data with similar change amplitudes, capacitance value change data with similar click positions, or coordinate value change data representing similar sliding tracks, yielding several touch data sets. The touch manner corresponding to each set is then labeled according to the type of touch data it contains: a set of touch data whose change amplitudes exceed a preset amplitude threshold may be labeled as heavy clicks, a set whose change frequencies exceed a preset frequency threshold as frequent clicks, a set of coordinate value change data whose change frequencies exceed a preset frequency threshold as frequent slides, and so on.
The touch data set containing the most touch data is then determined as the target touch data set and its touch manner taken as the target touch manner. The emotional state corresponding to the target touch manner is determined from operation data content, collected by the terminal device within the preset time period, that reflects the user's emotional state, and/or from the correspondence between touch manners and emotional states recorded in the emotional state mapping table. Finally, the target touch manner and its corresponding emotional state are used as a training sample.
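As an illustration of this clustering step only, the sketch below groups gesture feature vectors with k-means, selects the largest cluster as the target touch data set and assigns it a coarse touch-manner name; the feature layout, cluster count and naming thresholds are assumptions, and the application does not prescribe a particular clustering algorithm.

```python
# Illustrative clustering of touch features collected in a preset time period.
# Requires scikit-learn; feature rows follow the extract_features() sketch above:
# [peak_force, mean_contact_area, swipe_distance, duration_ms].
import numpy as np
from sklearn.cluster import KMeans

def target_touch_manner(features: np.ndarray, n_clusters: int = 3) -> tuple:
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(features)

    # The cluster containing the most touch data is the target touch data set.
    counts = np.bincount(labels, minlength=n_clusters)
    target = int(np.argmax(counts))
    center = km.cluster_centers_[target]

    # Coarse, assumed naming rules echoing the description: large coordinate
    # change -> slide; large capacitance change -> heavy click; otherwise light.
    peak_force, _, swipe_distance, _ = center
    if swipe_distance > 50.0:
        name = "frequent slide"
    elif peak_force > 0.8:
        name = "heavy click"
    else:
        name = "light click"
    return name, features[labels == target]
```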
It should be understood that, in the process of obtaining training samples, several target touch manners and corresponding emotional states are generally collected in this way; accordingly, when the emotion recognition model is trained, the touch manners it can recognize are determined by the categories of the collected target touch manners, and the emotional states it can output are determined by the emotional states corresponding to those target touch manners.
For labeling the emotional state corresponding to a touch manner, the application provides the following two methods:
in the first method, the terminal equipment determines a reference time interval according to trigger time corresponding to a touch mode; acquiring operation data content generated by a user operating the terminal equipment in the reference time interval; and further, determining the emotional state of the user according to the operation data content as the emotional state corresponding to the touch mode.
Specifically, the terminal device may determine a trigger time corresponding to the touch manner, and determine a reference time interval according to a preset reference time interval length by using the trigger time as a central point; in addition, the terminal device may also use the trigger time corresponding to the touch manner as a starting point or an ending point, and determine the reference time interval according to the preset reference time interval length, and of course, the terminal device may also use other manners to determine the reference time interval according to the trigger time corresponding to the touch manner, where no limitation is made on the manner of determining the reference time interval.
It should be understood that the length of the reference time interval may be set according to actual requirements, and the length of the reference time interval is not specifically limited herein.
After the reference time interval has been determined, the terminal device obtains the operation data content generated by the user operating it within that interval, that is, the data content produced by the user's operation of the device. This may be text content input to the terminal device within the interval, voice content input within the interval, or other operation data content generated by the user through an application on the device; no limitation is placed on the type of operation data content.
After obtaining the operation data content, the terminal device determines the corresponding emotional state by analyzing it. For example, when the content is text input by the user, the emotional state can be determined by semantic analysis of the text; when it is voice input, by speech recognition and analysis of the voice content; and when it takes other forms, the corresponding emotional state can be determined in other appropriate ways, with no limitation placed on the manner of determination. Finally, the emotional state corresponding to the operation data content is taken as the emotional state corresponding to the touch manner.
It should be understood that when the target touch manner is determined by clustering the touch data in the preset time period, the preset time period may be directly used as a reference time interval, and then the emotional state corresponding to the operation data content is determined as the emotional state corresponding to the target touch manner according to the operation data content generated by the user operating the terminal device in the preset time period.
It should be noted that, before collecting operation data content, the terminal device must obtain the user's permission; only when the user allows it may the terminal device collect the operation data content generated by the user's operation and label the corresponding emotional state for the touch manner on that basis. Moreover, after collecting the operation data content, the terminal device must store it in encrypted form, so as to protect the privacy and security of the user's data.
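A minimal sketch of this first labeling method follows, assuming a reference interval centered on the trigger time and a placeholder sentiment analyzer for text content collected in that interval; analyze_text_sentiment() and the interval length are stand-ins for illustration, not interfaces defined by this application.

```python
# Illustrative only: label the emotional state for a touch manner from the
# operation data content generated inside a reference time interval.
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

REFERENCE_HALF_WIDTH = timedelta(minutes=5)   # assumed interval length

def analyze_text_sentiment(text: str) -> str:
    """Placeholder: a real system would run semantic/sentiment analysis here."""
    lowered = text.lower()
    if any(w in lowered for w in ("great", "thanks", "haha")):
        return "happy"
    if any(w in lowered for w in ("annoying", "hate", "angry")):
        return "angry"
    return "calm"

def label_emotion(trigger_time: datetime,
                  messages: List[Tuple[datetime, str]]) -> Optional[str]:
    # Reference interval centered on the trigger time of the touch manner.
    start = trigger_time - REFERENCE_HALF_WIDTH
    end = trigger_time + REFERENCE_HALF_WIDTH
    in_interval = [text for ts, text in messages if start <= ts <= end]
    if not in_interval:
        return None   # nothing to label from; fall back to the mapping table
    return analyze_text_sentiment(" ".join(in_interval))
```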
In the second method, the terminal device calls a preset emotional state mapping table, which records the correspondence between touch manners and emotional states, and looks up the emotional state corresponding to the touch manner in that table.
Research has found that a certain mapping relationship exists between the touch manner a user adopts when touching a terminal device and the user's emotional state, and has produced results that reflect this relationship; an emotional state mapping table recording the correspondence between various touch manners and emotional states can be generated by organizing these existing results.
After obtaining the touch manner adopted when the user operates the terminal device, the terminal device can call its preset emotional state mapping table and look up the emotional state corresponding to that touch manner in the table.
It should be understood that, when the target touch manner is determined by clustering the touch data in the preset time period, the emotional state corresponding to the target touch manner may be searched in the emotional state mapping relation table.
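A minimal sketch of this second labeling method follows; the table entries are invented examples of the kind of correspondence such a mapping table might record, not findings cited by this application.

```python
# Illustrative emotional state mapping table: touch manner -> emotional state.
EMOTION_MAPPING_TABLE = {
    "heavy click": "angry",
    "frequent click": "anxious",
    "frequent slide": "impatient",
    "light click": "calm",
}

def label_by_mapping_table(touch_manner: str, default: str = "unknown") -> str:
    """Look up the emotional state corresponding to a (target) touch manner."""
    return EMOTION_MAPPING_TABLE.get(touch_manner, default)
```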
It should be noted that, after the emotional state corresponding to a touch manner has been labeled with the first method, that is, from the operation data content generated by the user operating the terminal device, the resulting correspondence between touch manner and emotional state can be used to optimize and update the emotional state mapping table, continuously enriching the mappings it records.
It should be noted that, in practical application, either method may be used alone to label the emotional state corresponding to a touch manner, or the two may be combined: when the first method cannot determine the emotional state accurately, the second method may be used, and vice versa; alternatively, the emotional state corresponding to the touch manner may be determined from the emotional states obtained by the two methods together.
It should be understood that, besides these two methods, other methods may be chosen according to actual requirements to determine the emotional state corresponding to a touch manner; no limitation is placed on the labeling method.
It should be noted that, for a given user, the emotional states the user commonly exhibits are largely fixed, and so are the touch manners adopted when touching the terminal device in each such emotional state. When training samples are collected as described above, most of the touch manners they contain are ones the user commonly adopts, and most of the corresponding emotional states are ones the user commonly exhibits; accordingly, an emotion recognition model trained on these samples can determine the user's commonly exhibited emotional states more sensitively from the touch manners the user commonly adopts.
Step 202: train the classification model with the training samples using a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes the touch manner adopted when the user operates the terminal device as input and the emotional state corresponding to that touch manner as output.
After the training samples for training the emotion recognition model are acquired, the terminal device can use a machine learning algorithm to train the classification model preset in the terminal device by using the acquired training samples so as to continuously optimize the model parameters of the classification model, and after the classification model meets the training end conditions, the emotion recognition model is generated according to the model structure and the model parameters of the classification model.
When the emotion recognition model is specifically trained, the terminal device can input the touch mode in the training sample into the classification model, the classification model analyzes and processes the touch mode, outputs the emotion state corresponding to the touch mode, constructs a loss function according to the emotion state output by the classification model and the emotion state corresponding to the touch mode in the training sample, and further adjusts the model parameters in the classification model according to the loss function, so that the classification model is optimized, and when the classification model meets the training end condition, the emotion recognition model can be generated according to the model structure and the model parameters of the current classification model.
Specifically, when judging whether the classification model meets the training end condition, a test sample may be used to verify a first model, where the test sample, like a training sample, includes a touch manner and the emotional state corresponding to the touch manner, and the first model is obtained by performing a first round of training optimization on the classification model with a number of training samples. The terminal device inputs the touch manner in the test sample into the first model, and the first model processes the touch manner to obtain a corresponding emotional state. A prediction accuracy is then calculated from the emotional state labelled in the test sample and the emotional state output by the first model; when the prediction accuracy is greater than a preset threshold, the model performance of the first model is considered to meet the requirement, and the emotion recognition model is generated according to the model parameters and the model structure of the first model.
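A minimal sketch of this verification step is shown below; the emotional states predicted for the test samples are compared with their labels, and the 0.85 threshold is an assumed value rather than a prescribed one.

```python
# Sketch of the verification step: prediction accuracy on held-out test samples.
import numpy as np

def prediction_accuracy(predicted_states, labelled_states):
    """Fraction of test samples whose predicted emotional state matches the label."""
    return float(np.mean(np.asarray(predicted_states) == np.asarray(labelled_states)))

def meets_training_end_condition(predicted_states, labelled_states, threshold=0.85):
    # "greater than a preset threshold" -> the model performance is considered sufficient
    return prediction_accuracy(predicted_states, labelled_states) > threshold
```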
It should be understood that the preset threshold may be set according to actual situations, and the preset threshold is not specifically limited herein.
In addition, when judging whether the classification model meets the training end condition, whether to continue training the classification model may be determined according to the models obtained through multiple rounds of training, so as to obtain the emotion recognition model with the best model performance. Specifically, the classification models obtained through the rounds of training may each be verified with test samples. If the prediction accuracies of the models obtained in successive rounds differ only slightly, the performance of the classification model is considered to have little room for improvement; the classification model with the highest prediction accuracy can then be selected, and the emotion recognition model determined according to its model parameters and model structure. If the prediction accuracies differ considerably, the performance of the classification model is considered to still have room for improvement, and training can continue until an emotion recognition model with stable, optimal performance is obtained.
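The round-by-round decision can be sketched as follows; the window of three rounds and the 0.01 tolerance are assumptions used only to make the idea concrete.

```python
# Sketch of the multi-round decision: stop once recent rounds differ only slightly
# and keep the round with the highest prediction accuracy; otherwise keep training.
def select_or_continue(round_accuracies, tolerance=0.01, window=3):
    """round_accuracies: prediction accuracy of the model after each training round."""
    recent = round_accuracies[-window:]
    keep_training = (max(recent) - min(recent)) > tolerance
    best_round = max(range(len(round_accuracies)), key=round_accuracies.__getitem__)
    return best_round, keep_training
```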
In addition, the terminal device may determine whether the classification model meets the training end condition according to feedback information from the user. Specifically, the terminal device may prompt the user to test the classification model being trained and to feed back information on it. If the user's feedback indicates that the current performance of the classification model still cannot meet the user's requirements, the terminal device continues the optimization training with the training samples; conversely, if the feedback indicates that the current performance of the classification model is good and basically meets the user's requirements, the terminal device may generate the emotion recognition model according to the model structure and model parameters of the classification model.
It should be noted that the touch manner with which the user touches the terminal device may change as usage time increases. Therefore, after the emotion recognition model is obtained through training, the terminal device may continue to collect optimization training samples and use them to further train the emotion recognition model, so as to improve its model performance and enable it to determine the emotional state of the user more accurately according to the user's touch manner.
Specifically, after the emotion recognition model is obtained, the terminal device may continue to acquire the touch manner adopted when the user operates the terminal device, as an optimized touch manner, and mark the emotional state corresponding to the optimized touch manner; the specific method of marking the emotional state may refer to the related description in step 201. The optimized touch manner and its corresponding emotional state are then used as an optimization training sample for further training the emotion recognition model.
In one possible implementation, the terminal device may initiate optimization training of the emotion recognition model in response to feedback information from the user. That is, the terminal device may obtain the user's feedback information on the emotion recognition model, where the feedback information represents whether the performance of the emotion recognition model meets the user's requirements; when the obtained feedback information indicates that the performance of the emotion recognition model does not meet the user's requirements, the emotion recognition model is optimized with the optimization training samples.
Specifically, the terminal device may periodically initiate a feedback information obtaining operation, for example, the terminal device may periodically display an emotion recognition model feedback information obtaining interface, so as to obtain feedback information of the user for the emotion recognition model through the interface; of course, the terminal device may also obtain the feedback information in other manners, and the obtaining manner of the feedback information is not limited herein.
After the terminal device acquires the feedback information, if the feedback information indicates that the current performance of the emotion recognition model does not meet the user's requirements, optimization training samples are acquired accordingly and the emotion recognition model is further trained; conversely, if the feedback information indicates that the emotion recognition model meets the user's requirements, no further optimization training is performed for the time being.
In another possible implementation manner, the terminal device may directly perform optimization training on the emotion recognition model with the optimization training samples when the terminal device is in a charging state, and/or when the remaining power of the terminal device is higher than a preset power, and/or when the duration for which the terminal device has been in an idle state exceeds a preset duration.
Optimization training of the emotion recognition model consumes the power of the terminal device, and the training process may affect other functions of the device, for example slowing down applications running on it. To ensure that the emotion recognition model is optimized in time without affecting the user's use of the device, the terminal device may perform the optimization training while in a charging state; or when its remaining power is higher than the preset power; or when the duration for which it has been in an idle state exceeds the preset duration, where the idle state refers to the state of the terminal device when the user is not using it; or when any two or all three of these conditions (charging, remaining power above the preset power, idle duration beyond the preset duration) are met.
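A possible form of this trigger check is sketched below; the device-state values are assumed to come from hypothetical system queries, and the 30% power threshold and 10-minute idle duration are illustrative presets.

```python
# Sketch of the trigger check for optimization training.
PRESET_BATTERY_LEVEL = 0.30    # assumed preset remaining-power threshold
PRESET_IDLE_SECONDS = 600      # assumed preset idle duration

def should_run_optimization(is_charging: bool, battery_level: float, idle_seconds: float) -> bool:
    """Any one of the three conditions is enough to start optimization training."""
    return (is_charging
            or battery_level > PRESET_BATTERY_LEVEL
            or idle_seconds > PRESET_IDLE_SECONDS)
```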
It should be understood that the preset electric quantity can be set according to actual requirements, and the numerical value of the preset electric quantity is not specifically limited herein; the preset time length can also be set according to actual requirements, and the numerical value of the preset time length is not specifically limited.
It should be understood that, in practical applications, besides the above two implementation manners, the timing for optimization training of the emotion recognition model may also be determined according to other conditions. For example, the emotion recognition model may be optimized when the number of optimization training samples reaches a preset number, or an optimization training period may be set and the model optimized according to that period. The manner of determining the timing of the optimization training is not limited herein.
With the model training method provided in the embodiment of the application, for each terminal device, an emotion recognition model specifically suited to that terminal device can be trained with a machine learning algorithm, based on the touch manners adopted when the user of the terminal device operates it and the emotional states corresponding to those touch manners. Applying the emotion recognition model trained on a terminal device to that same device therefore ensures that the model can accurately determine the emotional state of the user according to the touch manner adopted when the user operates the device.
Based on the model training method provided in the above embodiment, an emotion recognition model with good model performance can be obtained through training. Building on this model, the application further provides an emotion recognition method, so that the role of the emotion recognition model in practical applications can be understood more clearly. The emotion recognition method provided by the present application is described below by way of example.
Referring to fig. 3, fig. 3 is a schematic flowchart of an emotion recognition method provided in an embodiment of the present application. As shown in fig. 3, the emotion recognition method includes the steps of:
Step 301: acquiring a touch manner adopted when a user operates the terminal device.
When a user operates a terminal device, the terminal device may correspondingly obtain a touch manner of the user, where the touch manner may also be understood as a touch operation, and the touch operation may specifically be a single-step touch operation initiated by the user for a touch screen, such as a click operation under different forces, a slide operation under different forces, and the like, or a continuous touch operation initiated by the user for the touch screen, such as a continuous click operation at different frequencies, a continuous slide operation at different frequencies, and the like.
It should be noted that, in general, the touch manner is determined based on touch data acquired by the terminal device; that is, when the user operates the terminal device, the terminal device collects the touch data generated by the user on the touch screen, and then determines the touch manner based on the collected touch data.
For a capacitive screen, the touch data generally includes screen capacitance value change data and screen coordinate value change data. The screen capacitance value change data can represent the force with which the user clicks or slides on the touch screen, and the contact area between the user's finger and the touch screen during the click or slide; the screen coordinate value change data is in fact determined from the screen capacitance value change data, and can represent the click position when the user clicks the touch screen, and the sliding direction and sliding distance when the user slides on it.
Correspondingly, after the terminal device acquires the screen capacitance value change data and the screen coordinate value change data, it can determine the touch manner with which the user is currently touching the terminal device. For example, from the change amplitude of the screen capacitance value change data it can be determined whether the current touch manner is a heavy click or a light click; from the change frequency of the screen capacitance value change data it can be determined whether the touch manner is frequent clicking; from the sliding track represented by the screen coordinate value change data it can be determined whether the touch manner is a large-range slide or a small-range slide; and from the change frequency of the screen coordinate value change data it can be determined whether the touch manner is frequent sliding. Of course, the terminal device may also determine other touch manners according to the touch data; the above are only examples, and the touch manner is not specifically limited herein.
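The mapping from touch data to a coarse touch manner can be sketched as follows; the thresholds, units and category names are all assumptions chosen for illustration.

```python
# Sketch of deriving a touch manner from capacitance-change and coordinate-change data.
def classify_touch(cap_delta: float, events_per_second: float, slide_length: float) -> str:
    """cap_delta: amplitude of the screen capacitance change (a proxy for force);
    events_per_second: how often the capacitance/coordinate values change;
    slide_length: length of the slide track from coordinate changes (0 for a tap)."""
    if slide_length > 0.0:                        # coordinate values changed: a slide
        if events_per_second > 2.0:
            return "frequent slide"
        return "large-range slide" if slide_length > 200.0 else "small-range slide"
    if events_per_second > 3.0:                   # repeated taps in quick succession
        return "frequent click"
    return "heavy click" if cap_delta > 50.0 else "light click"
```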
It should be understood that for other types of touch screens, the user touching the touch screen will generate other touch data accordingly, for example, for a resistive screen, the user touching the resistive screen will generate screen resistance value change data and screen coordinate value change data accordingly, and the current touch manner of the user can be determined accordingly according to these data, and the specific type of touch data is not limited at all.
Step 302: determining an emotional state corresponding to the touch mode by using an emotional recognition model as the current emotional state of the user; the emotion recognition model is obtained by performing the model training method shown in fig. 2.
After acquiring the touch manner, the terminal device inputs it into the emotion recognition model running on the device; the emotion recognition model analyzes the acquired touch manner and outputs the emotional state corresponding to it, which is taken as the current emotional state of the user.
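The recognition step itself reduces to a single forward pass through the trained model, as in the sketch below; W and b are the parameters produced by the training sketch earlier, and the feature encoding of the touch manner is an assumption.

```python
# Sketch of the recognition step: score each candidate emotional state and take the best.
import numpy as np

EMOTIONS = ["calm", "fidgety", "low"]    # assumed label set, matching the training sketch

def recognize_emotion(W, b, touch_features):
    """touch_features: numeric encoding of the acquired touch manner."""
    x = np.asarray(touch_features, dtype=float).reshape(1, -1)
    scores = x @ W + b                    # one score per candidate emotional state
    return EMOTIONS[int(np.argmax(scores))]
```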
It should be noted that the emotion recognition model is a model obtained by training with the model training method shown in fig. 2, and in the training process of the model, based on the touch data when the user operates the terminal device and the emotion state corresponding to the touch data, the emotion recognition model which is specifically applicable to the terminal device is obtained by training, and the emotion recognition model can accurately determine the emotion state of the user according to the touch mode when the user operates the terminal device.
It should be understood that the emotional states the emotion recognition model can recognize depend on the training samples used to train it. The touch manners included in the training samples are those adopted when the user operates the terminal device, and the emotional states included in the training samples are those shown while the user uses the terminal device; that is, the training samples are generated entirely from the touch manners of the terminal device's user and the emotional states expressed through them. Correspondingly, the emotion recognition model trained with these samples can accurately determine the current emotional state of the user according to the touch manner adopted when the user operates the terminal device; in other words, it can sensitively recognize the corresponding emotional state from touch manners the user habitually uses.
After the current emotional state of the user is identified by the emotion identification model, the terminal device can further provide personalized service for the user according to the identified current emotional state of the user correspondingly, so that the user experience brought to the user by the terminal device is improved.
In a possible implementation manner, when the terminal device is displaying a desktop interface, it may switch the display style of the desktop interface according to the current emotional state of the user, for example switching the display theme, the displayed wallpaper, the displayed font, and so on.
For example, when the terminal device finds that the user's touch manner is frequent sliding on the touch screen, this touch manner is input into the emotion recognition model, which may determine that the corresponding emotional state is fidgety. If the interface currently displayed by the terminal device is the desktop interface, the terminal device may then switch the desktop wallpaper to a brighter, more pleasant picture, or change the display theme and/or display font, so as to give the user a more pleasant viewing experience.
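One way to express such a switch is a simple mapping from the recognized emotional state to desktop resources, as in the sketch below; the resource names, the mapping itself and the setter callbacks are all hypothetical.

```python
# Sketch of an emotion-driven desktop style switch.
STYLE_FOR_EMOTION = {
    "fidgety": {"wallpaper": "bright_meadow.jpg", "theme": "light",   "font": "rounded"},
    "low":     {"wallpaper": "sunrise.jpg",       "theme": "warm",    "font": "default"},
    "calm":    {"wallpaper": "default.jpg",       "theme": "default", "font": "default"},
}

def apply_desktop_style(emotion, set_wallpaper, set_theme, set_font):
    """set_wallpaper/set_theme/set_font are hypothetical callbacks into the launcher."""
    style = STYLE_FOR_EMOTION.get(emotion, STYLE_FOR_EMOTION["calm"])
    set_wallpaper(style["wallpaper"])
    set_theme(style["theme"])
    set_font(style["font"])
```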
Of course, the terminal device may also change other display styles on the desktop interface according to the current emotional state of the user, and no limitation is made to the display styles that can be changed here.
In another possible implementation manner, when an application program has been started on the terminal device, the terminal device may recommend relevant content to the user through that application program.
For example, assuming that the application currently started on the terminal device is a music playing program, if the emotion recognition model determines from the user's touch manner that the user's current emotional state is low, the music playing program may recommend some cheerful music to relieve the user's low mood. Alternatively, assuming that the application currently started on the terminal device is a video playing program, if the emotion recognition model determines from the user's touch manner that the user's current emotional state is sad, the video playing program may recommend some funny videos to help adjust the user's mood. Of course, the terminal device may also recommend relevant text content to the user through other application programs according to the user's current emotional state, for example relevant articles, jokes, and the like.
The application program capable of recommending the relevant content according to the emotional state of the user is not limited at all, and the relevant content recommended by the application program is not specifically limited.
It should be understood that, in addition to the above two possible implementation manners, the terminal device may also provide reasonable personalized services for the user according to the user's current emotional state in other ways as appropriate, for example by recommending operations that can help relieve the user's emotion; the personalized services the terminal device can provide are not specifically limited herein.
In the emotion recognition method provided by the embodiment of the application, the terminal device uses the emotion recognition model trained on the device itself to determine the user's current emotional state according to the touch manner adopted when the user operates the device. Compared with commonly used emotion recognition methods in the prior art, this method uses the emotion recognition model to determine the user's emotional state specifically from the touch manner adopted when operating the terminal device, ensuring the accuracy of the determined emotional state; moreover, determining the emotional state requires no additional external equipment, which genuinely serves the purpose of improving user experience.
Aiming at the model training method described above, the present application also provides a corresponding model training device, so that the model training method described above can be applied and implemented in practice.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a model training apparatus provided in the embodiment of the present application; as shown in fig. 4, the model training apparatus includes:
a training sample acquisition module 401, configured to acquire a touch manner when a user operates a terminal device, and mark an emotional state corresponding to the touch manner; taking the touch control mode and the emotion state corresponding to the touch control mode as training samples;
the model training module 402 is configured to train a classification model by using the training samples through a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes a touch control mode when a user operates the terminal device as input and takes an emotion state corresponding to the touch control mode as output.
In a specific implementation, the training sample obtaining module 401 may be specifically configured to execute the method in step 201, specifically refer to the description of step 201 in the embodiment of the method shown in fig. 2; the model training module 402 may be specifically configured to execute the method in step 202, and please refer to the description of step 202 in the embodiment of the method shown in fig. 2, which is not described herein again.
Optionally, the training sample obtaining module 401 is specifically configured to:
determining a reference time interval according to the trigger time corresponding to the touch control mode;
acquiring operation data content generated by the user operating the terminal equipment in the reference time interval;
and determining the emotional state of the user according to the operation data content, and using the emotional state as the emotional state corresponding to the touch mode.
In a specific implementation, the training sample obtaining module 401 may refer to the description of the content related to determining the emotional state corresponding to the touch manner in the embodiment shown in fig. 2.
Optionally, the training sample obtaining module 401 is specifically configured to:
calling a preset emotional state mapping relation table; the emotional state mapping relation table records the corresponding relation between the touch mode and the emotional state;
and searching the emotional state mapping relation table, and determining the emotional state corresponding to the touch mode.
In a specific implementation, the training sample obtaining module 401 may refer to the description of the content related to determining the emotional state corresponding to the touch manner in the embodiment shown in fig. 2.
Optionally, the training sample obtaining module 401 is specifically configured to:
acquiring touch data generated by a user for controlling the terminal equipment within a preset time period;
clustering the touch data to generate a touch data set, and determining a touch mode corresponding to the touch data set;
taking a touch data set with the most touch data as a target touch data set, and taking a touch mode corresponding to the target touch data set as a target touch mode; marking the emotional state corresponding to the target touch manner;
and taking the target touch mode and the emotion state corresponding to the target touch mode as training samples.
In a specific implementation, the training sample obtaining module 401 may refer to the description of the content related to determining the emotional state corresponding to the touch manner in the embodiment shown in fig. 2.
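The selection of the target touch manner can be sketched as follows; grouping records by their classified manner stands in for the clustering step, and the classify callback (for example the classify_touch sketch shown earlier) is an assumed helper.

```python
# Sketch of picking the most frequent touch manner within the preset time period.
from collections import Counter

def target_touch_manner(touch_records, classify):
    """touch_records: touch data collected in the preset time period;
    classify: function mapping one record to its touch manner."""
    manners = [classify(record) for record in touch_records]
    if not manners:
        return None
    manner, _count = Counter(manners).most_common(1)[0]
    return manner
```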
Optionally, the touch data includes: screen capacitance value change data and screen coordinate value change data.
Optionally, the apparatus further comprises:
the optimized training sample acquisition module is used for acquiring a touch mode when a user operates the terminal equipment as an optimized touch mode; marking the emotional state corresponding to the optimized touch control mode; taking the optimized touch control mode and the emotion state corresponding to the optimized touch control mode as an optimized training sample; the optimized training sample is used for performing optimized training on the emotion recognition model.
In a specific implementation, the optimized training sample obtaining module may refer to the description about obtaining the related content of the optimized training sample in the embodiment shown in fig. 2.
Optionally, the apparatus further comprises:
the feedback information acquisition module is used for acquiring feedback information of the user aiming at the emotion recognition model; the feedback information is used for representing whether the performance of the emotion recognition model meets the requirements of a user;
and the first optimization training module is used for performing optimization training on the emotion recognition model by using the optimization training sample when the feedback information represents that the performance of the emotion recognition model does not meet the requirements of a user.
In a specific implementation, the feedback information obtaining module and the first optimization training module may specifically refer to the description of the relevant content for performing optimization training on the emotion recognition model in the embodiment shown in fig. 2.
Optionally, the apparatus further comprises:
and the second optimization training module is used for optimizing training of the emotion recognition model by utilizing the optimization training sample when the terminal equipment is in a charging state and/or the residual electric quantity of the terminal equipment is higher than the preset electric quantity and/or the time length of the terminal equipment in an idle state exceeds the preset time length.
In a specific implementation, the second optimization training module may specifically refer to the description of the relevant content for performing optimization training on the emotion recognition model in the embodiment shown in fig. 2.
For each terminal device, the model training apparatus provided in the embodiment of the application can train, with a machine learning algorithm, an emotion recognition model specifically suited to that terminal device, based on the touch manners adopted when the user of the terminal device operates it and the emotional states corresponding to those touch manners; applying the emotion recognition model trained on a terminal device to that same device therefore ensures that the model can accurately determine the emotional state of the user according to the touch manner adopted when the user operates the device.
Aiming at the emotion recognition method described above, the application also provides a corresponding emotion recognition device, so that the emotion recognition method can be applied and realized in practice.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an emotion recognition apparatus provided in an embodiment of the present application; as shown in fig. 5, the emotion recognition apparatus includes:
a touch mode obtaining module 501, configured to obtain a touch mode when a user operates a terminal device;
an emotion state identification module 502, configured to determine, by using an emotion identification model, an emotion state corresponding to the touch manner, as a current emotion state of the user; the emotion recognition model is obtained by executing the model training method described in fig. 2.
In a specific implementation, the touch manner obtaining module 501 may be specifically configured to execute the method in step 301, specifically refer to the description of step 301 in the embodiment of the method shown in fig. 3; the emotional state identification module 502 may be specifically configured to execute the method in step 302, and please refer to the description of the step 302 in the embodiment of the method shown in fig. 3, which is not described herein again.
Optionally, the apparatus further comprises:
and the display style switching module is used for switching the display style of the desktop interface according to the current emotional state of the user under the condition that the terminal equipment displays the desktop interface.
In a specific implementation, the display style switching module may specifically refer to the description about the related content of switching the desktop interface display style in the embodiment shown in fig. 3.
Optionally, the apparatus further comprises:
and the content recommending module is used for recommending related content through the application program according to the current emotional state of the user under the condition that the terminal equipment starts the application program.
In a specific implementation, the content recommendation module may specifically refer to the description about recommending related content through an application program in the embodiment shown in fig. 3.
In the emotion recognition apparatus provided by the embodiment of the application, the terminal device uses the emotion recognition model trained on the device itself to determine the user's current emotional state according to the touch manner adopted when the user operates the device. The apparatus can use the emotion recognition model to determine the user's emotional state specifically from that touch manner, ensuring the accuracy of the determined emotional state; moreover, no additional external equipment is needed in the process of determining the emotional state, which genuinely serves the purpose of improving user experience.
The application also provides a server for training the model. Referring to fig. 6, fig. 6 is a schematic diagram of a server structure for training a model according to an embodiment of the present application. The server 600 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 622 (e.g., one or more processors), a memory 632, and one or more storage media 630 (e.g., one or more mass storage devices) for storing applications 642 or data 644. The memory 632 and the storage medium 630 may be transient or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processing unit 622 may be configured to communicate with the storage medium 630 and execute, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input/output interfaces 658, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so forth.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 6.
The CPU622 is configured to execute the following steps:
acquiring a touch mode when a user operates a terminal device, and marking an emotional state corresponding to the touch mode; taking the touch control mode and the emotion state corresponding to the touch control mode as training samples;
training a classification model by using the training sample by using a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes a touch control mode when a user operates the terminal device as input and takes an emotion state corresponding to the touch control mode as output.
Optionally, the CPU622 can also execute the method steps of any specific implementation of the model training method in the embodiment of the present application.
It should be noted that, when the server shown in fig. 6 is used to train the emotion recognition model, the server needs to communicate with the terminal device to obtain the training samples from it. It should be understood that training samples from different terminal devices should carry the identifiers of the corresponding terminal devices, so that the CPU 622 of the server can use the training samples from one and the same terminal device, with the model training method provided in the embodiment of the present application, to train an emotion recognition model suited to that terminal device.
The embodiment of the present application further provides another electronic device (which may be the terminal device described above) for training a model and recognizing emotion. The electronic device is configured to execute the model training method provided by the embodiment of the present application, training an emotion recognition model suitable for the electronic device, and/or to execute the emotion recognition method provided by the embodiment of the application, recognizing the current emotional state of the user according to the user's touch manner by using the trained emotion recognition model.
Fig. 7 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone, or to connect other electronic devices such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (namely, the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates, based on the angle, the distance that the lens module needs to compensate for, and lets the lens counteract the shake of the electronic device 100 through reverse movement, thereby achieving image stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing gaming scenarios.
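A simplified sketch of the compensation step mentioned above: for a small shake angle, the image shift on the sensor can be estimated from the angle and the focal length, and the lens is then driven by the same amount in the opposite direction. The numbers and the small-angle model are assumptions for illustration; real optical image stabilization is considerably more involved.

    // Minimal sketch, assuming a thin-lens, small-angle approximation.
    public final class AntiShakeSketch {
        // Approximate image displacement (mm) on the sensor for a given shake angle (radians).
        static double compensationDistanceMm(double shakeAngleRad, double focalLengthMm) {
            return focalLengthMm * Math.tan(shakeAngleRad);
        }

        public static void main(String[] args) {
            double offset = compensationDistanceMm(0.002, 4.0); // 2 mrad shake, 4 mm lens (assumed values)
            System.out.printf("drive the lens by %.4f mm in the opposite direction%n", offset);
        }
    }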
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover based on the magnetic sensor 180D, and then set features such as automatic unlocking upon opening according to the detected open or closed state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
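A minimal sketch, assuming the standard Android sensor APIs, of how a proximity reading could be used to decide that the device is held against the ear and the screen should be turned off. The turnScreenOff()/turnScreenOn() methods are placeholders for whatever platform mechanism is actually used (for example, a proximity wake lock).

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class ProximityScreenController implements SensorEventListener {
        private final SensorManager sensorManager;
        private final Sensor proximitySensor;

        public ProximityScreenController(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            proximitySensor = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        }

        public void start() {
            sensorManager.registerListener(this, proximitySensor, SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Small distance values mean an object (the ear) is near the sensor.
            if (event.values[0] < proximitySensor.getMaximumRange()) {
                turnScreenOff();   // placeholder for the platform's screen-off mechanism
            } else {
                turnScreenOn();    // placeholder
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        private void turnScreenOff() { /* platform-specific */ }
        private void turnScreenOn()  { /* platform-specific */ }
    }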
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint-based unlocking, application lock access, photographing, incoming call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy by using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
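The tiered strategy above can be pictured as a simple dispatch on the reported temperature. The thresholds and action names in the following sketch are placeholders; the text does not disclose concrete values.

    // Illustrative sketch of a tiered temperature strategy; thresholds and actions are placeholders.
    public final class ThermalPolicySketch {
        private static final float HIGH_TEMP_C     = 45f;   // assumed throttling threshold
        private static final float LOW_TEMP_C      = 0f;    // assumed battery-heating threshold
        private static final float VERY_LOW_TEMP_C = -10f;  // assumed voltage-boost threshold

        void onTemperatureReported(float tempC) {
            if (tempC > HIGH_TEMP_C) {
                reduceProcessorPerformance();  // lower power consumption for thermal protection
            } else if (tempC < VERY_LOW_TEMP_C) {
                boostBatteryOutputVoltage();   // avoid abnormal shutdown at very low temperature
            } else if (tempC < LOW_TEMP_C) {
                heatBattery();                 // keep the battery warm enough to operate
            }
        }

        private void reduceProcessorPerformance() { /* placeholder */ }
        private void heatBattery()                { /* placeholder */ }
        private void boostBatteryOutputVoltage()  { /* placeholder */ }
    }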
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touchscreen, also called a "touch panel". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch manner. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
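Because the touch sensor is what produces the touch data used elsewhere in this application to recognize a touch manner, the following sketch shows one plausible way to collect per-event touch features (coordinates, pressure, contact size, timestamp) from a view using the standard Android APIs. The TouchSample type and the collector class are illustrative assumptions, not the claimed implementation.

    import android.view.MotionEvent;
    import android.view.View;
    import java.util.ArrayList;
    import java.util.List;

    public class TouchFeatureCollector implements View.OnTouchListener {
        public static class TouchSample {
            public final float x, y, pressure, size;
            public final long timestampMs;
            public TouchSample(float x, float y, float pressure, float size, long timestampMs) {
                this.x = x; this.y = y; this.pressure = pressure; this.size = size;
                this.timestampMs = timestampMs;
            }
        }

        private final List<TouchSample> samples = new ArrayList<>();

        @Override
        public boolean onTouch(View v, MotionEvent event) {
            samples.add(new TouchSample(event.getX(), event.getY(),
                    event.getPressure(), event.getSize(), event.getEventTime()));
            return false; // do not consume the event; normal handling continues
        }

        public List<TouchSample> drainSamples() {
            List<TouchSample> copy = new ArrayList<>(samples);
            samples.clear();
            return copy;
        }
    }

Aggregating such samples over a preset time period would yield the kind of touch data that the training procedure described later in this application operates on.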
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration prompt. The motor 191 may be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations acting on different areas of the display screen 194. Different application scenarios (such as time reminders, received messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into or out of contact with the electronic device 100 by being inserted into or removed from the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 8 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 8, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 8, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100, such as management of call states (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction, for example notifications of download completion and message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is played, the electronic device vibrates, or an indicator light flashes.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library consists of two parts: one part is the functional interfaces that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a tap and the control corresponding to the tap is the camera application icon, the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or a video through the camera 193.
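A highly simplified sketch of the click-to-launch flow just described: a raw input event (coordinates plus timestamp) is mapped to the control under the touched position, and that control's click action is delivered, which in the example above would start the camera application. The class and method names here are illustrative; the real input pipeline is far more elaborate.

    // Minimal sketch of mapping a raw input event to a control and delivering the click.
    public class InputDispatchSketch {
        public static class RawInputEvent {
            public final float x, y;
            public final long timestampMs;
            public RawInputEvent(float x, float y, long timestampMs) {
                this.x = x; this.y = y; this.timestampMs = timestampMs;
            }
        }

        interface Control {
            boolean contains(float x, float y);
            void onClick();
        }

        private final java.util.List<Control> controls = new java.util.ArrayList<>();

        public void register(Control control) { controls.add(control); }

        // Framework layer: find the control under the touch position and deliver the click.
        public void dispatch(RawInputEvent event) {
            for (Control control : controls) {
                if (control.contains(event.x, event.y)) {
                    control.onClick(); // e.g. the camera icon's click starts the camera application
                    return;
                }
            }
        }
    }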
The present application further provides a computer-readable storage medium for storing program code, where the program code is used to execute any one of the implementations of the model training method and/or any one of the implementations of the emotion recognition method described in the foregoing embodiments.
The embodiments of the present application further provide a computer program product including instructions which, when run on a computer, cause the computer to perform any one of the implementations of the model training method and/or any one of the implementations of the emotion recognition method described in the foregoing embodiments.
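Purely as a sketch of the kind of program code such a medium might store (and not the claimed implementation), the following Java example trains a deliberately simple classifier that maps a touch-manner feature vector (for example, average pressure, average sliding speed, and touch frequency) to a labeled emotional state using nearest class centroids, and then predicts an emotional state for new touch data. The feature choice, the labels, and the centroid classifier are all assumptions made here for illustration.

    import java.util.*;

    public class EmotionModelSketch {
        private final Map<String, double[]> centroids = new HashMap<>();

        // samples: touch-manner feature vectors; labels: the emotional state marked for each sample.
        public void train(List<double[]> samples, List<String> labels) {
            Map<String, double[]> sums = new HashMap<>();
            Map<String, Integer> counts = new HashMap<>();
            for (int i = 0; i < samples.size(); i++) {
                double[] f = samples.get(i);
                double[] sum = sums.computeIfAbsent(labels.get(i), k -> new double[f.length]);
                for (int d = 0; d < f.length; d++) sum[d] += f[d];
                counts.merge(labels.get(i), 1, Integer::sum);
            }
            for (Map.Entry<String, double[]> e : sums.entrySet()) {
                double[] c = e.getValue().clone();
                int n = counts.get(e.getKey());
                for (int d = 0; d < c.length; d++) c[d] /= n;
                centroids.put(e.getKey(), c);
            }
        }

        // Returns the emotional state whose centroid is closest to the given touch-manner features.
        public String predict(double[] features) {
            String best = null;
            double bestDist = Double.MAX_VALUE;
            for (Map.Entry<String, double[]> e : centroids.entrySet()) {
                double dist = 0;
                for (int d = 0; d < features.length; d++) {
                    double diff = features[d] - e.getValue()[d];
                    dist += diff * diff;
                }
                if (dist < bestDist) { bestDist = dist; best = e.getKey(); }
            }
            return best;
        }
    }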
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (22)

1. A method of model training, the method comprising:
acquiring a touch manner used when a user operates a terminal device, and marking an emotional state corresponding to the touch manner; and taking the touch manner and the emotional state corresponding to the touch manner as a training sample;
training a classification model with the training sample by using a machine learning algorithm to obtain an emotion recognition model; wherein the emotion recognition model takes a touch manner used when the user operates the terminal device as input and takes an emotional state corresponding to the touch manner as output, and the emotion recognition model is specifically adapted to the terminal device;
wherein the acquiring a touch manner used when a user operates a terminal device, marking an emotional state corresponding to the touch manner, and taking the touch manner and the emotional state corresponding to the touch manner as a training sample comprises:
acquiring touch data generated by the user operating the terminal device within a preset time period;
clustering the touch data to generate touch data sets, and determining a touch manner corresponding to each touch data set;
taking the touch data set with the most touch data as a target touch data set, and taking the touch manner corresponding to the target touch data set as a target touch manner; marking an emotional state corresponding to the target touch manner;
and taking the target touch manner and the emotional state corresponding to the target touch manner as the training sample.
2. The method according to claim 1, wherein the marking of the emotional state corresponding to the touch manner comprises:
determining a reference time interval according to the trigger time corresponding to the touch manner;
acquiring operation data content generated by the user operating the terminal equipment in the reference time interval;
and determining the emotional state of the user according to the operation data content, and using the emotional state as the emotional state corresponding to the touch manner.
3. The method according to claim 1, wherein the marking of the emotional state corresponding to the touch manner comprises:
calling a preset emotional state mapping table, wherein the emotional state mapping table records the correspondence between touch manners and emotional states;
and searching the emotional state mapping table to determine the emotional state corresponding to the touch manner.
4. The method of claim 1, wherein the touch data comprises: data of changes in the capacitance values of the screen and data of changes in coordinate values.
5. The method of claim 1, wherein after obtaining the emotion recognition model, the method further comprises:
acquiring a touch manner used when the user operates the terminal device as an optimized touch manner; marking an emotional state corresponding to the optimized touch manner; and taking the optimized touch manner and the emotional state corresponding to the optimized touch manner as an optimized training sample, wherein the optimized training sample is used for performing optimization training on the emotion recognition model.
6. The method of claim 5, further comprising:
acquiring feedback information of a user for the emotion recognition model; the feedback information is used for representing whether the performance of the emotion recognition model meets the requirements of a user;
and when the feedback information represents that the performance of the emotion recognition model does not meet the requirements of the user, performing optimization training on the emotion recognition model by using the optimized training sample.
7. The method of claim 5, further comprising:
when the terminal device is in a charging state, and/or when the remaining battery level of the terminal device is higher than a preset level, and/or when the duration for which the terminal device has been in an idle state exceeds a preset duration, performing optimization training on the emotion recognition model by using the optimized training sample.
8. A method of emotion recognition, the method comprising:
acquiring a touch manner used when a user operates a terminal device;
determining an emotional state corresponding to the touch manner by using an emotion recognition model, as the current emotional state of the user; wherein the emotion recognition model is trained by performing the model training method according to any one of claims 1 to 7.
9. The method of claim 8, further comprising:
under the condition that the terminal device displays a desktop interface, switching the display style of the desktop interface according to the current emotional state of the user.
10. The method of claim 8, further comprising:
under the condition that the terminal device starts an application program, recommending related content through the application program according to the current emotional state of the user.
11. A model training apparatus, the apparatus comprising:
the training sample acquisition module is used for acquiring a touch manner used when a user operates a terminal device and marking an emotional state corresponding to the touch manner, and taking the touch manner and the emotional state corresponding to the touch manner as a training sample;
the model training module is used for training a classification model with the training sample by using a machine learning algorithm to obtain an emotion recognition model; the emotion recognition model takes a touch manner used when the user operates the terminal device as input and takes an emotional state corresponding to the touch manner as output; the emotion recognition model is specifically adapted to the terminal device;
wherein the training sample acquisition module is specifically configured to:
acquire touch data generated by the user operating the terminal device within a preset time period;
cluster the touch data to generate touch data sets, and determine a touch manner corresponding to each touch data set;
take the touch data set with the most touch data as a target touch data set, and take the touch manner corresponding to the target touch data set as a target touch manner; mark an emotional state corresponding to the target touch manner;
and take the target touch manner and the emotional state corresponding to the target touch manner as the training sample.
12. The apparatus of claim 11, wherein the training sample acquisition module is specifically configured to:
determine a reference time interval according to the trigger time corresponding to the touch manner;
acquire operation data content generated by the user operating the terminal device in the reference time interval;
and determine the emotional state of the user according to the operation data content, and use the emotional state as the emotional state corresponding to the touch manner.
13. The apparatus of claim 11, wherein the training sample acquisition module is specifically configured to:
call a preset emotional state mapping table, wherein the emotional state mapping table records the correspondence between touch manners and emotional states;
and search the emotional state mapping table to determine the emotional state corresponding to the touch manner.
14. The apparatus of claim 11, wherein the touch data comprises: data of changes in the capacitance values of the screen and data of changes in coordinate values.
15. The apparatus of claim 11, further comprising:
the optimized training sample acquisition module is used for acquiring a touch manner used when the user operates the terminal device as an optimized touch manner, marking an emotional state corresponding to the optimized touch manner, and taking the optimized touch manner and the emotional state corresponding to the optimized touch manner as an optimized training sample, wherein the optimized training sample is used for performing optimization training on the emotion recognition model.
16. The apparatus of claim 15, further comprising:
the feedback information acquisition module is used for acquiring feedback information of the user for the emotion recognition model; the feedback information is used for representing whether the performance of the emotion recognition model meets the requirements of the user;
and the first optimization training module is used for performing optimization training on the emotion recognition model by using the optimized training sample when the feedback information represents that the performance of the emotion recognition model does not meet the requirements of the user.
17. The apparatus of claim 15, further comprising:
the second optimization training module is used for performing optimization training on the emotion recognition model by using the optimized training sample when the terminal device is in a charging state, and/or when the remaining battery level of the terminal device is higher than a preset level, and/or when the duration for which the terminal device has been in an idle state exceeds a preset duration.
18. An emotion recognition apparatus, characterized in that the apparatus comprises:
the touch manner acquisition module is used for acquiring a touch manner used when a user operates a terminal device;
the emotional state recognition module is used for determining an emotional state corresponding to the touch manner by using an emotion recognition model, as the current emotional state of the user; the emotion recognition model is trained by performing the model training method according to any one of claims 1 to 7.
19. The apparatus of claim 18, further comprising:
the display style switching module is used for switching the display style of the desktop interface according to the current emotional state of the user under the condition that the terminal device displays a desktop interface.
20. The apparatus of claim 18, further comprising:
the content recommendation module is used for recommending related content through an application program according to the current emotional state of the user under the condition that the terminal device starts the application program.
21. An electronic device, wherein the electronic device comprises a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the model training method of any one of claims 1 to 7 and/or to perform the emotion recognition method of any one of claims 8 to 10, according to instructions in the program code.
22. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the model training method of any one of claims 1 to 7 and/or perform the emotion recognition method of any one of claims 8 to 10.
CN201910309245.5A 2019-04-17 2019-04-17 Model training method, emotion recognition method, and related device and equipment Active CN110134316B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910309245.5A CN110134316B (en) 2019-04-17 2019-04-17 Model training method, emotion recognition method, and related device and equipment
PCT/CN2020/084216 WO2020211701A1 (en) 2019-04-17 2020-04-10 Model training method, emotion recognition method, related apparatus and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910309245.5A CN110134316B (en) 2019-04-17 2019-04-17 Model training method, emotion recognition method, and related device and equipment

Publications (2)

Publication Number Publication Date
CN110134316A CN110134316A (en) 2019-08-16
CN110134316B true CN110134316B (en) 2021-12-24

Family

ID=67570305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910309245.5A Active CN110134316B (en) 2019-04-17 2019-04-17 Model training method, emotion recognition method, and related device and equipment

Country Status (2)

Country Link
CN (1) CN110134316B (en)
WO (1) WO2020211701A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134316B (en) * 2019-04-17 2021-12-24 华为技术有限公司 Model training method, emotion recognition method, and related device and equipment
CN114223139B (en) * 2019-10-29 2023-11-24 深圳市欢太科技有限公司 Interface switching method and device, wearable electronic equipment and storage medium
CN111166290A (en) * 2020-01-06 2020-05-19 华为技术有限公司 Health state detection method, equipment and computer storage medium
CN111530081B (en) * 2020-04-17 2023-07-25 成都数字天空科技有限公司 Game level design method and device, storage medium and electronic equipment
CN111626191B (en) * 2020-05-26 2023-06-30 深圳地平线机器人科技有限公司 Model generation method, device, computer readable storage medium and electronic equipment
CN112220479A (en) * 2020-09-04 2021-01-15 陈婉婷 Genetic algorithm-based examined individual emotion judgment method, device and equipment
CN112596405B (en) * 2020-12-17 2024-06-04 深圳市创维软件有限公司 Control method, device, equipment and computer readable storage medium for household appliances
CN112906555B (en) * 2021-02-10 2022-08-05 华南师范大学 Artificial intelligence mental robot and method for recognizing expressions from person to person
CN112949575A (en) * 2021-03-29 2021-06-11 建信金融科技有限责任公司 Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium
CN115657870A (en) * 2021-07-07 2023-01-31 荣耀终端有限公司 Method for adjusting sampling rate of touch screen and electronic equipment
CN113656635B (en) * 2021-09-03 2024-04-09 咪咕音乐有限公司 Video color ring synthesis method, device, equipment and computer readable storage medium
CN113744738B (en) * 2021-09-10 2024-03-19 安徽淘云科技股份有限公司 Man-machine interaction method and related equipment thereof
CN113791690B (en) * 2021-09-22 2024-03-29 入微智能科技(南京)有限公司 Man-machine interaction public equipment with real-time emotion recognition function
CN114363049A (en) * 2021-12-30 2022-04-15 武汉杰创达科技有限公司 Internet of things equipment multi-ID identification method based on personalized interaction difference
CN114819614B (en) * 2022-04-22 2024-10-15 支付宝(杭州)信息技术有限公司 Data processing method, device, system and equipment
CN115492493A (en) * 2022-07-28 2022-12-20 重庆长安汽车股份有限公司 Tail gate control method, device, equipment and medium
CN116662638B (en) * 2022-09-06 2024-04-12 荣耀终端有限公司 Data acquisition method and related device
CN115611393B (en) * 2022-11-07 2023-04-07 中节能晶和智慧城市科技(浙江)有限公司 Multi-end cooperative coagulant feeding method and system for multiple water plants
CN115457645B (en) * 2022-11-11 2023-03-24 青岛网信信息科技有限公司 User emotion analysis method, medium and system based on interactive verification
CN115496113B (en) * 2022-11-17 2023-04-07 深圳市中大信通科技有限公司 Emotional behavior analysis method based on intelligent algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926997A (en) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 Method for determining emotional information based on user input and terminal
CN105549885A (en) * 2015-12-10 2016-05-04 重庆邮电大学 Method and device for recognizing user emotion during screen sliding operation
CN106528538A (en) * 2016-12-07 2017-03-22 竹间智能科技(上海)有限公司 Method and device for intelligent emotion recognition
CN108227932A (en) * 2018-01-26 2018-06-29 上海智臻智能网络科技股份有限公司 Interaction is intended to determine method and device, computer equipment and storage medium
CN108334583A (en) * 2018-01-26 2018-07-27 上海智臻智能网络科技股份有限公司 Affective interaction method and device, computer readable storage medium, computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10127927B2 (en) * 2014-07-28 2018-11-13 Sony Interactive Entertainment Inc. Emotional speech processing
US10884503B2 (en) * 2015-12-07 2021-01-05 Sri International VPA with integrated object recognition and facial expression recognition
CN106055236A (en) * 2016-05-30 2016-10-26 努比亚技术有限公司 Content pushing method and terminal
CN108073336A (en) * 2016-11-18 2018-05-25 香港中文大学 User emotion detecting system and method based on touch
CN107608956B (en) * 2017-09-05 2021-02-19 广东石油化工学院 Reader emotion distribution prediction algorithm based on CNN-GRNN
CN110134316B (en) * 2019-04-17 2021-12-24 华为技术有限公司 Model training method, emotion recognition method, and related device and equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926997A (en) * 2013-01-11 2014-07-16 北京三星通信技术研究有限公司 Method for determining emotional information based on user input and terminal
CN105549885A (en) * 2015-12-10 2016-05-04 重庆邮电大学 Method and device for recognizing user emotion during screen sliding operation
CN106528538A (en) * 2016-12-07 2017-03-22 竹间智能科技(上海)有限公司 Method and device for intelligent emotion recognition
CN108227932A (en) * 2018-01-26 2018-06-29 上海智臻智能网络科技股份有限公司 Interaction is intended to determine method and device, computer equipment and storage medium
CN108334583A (en) * 2018-01-26 2018-07-27 上海智臻智能网络科技股份有限公司 Affective interaction method and device, computer readable storage medium, computer equipment

Also Published As

Publication number Publication date
CN110134316A (en) 2019-08-16
WO2020211701A1 (en) 2020-10-22

Similar Documents

Publication Publication Date Title
CN110134316B (en) Model training method, emotion recognition method, and related device and equipment
CN113794800B (en) Voice control method and electronic equipment
CN110910872B (en) Voice interaction method and device
CN110138959B (en) Method for displaying prompt of human-computer interaction instruction and electronic equipment
WO2020078299A1 (en) Method for processing video file, and electronic device
CN111316199B (en) Information processing method and electronic equipment
CN111078091A (en) Split screen display processing method and device and electronic equipment
CN111742539B (en) Voice control command generation method and terminal
WO2021258814A1 (en) Video synthesis method and apparatus, electronic device, and storage medium
CN113254409A (en) File sharing method, system and related equipment
CN112527093A (en) Gesture input method and electronic equipment
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
CN113970888A (en) Household equipment control method, terminal equipment and computer readable storage medium
CN114995715B (en) Control method of floating ball and related device
CN114115512A (en) Information display method, terminal device, and computer-readable storage medium
CN114095599A (en) Message display method and electronic equipment
CN110058729B (en) Method and electronic device for adjusting sensitivity of touch detection
CN112740148A (en) Method for inputting information into input box and electronic equipment
CN115359156B (en) Audio playing method, device, equipment and storage medium
CN113407300B (en) Application false killing evaluation method and related equipment
CN113380240B (en) Voice interaction method and electronic equipment
CN115437601A (en) Image sorting method, electronic device, program product, and medium
CN114740986B (en) Handwriting input display method and related equipment
CN115730091A (en) Comment display method and device, terminal device and readable storage medium
CN114003241A (en) Interface adaptation display method and system of application program, electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant