WO2017114130A1 - Method and apparatus for acquiring the state of a robot - Google Patents
Method and apparatus for acquiring the state of a robot
- Publication number
- WO2017114130A1 (PCT/CN2016/109111)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- state
- robot
- moment
- result
- output data
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1658—Programme controls characterised by programming, planning systems for manipulators characterised by programming language
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/0015—Face robots, animated artificial faces for imitating human expressions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1653—Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
Definitions
- the present invention relates to the field of robots, and in particular to a method and apparatus for acquiring the state of a robot.
- human emotions are not related only to the emotion of the previous moment and the current moment. Human emotions are formed both by the long-term evolution of human beings into individual personality and by the external emotional stimulation received over a long period. Therefore, if the emotional system of a robot is designed considering only externally input emotions and the emotion of the previous moment as influences on the current emotion, the robot's emotional system will be inaccurate and cannot be anthropomorphic.
- Embodiments of the present invention provide a method and apparatus for acquiring the state of a robot, so as to at least solve the prior-art technical problem that the calculation of the robot's emotional system involves only the currently input emotion and the influence of the previous moment's emotion on the current emotion, which leads to inaccurate results from the robot's emotional system.
- a method for acquiring the state of a robot includes: determining a state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result at the previous moment, wherein the state result at the previous moment is determined by the initial state result of the robot at the initial moment and the state output data at a plurality of moments before the previous moment; acquiring the state data received at the current moment; and determining the state output data of the robot at the current moment according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment; wherein the state data represents the externally input emotion information received by the robot, the state output data represents the emotion information the robot outputs to the outside,
- and the state result represents the personality information formed by the robot accumulating at least one piece of emotion information during operation.
- an apparatus for acquiring the state of a robot includes: a first determining module, configured to determine, according to the state output data of the robot at the previous moment and the state result of the previous moment, the state result of the robot at the current moment, wherein the state result of the previous moment is determined by the initial state result of the robot at the initial moment and the state output data at a plurality of moments before the previous moment; a first obtaining module, configured to acquire the state data received at the current moment; and a second determining module, configured to determine the state output data of the robot at the current moment according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment;
- wherein the state data represents the externally input emotion information received by the robot, the state output data represents the emotion information the robot outputs to the outside, and the state result represents the personality information formed by the robot accumulating at least one piece of emotion information during operation.
- the state result of the robot at the current moment is determined according to the state output data of the robot at the previous moment and the state result of the previous moment, and the state data received at the current moment is acquired.
- the state output data of the robot at the current moment is then determined, so that the robot's output is produced on the basis of the evolution of its personality.
- this achieves the technical effect of accurate emotional output from the robot, thereby solving the prior-art problem that the calculation of the robot's emotional system involves only the currently input emotion and the influence of the previous moment's emotion on the current emotion, resulting in an inaccurate emotional system.
- FIG. 1 is a flow chart of a method of acquiring a state of a robot according to an embodiment of the present invention
- FIG. 2 is a flow chart of an alternative method of acquiring a state of a robot in accordance with an embodiment of the present invention
- FIG. 3 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention
- FIG. 4 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention
- FIG. 5 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention.
- FIG. 8 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention.
- an embodiment of a method of acquiring the state of a robot is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as by a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than described herein.
- FIG. 1 is a flow chart of a method for acquiring a state of a robot according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
- Step S102: determining a state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result of the previous moment, wherein the state result of the previous moment is determined by the initial state result of the robot at the initial moment and the state output data at a plurality of moments before the previous moment.
- the state output data of the previous moment may be the emotion information that the robot output to the outside at the previous moment; the state result accumulated up to the previous moment may be the personality information of the robot at the previous moment; and the state result of the robot at the current moment may be the personality information of the robot at the current moment.
- the time in the above embodiment may be the time when the robot inputs or outputs the emotion information.
- the personality information of the robot at the current moment is determined by the state output data of the robot at the previous moment and the state result of the previous moment. It can be considered that the personality information of the robot at the current moment is determined by the emotion information input externally at the previous moment and by the personality information of the previous moment, so the personality information of the robot may be accumulated by the robot under the stimulation of externally input emotion information over a plurality of moments, wherein the robot initially has an initial state result whose initial value can be regarded as the "character" of the robot in its initial state.
- each time emotion information is input externally, it is superimposed on the initial value of the robot's initial state result, forming a robotic emotional system that better matches human character.
- for example, suppose the initial state of the robot is "sad"
- and the emotion information "happy" is input to the robot a plurality of times.
- the initial "sad" state and the repeatedly input "happy" emotion information together constitute the new personality information of the robot at the current moment. Since the "happy" emotion information has been input many times, the personality information of the robot is more inclined toward a cheerful state than the initial state.
- as another example, suppose the initial state of the robot is "sad" and the emotion information "angry" is input a plurality of times.
- the initial "sad" state and the repeatedly input "angry" emotion information constitute the new personality information of the robot at the current moment. Since the "angry" emotion information has been input many times, the robot's personality information is in a more introverted or pessimistic state than the initial state.
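The accumulation sketched in these examples can be illustrated in a few lines of Python. This is a hypothetical simulation: the blending rule and the 0.9/0.1 weights are illustrative assumptions, not the patent's formula.

```python
# Hypothetical sketch: repeated "happy" stimuli gradually shift a robot's
# personality vector away from an initially "sad" disposition.
# The blending rule and the retain=0.9 weight are illustrative assumptions.

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful", "calm"]

def one_hot(emotion):
    """Encode a single emotion as a 7-element basis vector."""
    return [1.0 if e == emotion else 0.0 for e in EMOTIONS]

def accumulate(state, stimulus, retain=0.9):
    """Blend the current personality with one external stimulus."""
    return [retain * s + (1.0 - retain) * x for s, x in zip(state, stimulus)]

state = one_hot("sad")      # the initial "character" of the robot
for _ in range(20):         # the same stimulus arrives many times
    state = accumulate(state, one_hot("happy"))

# After many "happy" inputs the happy component dominates the sad one.
happy, sad = state[0], state[1]
```

With these assumed weights, the "sad" component decays geometrically while the "happy" component grows toward 1, matching the narrative above.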
- Step S104: acquiring the state data received at the current moment.
- the current state data may be emotion information that is externally input to the robot at the current time.
- Step S106: determining the state output data of the robot at the current moment based on the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment.
- the state output data of the robot at the current time may be emotion information that the robot outputs to the outside at the current time.
- the state data represents the sentiment information of the external input received by the robot
- the state output data represents the sentiment information output by the robot to the outside
- the state result represents the personality information formed by the robot accumulating at least one piece of emotion information during operation.
- optionally, the personality information of the robot and the emotional output information of the robot at the previous moment can be input into the state-result calculation formula to obtain new personality information of the robot; the new personality information and the robot's previous-moment output information are then input into the state calculation formula to obtain the robot's new emotion output information.
- the above embodiment of the present application determines the state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result of the previous moment, acquires the state data received at the current moment, and determines the state output data of the robot at the current moment according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment.
- the state data of the current moment, the state output data of the previous moment, and the state result of the current moment are used, wherein the state result evolves from the initial state result of the robot at the initial moment.
- the output state data of the robot is thereby more anthropomorphic and more accurate, solving the prior-art technical problem that the calculation of the robot's emotional system involves only the currently input emotion and the influence of the previous moment's emotion on the current emotion, resulting in inaccurate operation results of the robot's emotional system.
- optionally, the step of determining the state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result of the previous moment includes:
- Step S1021: acquiring the state output data of the robot at the previous moment and the state result of the previous moment.
- the state output data of the robot at the previous moment may be the emotion information the robot output at the previous moment, and the state result of the previous moment may be the personality information of the robot at the previous moment, obtained through the evolution of the emotion information input at a plurality of moments before the previous moment.
- Step S1023: inputting the state output data of the previous moment and the state result of the previous moment into a preset state-result model, and determining the state result of the current moment, wherein the state-result model includes a weight function of the current moment determined by the state result of the previous moment.
- the preset state result model may be a function related to the time information.
- optionally, the state result of the current moment may be calculated as c(t) = w(t)·c(t-1) + e_o(t-1), where:
- c(t) is the state result of the robot at the current moment;
- w(t) is the weight function, which can be represented by a transition matrix used to characterize, in the process of the robot transitioning from the state result of the previous moment to the state result of the current moment, the probability of each emotion converting into any one or more other emotions; its degree of importance is related to the moment information of the robot;
- c(t-1) is the state result of the robot at the previous moment;
- e_o(t-1) is the state output data of the robot at the previous moment.
- the weight function w(t) may represent the importance of the state result of the previous moment relative to the state result of the robot at the current moment. It can be considered that the shorter the time between the current moment and the previous moment,
- the larger the value of the weight function, indicating that the state result of the previous moment is more important to the state result of the robot at the current moment than the input value of the previous moment.
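Read this way, one step of the state-result update can be sketched as follows. This is a minimal sketch: the additive combination c(t) = w(t)·c(t-1) + e_o(t-1) and the matrix values are assumptions inferred from the terms the text names, not taken verbatim from the patent.

```python
# Sketch of one state-result update over the seven-emotion basis
# {happy, sad, angry, surprised, disgusted, fearful, calm}.
# The combination rule and the example matrix are assumptions.

N = 7

def mat_vec(w, v):
    """Multiply a 7x7 transition matrix by a 7-element state vector."""
    return [sum(w[i][j] * v[j] for j in range(N)) for i in range(N)]

def state_result(w_t, c_prev, e_o_prev):
    """c(t) from the weight function w(t), c(t-1) and the output e_o(t-1)."""
    wc = mat_vec(w_t, c_prev)
    return [a + b for a, b in zip(wc, e_o_prev)]

# Illustrative inputs: a mostly-retaining transition matrix, a previously
# "sad" personality, and a mildly "happy" previous output.
w_t = [[0.9 if i == j else 0.0 for j in range(N)] for i in range(N)]
c_prev = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
e_o_prev = [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
c_t = state_result(w_t, c_prev, e_o_prev)
```

Under these assumed inputs the "sad" component is attenuated by the transition matrix while the "happy" component is introduced by the previous output, which is the personality-drift behaviour the text describes.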
- the foregoing embodiment of the present application acquires the state output data of the robot at the previous moment and the state result of the previous moment, inputs them into the preset state-result model, and determines the state result of the current moment, thereby realizing the technical purpose of determining the state result of the robot at the current moment from the state output data of the robot at the previous moment and the state result of the previous moment.
- optionally, the method further includes the following step:
- Step S1025: inputting the state output data of the moment before the previous moment and the state result of the moment before the previous moment into the state-result model, and obtaining the state result of the robot at the previous moment.
- the above step provides a method for obtaining the state result of the robot. It can be concluded that the state result of the robot at any moment is related to the state results of the robot at a plurality of previous moments, thereby realizing the process of the robot's personality evolution.
- the above method of the present application obtains the state result of the robot at the previous moment by inputting the state output data of the moment before the previous moment and the state result of the moment before the previous moment into the state-result model.
- the above method provides a way of acquiring the state result of the robot, and realizes the technical effect that, when calculating the state data of the robot, the state results of the robot at the current moment and at a plurality of moments before the current moment are taken into account.
- the robot thus has an anthropomorphic personality-evolution process, further solving the prior-art technical problem that the calculation of the robot's emotional system involves only the currently input emotion and the influence of the previous moment's emotion on the current emotion, resulting in inaccurate results from the robot's emotional system.
- optionally, in step S106, the step of determining the state output data of the robot at the current moment according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment includes:
- Step S1061: acquiring the state data of the robot at the current moment.
- Step S1063: inputting the state output data of the robot at the previous moment, the state data at the current moment, and the state result at the current moment into a preset state calculation formula, and determining the state output data of the robot at the current moment.
- the above steps of the present application acquire the state data of the robot at the current moment, input the state output data of the previous moment, the state data of the current moment, and the state result of the current moment into the preset state calculation formula, determine the state output data of the robot at the current moment, and achieve the purpose of producing the state output data for the robot to output.
- optionally, in step S1063, inputting the state output data of the robot at the previous moment, the state data at the current moment, and the state result at the current moment into the preset state calculation formula to determine the state output data of the robot at the current moment includes:
- Step S1065: calculating the state output data of the robot at the current moment by the following formula: e_o(t) = a·c(t)·e_i(t) + (1-a)·e_o(t-1), where:
- e_o(t) is the state output data of the robot at the current moment;
- c(t) is the state result of the robot at the current moment;
- a is the proportional coefficient;
- e_i(t) is the state data of the robot at the current moment;
- e_o(t-1) is the state output data of the robot at the previous moment.
- optionally, a may be any value greater than 0 and less than 1, and is used, when calculating the state output data of the robot, to determine the weight given to the state data and the state result of the current moment relative to the output of the robot at the previous moment.
- e_i(t) and e_o(t) may each be a matrix of one row and multiple columns. Take as an example an emotional system including "happy, sad, angry, surprised, disgusted, fearful and calm".
- the basic matrix {happy, sad, angry, surprised, disgusted, fearful, calm} can be obtained; when the emotion information input to the robot from outside is "happy", e_i(t) = {1, 0, 0, 0, 0, 0, 0};
- when the emotion information input is "happy" and "surprised", e_i(t) = {1, 0, 0, 1, 0, 0, 0}.
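The vector encoding described above can be sketched directly; the `encode` helper is an illustrative name, not from the patent.

```python
# Encoding emotion information over the basis
# {happy, sad, angry, surprised, disgusted, fearful, calm},
# following the e_i(t) examples in the text.

BASIS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful", "calm"]

def encode(*emotions):
    """Return a one-row, seven-column vector with 1 for each named emotion."""
    return [1 if b in emotions else 0 for b in BASIS]

e_happy = encode("happy")                          # -> [1, 0, 0, 0, 0, 0, 0]
e_happy_surprised = encode("happy", "surprised")   # -> [1, 0, 0, 1, 0, 0, 0]
```

The same encoding serves for both the input e_i(t) and the output e_o(t), since the text states that both range over the same seven-emotion basis.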
- the above steps of the present application provide a scheme for calculating the state output data of the robot at the current moment by using the state output data of the robot at the previous moment, the state data at the current moment, and the state result at the current moment.
- the above solution obtains the state output data of the robot at the current moment through the calculation formula.
- the proportional coefficient a is any value between 0 and 1.
- the method provided by the above embodiment of the present application determines the value range of the proportional coefficient a, and realizes the effect of calculating the state output data of the robot at the current time according to the formula for calculating the state output data of the robot at the current time.
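Putting the terms together, the output calculation can be sketched as follows. The elementwise combination e_o(t) = a·c(t)·e_i(t) + (1-a)·e_o(t-1) is an assumed reading, since the text only names the variables c(t), a, e_i(t), and e_o(t-1).

```python
# Sketch of the output step under one hedged reading of the variables:
# e_o(t) = a*c(t)*e_i(t) + (1-a)*e_o(t-1), applied elementwise over the
# seven-emotion basis. The exact combination is an assumption.

def state_output(c_t, e_i_t, e_o_prev, a=0.7):
    """Blend the current input (modulated by the state result) with the
    previous output, using the proportional coefficient a in (0, 1)."""
    assert 0.0 < a < 1.0, "the proportional coefficient a must lie in (0, 1)"
    return [a * c * x + (1.0 - a) * y
            for c, x, y in zip(c_t, e_i_t, e_o_prev)]

# Illustrative inputs: neutral personality, "happy" input now, "calm" before.
c_t = [1.0] * 7
e_i_t = [1, 0, 0, 0, 0, 0, 0]
e_o_prev = [0, 0, 0, 0, 0, 0, 1]
e_o_t = state_output(c_t, e_i_t, e_o_prev, a=0.7)
```

With a = 0.7 the output leans toward the current "happy" input while retaining a trace of the previous "calm" output, which is the role the text assigns to the proportional coefficient.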
- optionally, the step of inputting the state output data of the previous moment and the state result of the previous moment into the preset state-result model and determining the state result of the current moment includes: calculating the state result of the robot by the following formula: c(t) = w(t)·c(t-1) + e_o(t-1), where:
- c(t) is the state result of the robot at the current moment;
- c(t-1) is the state result of the robot at the previous moment;
- e_o(t-1) is the state output data of the robot at the previous moment;
- w(t) is a weight function.
- w(t) is used as a weight function for characterizing, in the process of the robot transitioning from the state result of the previous moment to the state result of the current moment, the probability of each emotion converting into any one or more other emotions.
- optionally, the weight function can be determined using an expert system; the expert system determining the weight function can include, for the process in which the robot transitions from one state result to another in different situations, a probability factor table of each emotion converting into one or more other emotions, and the probability factor table can be composed of empirical values obtained by simulating human emotion changes.
- optionally, a factor-analysis weighting method may be used to obtain the weight function of the state result at the previous moment. In that case, for the state result of the previous moment, the contribution rate of each emotion converting into the other emotions is calculated. For example, the contribution rate of "happy" converting to "calm" can be 25%, and the contribution rate of "happy" converting to "fear" may be 3%; the larger the contribution rate, the larger the corresponding value in the state matrix used to express the weight function.
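A probability-factor table of this kind can be sketched as one row of the weight-function matrix. Every rate below except the quoted 25% ("happy" to "calm") and 3% ("happy" to "fear") is an invented placeholder.

```python
# Sketch: build one row of the weight-function matrix from a contribution-rate
# table. Only the happy->calm (0.25) and happy->fearful (0.03) rates come from
# the text; the other rates are illustrative assumptions.

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful", "calm"]

CONTRIBUTION = {  # from-emotion -> {to-emotion: contribution rate}
    "happy": {"happy": 0.60, "sad": 0.02, "angry": 0.02, "surprised": 0.05,
              "disgusted": 0.03, "fearful": 0.03, "calm": 0.25},
}

def matrix_row(source):
    """One row of w(t): the probability of `source` converting to each emotion."""
    rates = CONTRIBUTION[source]
    row = [rates.get(e, 0.0) for e in EMOTIONS]
    assert abs(sum(row) - 1.0) < 1e-9, "each row should sum to 1"
    return row

happy_row = matrix_row("happy")
```

Filling in one such row per emotion yields the full 7×7 transition matrix that the expert-system or factor-analysis approaches described above would produce.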
- the method of determining the weight function of the state result of the robot at the last moment may be any one of the methods provided in the above embodiments, but is not limited thereto.
- the status data and the status output data comprise a combination of any one or more of the following emotional information: happy, sad, angry, surprised, disgusted, fearful, and calm.
- Table 1 is an example, in an embodiment of the present invention, of the weight values for converting the initial state result of the robot, i.e. an emotion conversion matrix over the seven emotions, as shown in Table 1.
- in one case the weight for converting the robot's initial state result is 0.6, and when the state data of the next moment is fear, the weight for converting the robot's initial state result is 0.25.
- the foregoing embodiment of the present application provides the optional emotion information for the state data and the state output data of the robot, so that the state data and state output data of the robot can each be one piece of emotion information or a combination of multiple pieces of emotion information.
- this further makes the robot's emotional system more anthropomorphic.
- FIG. 3 is a schematic structural diagram of an apparatus for acquiring a state of a robot according to an embodiment of the present invention.
- the depicted architecture is only one example of a suitable environment and is not intended to limit the scope of the application.
- nor should the device that acquires the state of the robot be considered to have any dependency on, or requirement for, any one of the components or combinations shown in FIG. 3.
- the device for acquiring the state of the robot may include: a first determining module 30, a first obtaining module 32, and a second determining module 34, wherein:
- the first determining module 30 is configured to determine a state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result of the previous moment, wherein the state result of the previous moment is determined by the initial state result of the robot at the initial moment and the state output data at a plurality of moments before the previous moment.
- the state output data of the previous moment may be the emotion information that the robot output to the outside at the previous moment; the state result accumulated up to the previous moment may be the personality information of the robot at the previous moment; and the state result of the robot at the current moment may be the personality information of the robot at the current moment.
- the moment in the above embodiment may be the time when the robot inputs or outputs the emotion information.
- the personality information of the robot at the current moment is determined by the state output data of the robot at the previous moment and the state result of the previous moment. It can be considered that the personality information of the robot at the current moment is determined by the emotion information input externally at the previous moment and by the personality information of the previous moment, so the personality information of the robot may be accumulated by the robot under the stimulation of externally input emotion information over a plurality of moments, wherein the robot initially has an initial state result whose initial value can be regarded as the "character" of the robot in its initial state.
- each time emotion information is input externally, it is superimposed on the initial value of the robot's initial state result, forming a robotic emotional system that better matches human character.
- the first obtaining module 32 is configured to acquire state data received at the current time.
- a second determining module 34, configured to determine the state output data of the robot at the current moment according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment;
- the state data represents the sentiment information of the external input received by the robot
- the state output data represents the sentiment information output by the robot to the outside
- the state result represents the personality information formed by the robot accumulating at least one sentiment information during the running process.
- in the above embodiment of the present application, the first determining module determines the state result of the robot at the current moment according to the state output data of the robot at the previous moment and the state result of the previous moment, and the first obtaining module acquires the state data received at the current moment.
- the state output data of the robot at the current moment is determined by the second determining module according to the state data of the robot at the current moment, the state output data at the previous moment, and the state result at the current moment.
- the state data of the current moment, the state output data of the previous moment, and the state result of the current moment are used, wherein the state result evolves from the initial state result of the robot at the initial moment.
- the state output data of the robot is thereby more anthropomorphic and more accurate, solving the prior-art technical problem that the calculation of the robot's emotional system involves only the currently input emotion and the influence of the previous moment's emotion on the current emotion, resulting in inaccurate operation results of the robot's emotional system.
- the first determining module 30 includes:
- the first obtaining sub-module 301 is configured to acquire the state output data of the robot at the previous moment and the state result of the previous moment.
- the state output data of the robot at the previous moment may be the emotion information the robot output at the previous moment, and the state result of the previous moment may be the personality information of the robot at the previous moment, obtained after the evolution of the emotion information input at a plurality of moments before the previous moment.
- the first determining sub-module 302 is configured to input the state output data of the previous moment and the state result of the previous moment into a preset state-result model and determine the state result of the current moment, where the state-result model includes a weight function of the current moment determined by the state result of the previous moment.
- optionally, the state result of the current moment may be calculated as c(t) = w(t)·c(t-1) + e_o(t-1), where:
- c(t) is the state result of the robot at the current moment;
- w(t) is the weight function related to the moment information;
- c(t-1) is the state result of the robot at the previous moment;
- e_o(t-1) is the state output data of the robot at the previous moment.
- the weight function w(t) may represent the importance of the state result of the previous moment relative to the state result of the robot at the current moment. It can be considered that the shorter the time between the current moment and the previous moment,
- the larger the value of the weight function, indicating that the state result of the previous moment is more important to the state result of the robot at the current moment than the input value of the previous moment.
- the foregoing embodiment of the present application uses the first obtaining sub-module to acquire the state output data of the robot at the previous moment and the state result of the previous moment, inputs them into the preset state-result model, and determines the state result of the current moment through the first determining sub-module, thereby realizing the technical purpose of determining the state result of the robot at the current moment from the state output data of the robot at the previous moment and the state result of the previous moment.
- the device provided by the foregoing embodiment of the present application further includes:
- the second obtaining module 50 is configured to input the state output data of the moment before the previous moment and the state result of the moment before the previous moment into the state-result model, and acquire the state result of the robot at the previous moment.
- the above device of the present application inputs the state output data of the moment before the previous moment and the state result of the moment before the previous moment into the state-result model through the second obtaining module to obtain the state result of the robot at the previous moment.
- the above arrangement provides a way of acquiring the state result of the robot, and realizes the technical effect that the state results of the robot at the current moment and at a plurality of moments before the current moment are taken into account when calculating the state output data of the robot.
- the second determining module 34 includes:
- the second obtaining submodule 341 is configured to acquire state data of the robot at the current time.
- the second determining sub-module 343 is configured to input the state output data of the robot at the previous moment, the state data at the current moment, and the state result at the current moment into a preset state calculation formula, and determine the state output data of the robot at the current moment.
- in the above steps of the present application, the second acquiring sub-module acquires the robot's state data at the current moment, and the second determining sub-module inputs the previous moment's state output data, the current state data, and the current state result into the preset state calculation formula to determine the robot's state output data at the current moment, so that the robot can output the computed state output data.
- the second determining submodule 343 includes:
- the first calculating unit 3431 is configured to calculate the state output data of the robot at the current moment by the following formula:
- eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1),
- where eo(t) is the state output data of the robot at the current moment, c(t) is the state result of the robot at the current moment, a is a proportional coefficient, ei(t) is the state data of the robot at the current moment, and eo(t-1) is the state output data of the robot at the previous moment.
- a may be any value greater than 0 and less than 1; when the robot's state output data is calculated, it determines the relative weight of the robot's state data and state result at the current moment versus its state output data and state result at the previous moment.
- the above steps of the present application provide a calculating unit that uses the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment to compute, via the above formula, the robot's state output data at the current moment.
- the proportional coefficient a is an arbitrary value between 0 and 1.
- the above embodiment of the present application thus fixes the value range of the proportional coefficient a, so that the robot's state output data at the current moment can be calculated according to the formula above.
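As an illustrative sketch only (not part of the patent), the output formula eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1) can be exercised on 7-element emotion vectors. The elementwise combination, the all-ones personality weights and the choice a=0.6 are assumptions for demonstration:

```python
# Sketch of the output formula eo(t) = c(t)*a*ei(t) + c(t)*(1-a)*eo(t-1).
# The patent does not fix the vector semantics, so this sketch assumes all
# quantities are 7-element vectors (one slot per emotion) combined elementwise.

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful", "calm"]

def state_output(c_t, e_i_t, e_o_prev, a=0.5):
    """Blend the current input emotion with the previous output, scaled by
    the current personality state c(t). a in (0, 1) weights input vs. history."""
    assert 0.0 < a < 1.0, "proportional coefficient a must lie in (0, 1)"
    return [c * (a * ei + (1.0 - a) * eo)
            for c, ei, eo in zip(c_t, e_i_t, e_o_prev)]

# Example: a neutral personality, a "happy" input, a previously "calm" output.
c_t = [1.0] * 7                    # hypothetical personality weights
e_i = [1, 0, 0, 0, 0, 0, 0]        # input: happy
e_o_prev = [0, 0, 0, 0, 0, 0, 1]   # previous output: calm
print(state_output(c_t, e_i, e_o_prev, a=0.6))
```

With a=0.6 the output leans toward the fresh "happy" input while retaining 40% of the earlier "calm" output, which is the trade-off the coefficient a is meant to control.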
- the first determining sub-module 302 includes:
- the second calculating unit 3021 is configured to calculate the state result of the robot by the following formula:
- c(t)=c(t-1)*eo(t-1)*w(t),
- where c(t) is the state result of the robot at the current moment, c(t-1) is the state result of the robot at the previous moment, eo(t-1) is the state output data of the robot at the previous moment, and w(t) is a weight function.
- the state data and the state output data comprise any one, or a combination, of the following emotion information: happy, sad, angry, surprised, disgusted, fearful and calm.
- the foregoing embodiment of the present application thus specifies the optional emotion information for the robot's state data and state output data, so that each may consist of a single emotion or a combination of several emotions, which further makes the robot's emotion system more anthropomorphic.
- in the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division into units may be a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or in other forms.
- the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- in addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
- the foregoing storage medium includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Abstract
A method and device for acquiring the state of a robot. The method comprises: determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment (S102); acquiring the state data received at the current moment (S104); and determining the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment (S106). The method and the corresponding device solve the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the influence of the previous moment's emotion on the current emotion yields inaccurate results, and make the computed emotion system of the robot more anthropomorphic.
Description
The present invention relates to the field of robots, and in particular to a method and device for acquiring the state of a robot.

With the development of robot technology, research on robots' emotional changes has deepened. In the prior art, a robot can output a corresponding emotion according to external stimuli it receives, or, when outputting its emotion at the current moment, take into account the influence of its emotion at the previous moment on the current one, making its emotions more anthropomorphic.

However, human emotion involves more than the previous and current moments: it is shaped by a personality formed through long-term evolution and by external emotional stimuli received over a long period. Therefore, if the design of a robot's emotion system considers only the currently input emotion and the influence of the previous moment's emotion on the current one, the emotion system runs inaccurately and cannot behave anthropomorphically.

No effective solution has yet been proposed for this prior-art problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
Summary of the Invention
The embodiments of the present invention provide a method and device for acquiring the state of a robot, so as to at least solve the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the influence of the previous moment's emotion on the current emotion yields inaccurate results.

According to one aspect of the embodiments of the present invention, a method for acquiring the state of a robot is provided, comprising: determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment; acquiring the state data received at the current moment; and determining the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment; where the state data represents externally input emotion information received by the robot, the state output data represents emotion information output by the robot, and the state result represents personality information formed by the robot accumulating at least one piece of emotion information during operation.

According to another aspect of the embodiments of the present invention, a device for acquiring the state of a robot is further provided, comprising: a first determining module, configured to determine the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment; a first acquiring module, configured to acquire the state data received at the current moment; and a second determining module, configured to determine the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment; where the state data represents externally input emotion information received by the robot, the state output data represents emotion information output by the robot, and the state result represents personality information formed by the robot accumulating at least one piece of emotion information during operation.

In the embodiments of the present invention, the robot's state result at the current moment is determined from its state output data and state result at the previous moment, and the state data received at the current moment is acquired, so that the robot's state output data at the current moment is determined from its state data at the current moment, its state output data at the previous moment, and its state result at the current moment. This achieves the technical effect of obtaining an accurate emotional output based on the evolution of the robot's personality, and thus solves the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of this application; the illustrative embodiments of the present invention and their descriptions are used to explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a flowchart of a method for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 2 is a flowchart of an optional method for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 6 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention;

Fig. 7 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention; and

Fig. 8 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention shall fall within the protection scope of the present invention.

It should be noted that the terms "first", "second", etc. in the description, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprise" and "have", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product or device.
Embodiment 1
According to an embodiment of the present invention, a method embodiment for acquiring the state of a robot is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that here.

Fig. 1 is a flowchart of a method for acquiring the state of a robot according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S102: determine the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment.

Specifically, in step S102, the state output data at the previous moment may be the emotion information the robot output at the previous moment; the state result accumulated before the previous moment may be the robot's personality information at the previous moment; and the robot's state result at the current moment may be its personality information at the current moment. A "moment" in the above embodiments may be a time at which the robot inputs or outputs emotion information.

It is worth noting that, since the robot's personality information at the current moment is determined by its state output data and state result at the previous moment, the robot's personality information can be regarded as accumulated from the stimuli of externally input emotion information over multiple moments. The robot initially has an initial state result, which can be regarded as its "character" in the initial state; as different emotion information is input from outside, this information is superimposed on the initial state result, forming a robot emotion system that better matches a human personality.
As an optional embodiment, in an example where the robot's initial state is "sad", after "happy" emotion information has been input to the robot many times, the initial "sad" state and the multiple "happy" inputs together constitute the robot's new personality information at the current moment; because "happy" has been input many times, this personality leans more toward a cheerful state than the initial one.

In another optional embodiment, still with an initial state of "sad", after "sad" emotion information has been input many times, the initial "sad" state and the multiple "sad" inputs constitute the robot's new personality information at the current moment; because negative emotion information has been input many times, this personality leans more toward an introverted or pessimistic state than the initial one.
Step S104: acquire the state data received at the current moment.

Specifically, in step S104, the current state data may be emotion information externally input to the robot at the current moment.

Step S106: determine the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment.

Specifically, in step S106, the robot's state output data at the current moment may be the emotion information the robot outputs at the current moment.
In an optional embodiment, the state data represents externally input emotion information received by the robot, the state output data represents emotion information output by the robot, and the state result represents personality information formed by the robot accumulating at least one piece of emotion information during operation.

In another optional embodiment, as shown in Fig. 2, the robot's personality information and its emotion output information at the previous moment may be input into the state-result calculation formula to obtain the robot's new personality information; the new personality information and the robot's output information at the previous moment are then input into the state calculation formula to obtain the robot's new emotion output information.

As can be seen from the above, in the foregoing embodiment of this application, the robot's state result at the current moment is determined from its state output data and state result at the previous moment, the state data received at the current moment is acquired, and the robot's state output data at the current moment is determined from its state data at the current moment, its state output data at the previous moment, and its state result at the current moment. Because the state result used here is determined by the robot's initial state result at the initial moment and the state data at multiple moments before the previous moment, the robot's output state data becomes more anthropomorphic and more accurate, which solves the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
Optionally, in step S102 above, determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment comprises:

Step S1021: acquire the robot's state output data at the previous moment and the state result at the previous moment.

Specifically, in step S1021, the robot's state output data at the previous moment may be the emotion information the robot output at the previous moment, and the state result at the previous moment may be the robot's personality information at the previous moment, obtained through the evolution of the emotion information input at multiple moments before the previous moment.
Step S1023: input the state output data and the state result of the previous moment into a preset state-result model to determine the state result at the current moment, where the state-result model includes a weight function of the current moment determined by the state result of the previous moment.

Specifically, in step S1023, the preset state-result model may be a function related to time information. As an optional embodiment, in an example where the state result at the current moment is c(t)=f(w(t),c(t-1),eo(t-1)), with f(w(t),c(t-1),eo(t-1))=c(t-1)*eo(t-1)*w(t): c(t) is the robot's state result at the current moment; w(t) is the weight function, which can be represented by a transfer matrix and characterizes, during the transition from the previous moment's state result to the current one, the probability of each emotion converting into any one or more emotions, this weighting being related to the robot's time information; c(t-1) is the robot's state result at the previous moment; and eo(t-1) is the robot's state output data at the previous moment.

It is worth noting that the weight function w(t) can be understood as the importance of the previous moment's state result relative to the robot's state result at the current moment: the shorter the interval between the current moment and the previous moment, the larger the value of the weight function may be, meaning that the previous moment's state result influences the current state result more strongly than the previous moment's input does.
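The state-result model c(t)=c(t-1)*eo(t-1)*w(t) can be sketched in code. The product order is underspecified in the text, so the sketch below elementwise-multiplies c(t-1) with eo(t-1) and then applies the 7x7 transfer matrix w(t) to the resulting row vector; this is one plausible reading, not the patent's definitive implementation:

```python
# Sketch of c(t) = f(w(t), c(t-1), eo(t-1)) = c(t-1)*eo(t-1)*w(t), assuming
# c and eo are 7-element row vectors and w is a 7x7 transfer matrix.

def state_result(c_prev, eo_prev, w):
    """Update the personality state from the previous state, the previous
    output, and the time-dependent transfer matrix w."""
    mixed = [c * e for c, e in zip(c_prev, eo_prev)]   # c(t-1) * eo(t-1)
    n = len(mixed)
    # Row vector times matrix: redistribute each emotion's mass via w.
    return [sum(mixed[i] * w[i][j] for i in range(n)) for j in range(n)]

# Sanity check: an identity transfer matrix leaves the mixed vector unchanged.
I = [[1 if i == j else 0 for j in range(7)] for i in range(7)]
print(state_result([1, 0, 0, 0, 0, 0, 1], [0.5, 0, 0, 0, 0, 0, 1], I))
```

A non-trivial w(t) would then spread mass between emotions, which is what gives the personality its gradual drift from moment to moment.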
As can be seen from the above, the foregoing embodiment of this application acquires the robot's state output data and state result at the previous moment and inputs them into the preset state-result model to determine the state result at the current moment, thereby achieving the technical purpose of determining the robot's current state result from the previous moment's state output data and state result.

Optionally, before step S102 above, that is, before determining the robot's state result at the current moment according to its state output data and state result at the previous moment, the method further comprises the following step:

Step S1025: input the state output data and the state result of the moment before the previous moment into the state-result model to obtain the robot's state result at the previous moment.
The above step provides a way of obtaining the robot's state result; it follows that the robot's state result at any moment is related to its state results at multiple earlier moments, thereby realizing the evolution of the robot's personality.

It is worth noting that when the robot's emotion system runs for the first time and no state results from earlier moments are available, the robot has a preset initial state result.

As can be seen from the above, the method obtains the robot's state result at the previous moment by inputting the state output data and state result of the moment before it into the state-result model. This provides a way of obtaining the robot's state result so that, when computing the robot's state data, the state results at the current moment and at multiple earlier moments are all taken into account, giving the robot an anthropomorphic process of personality evolution and further solving the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
Optionally, in step S106, determining the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment comprises:

Step S1061: acquire the robot's state data at the current moment.

Step S1063: input the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into a preset state calculation formula, and determine the robot's state output data at the current moment.

As can be seen from the above, these steps acquire the robot's state data at the current moment and input the previous moment's state output data, the current state data, and the current state result into the preset state calculation formula to determine the robot's state output data at the current moment, so that the robot outputs the computed state output data.
Optionally, in step S1063 above, inputting the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into the preset state calculation formula to determine its state output data at the current moment comprises:

Step S1065: calculate the robot's state output data at the current moment by the following formula:
eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1),
where eo(t) is the robot's state output data at the current moment, c(t) is its state result at the current moment, a is a proportional coefficient, ei(t) is its state data at the current moment, and eo(t-1) is its state output data at the previous moment.

Specifically, in the above formula, a may be any value greater than 0 and less than 1, and determines, when computing the robot's state output data, the relative weights of the robot's state data and state result at the current moment versus its state output data and state result at the previous moment.

As an optional embodiment, ei(t) and eo(t) may be one-row, multi-column matrices. In an example where the emotion system includes "happy, sad, angry, surprised, disgusted, fearful and calm", a basis matrix {happy, sad, angry, surprised, disgusted, fearful, calm} is obtained. When the emotion information input to the robot from outside is "happy", ei(t)={1,0,0,0,0,0,0}; when it is both "happy" and "surprised", ei(t)={1,0,0,1,0,0,0}.
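The basis-matrix encoding above can be sketched with a small helper; the helper itself is hypothetical (not from the patent) and merely builds the multi-hot vectors shown in the example:

```python
# Encode input emotions over the basis
# {happy, sad, angry, surprised, disgusted, fearful, calm}, as in ei(t).

EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful", "calm"]

def encode_emotions(*names):
    """Return a 1x7 multi-hot row for any combination of the seven emotions."""
    vec = [0] * len(EMOTIONS)
    for name in names:
        vec[EMOTIONS.index(name)] = 1   # raises ValueError on unknown names
    return vec

print(encode_emotions("happy"))               # [1, 0, 0, 0, 0, 0, 0]
print(encode_emotions("happy", "surprised"))  # [1, 0, 0, 1, 0, 0, 0]
```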
As can be seen from the above, these steps provide a scheme for computing the robot's state output data at the current moment from its state output data at the previous moment, its state data at the current moment, and its state result at the current moment, using the above calculation formula.

Optionally, in the above steps of this application, the proportional coefficient a is an arbitrary value between 0 and 1.

As can be seen from the above, the method of the foregoing embodiment fixes the value range of the proportional coefficient a, so that the robot's state output data at the current moment can be calculated according to the formula above.
Optionally, in step S1023 above, inputting the state output data and the state result of the previous moment into the preset state-result model to determine the state result at the current moment comprises calculating the robot's state result by the following formula:
c(t)=c(t-1)*eo(t-1)*w(t),
where c(t) is the robot's state result at the current moment, c(t-1) is its state result at the previous moment, eo(t-1) is its state output data at the previous moment, and w(t) is the weight function.

Specifically, in the above step, the weight function w(t) characterizes, during the robot's transition from the previous moment's state result to the current one, the probability of each emotion converting into any one or more emotions. In an example where the emotion system includes "happy, sad, angry, surprised, disgusted, fearful and calm", w(t) may be an n*n transfer matrix with n=7.

In an optional embodiment, the weight function may be determined by an expert system. Such an expert system may contain, for different situations, a table of probability factors with which each emotion converts into one or more other emotions during the robot's transition from one state result to another; the table may consist of empirical values obtained by simulating human emotional changes.

In another optional embodiment, the weight function of the previous moment's state result may be obtained by the factor-analysis weighting method: the contribution rate with which each emotion in the previous moment's state result converts into every other emotion is computed. For example, the contribution rate of "happy" converting into "calm" may be 25%, and that of "happy" converting into "fearful" may be 3%; the larger the contribution rate, the larger the corresponding value in the state matrix representing the weight function.
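A minimal sketch of the factor-analysis reading above: each emotion's contribution rates toward the other emotions are normalized into one row of the transfer matrix w(t). The rates below are illustrative only, not values specified by the patent:

```python
# Normalize raw contribution rates into one probability row of w(t).

def weight_row(contributions):
    """Scale a dict of raw contribution rates so the row sums to 1."""
    total = sum(contributions.values())
    return {emo: rate / total for emo, rate in contributions.items()}

# E.g. "happy" contributing 25% toward "calm" and 3% toward "fearful":
happy_row = weight_row({"happy": 0.60, "calm": 0.25, "fearful": 0.03,
                        "sad": 0.03, "angry": 0.03, "surprised": 0.03,
                        "disgusted": 0.03})
print(round(happy_row["calm"], 2))   # 0.25
```

Stacking one such row per emotion yields a full 7x7 transfer matrix whose rows each sum to 1, which is the probabilistic property the text attributes to w(t).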
The weight function of the robot's state result at the previous moment may be determined by any of the methods provided in the above embodiments, but is not limited to them.

As can be seen from the above, the foregoing embodiments give the calculation formula that can be used to compute the robot's state result and methods for determining the weight function, achieving the technical effect that the robot's state result can be calculated according to the formula above.

Optionally, in the above steps of this application, the state data and the state output data include any one, or a combination, of the following emotion information: happy, sad, angry, surprised, disgusted, fearful and calm.

As an optional embodiment, in an example where the robot's emotion information includes any one or a combination of happy, sad, angry, surprised, disgusted, fearful and calm, Table 1 gives example weight values of the emotion transfer matrix over the seven emotions, used to convert the robot's initial state result. As shown in Table 1, when the robot's initial state result is "happy", the weight of remaining "happy" at the next moment is 0.6, and the weight of converting to "calm" is 0.25.
Table 1

happy | sad | angry | surprised | disgusted | fearful | calm | |
happy | 0.6 | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 | 0.25 |
sad | 0.03 | 0.6 | 0.03 | 0.03 | 0.03 | 0.03 | 0.25 |
angry | 0.03 | 0.03 | 0.6 | 0.03 | 0.03 | 0.03 | 0.25 |
surprised | 0.03 | 0.03 | 0.03 | 0.6 | 0.03 | 0.03 | 0.25 |
disgusted | 0.03 | 0.03 | 0.03 | 0.03 | 0.6 | 0.03 | 0.25 |
fearful | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 | 0.6 | 0.25 |
calm | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 |
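Applying Table 1 as the weight matrix w(t) can be sketched as follows. Starting from a purely "happy" personality state, one update step redistributes probability mass according to the table's "happy" row; the row-vector-times-matrix interpretation is one plausible reading, since the patent leaves the product unspecified:

```python
# One personality update step using Table 1 as the transfer matrix w(t).

W = [  # rows/cols: happy, sad, angry, surprised, disgusted, fearful, calm
    [0.60, 0.03, 0.03, 0.03, 0.03, 0.03, 0.25],
    [0.03, 0.60, 0.03, 0.03, 0.03, 0.03, 0.25],
    [0.03, 0.03, 0.60, 0.03, 0.03, 0.03, 0.25],
    [0.03, 0.03, 0.03, 0.60, 0.03, 0.03, 0.25],
    [0.03, 0.03, 0.03, 0.03, 0.60, 0.03, 0.25],
    [0.03, 0.03, 0.03, 0.03, 0.03, 0.60, 0.25],
    [1 / 7] * 7,
]

def step(state, w):
    """Row vector `state` times matrix `w`: one transition of the state."""
    n = len(state)
    return [sum(state[i] * w[i][j] for i in range(n)) for j in range(n)]

c0 = [1, 0, 0, 0, 0, 0, 0]   # initial state result: happy
print(step(c0, W))           # the "happy" row of Table 1
```

Repeated applications of `step` let the mass drift toward "calm" (each non-calm row sends 0.25 there), which matches the table's reading that any strong emotion gradually settles.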
As can be seen from the above, the foregoing embodiment provides the optional emotion information for the robot's state data and state output data, so that each may be a single emotion or a combination of several emotions, which further makes the robot's emotion system more anthropomorphic.
Embodiment 2
Fig. 3 is a schematic structural diagram of an optional device for acquiring the state of a robot according to an embodiment of the present invention. For descriptive purposes, the depicted architecture is only one example of a suitable environment and places no limitation on the scope of use or functionality of this application; nor should the device for acquiring the state of a robot be interpreted as having any dependency on, or requirement for, any component or combination of components shown in Fig. 3.

As shown in Fig. 3, the device for acquiring the state of a robot may include a first determining module 30, a first acquiring module 32 and a second determining module 34, where:
The first determining module 30 is configured to determine the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment.

Specifically, in the above device, the state output data at the previous moment may be the emotion information the robot output at the previous moment; the state result accumulated before the previous moment may be the robot's personality information at the previous moment; and the robot's state result at the current moment may be its personality information at the current moment. A "moment" in the above embodiments may be a time at which the robot inputs or outputs emotion information.

It is worth noting that, since the robot's personality information at the current moment is determined by its state output data and state result at the previous moment, the robot's personality information can be regarded as accumulated from the stimuli of externally input emotion information over multiple moments. The robot initially has an initial state result, which can be regarded as its "character" in the initial state; as different emotion information is input from outside, this information is superimposed on the initial state result, forming a robot emotion system that better matches a human personality.
The first acquiring module 32 is configured to acquire the state data received at the current moment.

The second determining module 34 is configured to determine the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment;

where the state data represents externally input emotion information received by the robot, the state output data represents emotion information output by the robot, and the state result represents personality information formed by the robot accumulating at least one piece of emotion information during operation.

As can be seen from the above, in the foregoing embodiment, the first determining module determines the robot's state result at the current moment from its state output data and state result at the previous moment, the first acquiring module acquires the state data received at the current moment, and the second determining module determines the robot's state output data at the current moment from its state data at the current moment, its state output data at the previous moment, and its state result at the current moment. Because the state result is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment, the robot's state output data becomes more anthropomorphic and more accurate, which solves the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
Optionally, as shown in Fig. 4, in the device provided by the foregoing embodiment, the first determining module 30 includes:

The first acquiring sub-module 301, configured to acquire the robot's state output data at the previous moment and the state result at the previous moment.

Specifically, in the above device, the robot's state output data at the previous moment may be the emotion information the robot output at the previous moment, and the state result at the previous moment may be the robot's personality information at the previous moment, obtained through the evolution of the emotion information input at multiple moments before the previous moment.

The first determining sub-module 302, configured to input the state output data and the state result of the previous moment into a preset state-result model to determine the state result at the current moment, where the state-result model includes a weight function of the current moment determined by the state result of the previous moment.

Specifically, in the above solution, the preset state-result model may be a function related to time information, for example c(t)=f(w(t),c(t-1),eo(t-1)), where c(t) is the robot's state result at the current moment, w(t) is a weight function related to time information, c(t-1) is the robot's state result at the previous moment, and eo(t-1) is its state output data at the previous moment.

It is worth noting that the weight function w(t) can be understood as the importance of the previous moment's state result relative to the robot's state result at the current moment: the shorter the interval between the current moment and the previous moment, the larger the value of the weight function may be, meaning that the previous moment's state result influences the current state result more strongly than the previous moment's input does.

As can be seen from the above, in the foregoing embodiment, the first acquiring sub-module acquires the robot's state output data and state result at the previous moment and inputs them into the preset state-result model, and the first determining sub-module determines the state result at the current moment, thereby achieving the technical purpose of determining the robot's current state result from the previous moment's state output data and state result.
Optionally, as shown in Fig. 5, the device provided by the foregoing embodiment further includes:

The second acquiring module 50, configured to input the state output data and the state result of the moment before the previous moment into the state-result model to obtain the robot's state result at the previous moment.

It is worth noting that when the robot's emotion system runs for the first time and no state results from earlier moments are available, the robot has a preset initial state result.

As can be seen from the above, the device obtains the robot's state result at the previous moment by inputting, through the second acquiring module, the state output data and state result of the moment before it into the state-result model. This provides a way of obtaining the robot's state result so that, when computing the robot's state output data, the state results at the current moment and at multiple earlier moments are all taken into account, giving the robot an anthropomorphic process of personality evolution and further solving the prior-art technical problem that computing a robot's emotion system only from the currently input emotion and the previous moment's influence on the current emotion yields inaccurate results.
Optionally, as shown in Fig. 6, in the device provided by the foregoing embodiment, the second determining module 34 includes:

The second acquiring sub-module 341, configured to acquire the robot's state data at the current moment.

The second determining sub-module 343, configured to input the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into a preset state calculation formula, and determine the robot's state output data at the current moment.

As can be seen from the above, the second acquiring sub-module acquires the robot's state data at the current moment, and the second determining sub-module inputs the previous moment's state output data, the current state data, and the current state result into the preset state calculation formula to determine the robot's state output data at the current moment, so that the robot outputs the computed state output data.
Optionally, as shown in Fig. 7, in the device provided by the foregoing embodiment, the second determining sub-module 343 includes:

The first calculating unit 3431, configured to calculate the robot's state output data at the current moment by the following formula:

eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1),

where eo(t) is the robot's state output data at the current moment, c(t) is its state result at the current moment, a is a proportional coefficient, ei(t) is its state data at the current moment, and eo(t-1) is its state output data at the previous moment.

Specifically, in the above formula, a may be any value greater than 0 and less than 1, and determines, when computing the robot's state output data, the relative weights of the robot's state data and state result at the current moment versus its state output data and state result at the previous moment.

As can be seen from the above, a calculating unit is provided that computes the robot's state output data at the current moment from its state output data at the previous moment, its state data at the current moment, and its state result at the current moment, using the above calculation formula.
Optionally, in the device provided by the foregoing embodiment, the proportional coefficient a is an arbitrary value between 0 and 1.

As can be seen from the above, the foregoing embodiment fixes the value range of the proportional coefficient a, so that the robot's state output data at the current moment can be calculated according to the formula above.
Optionally, as shown in Fig. 8, in the device provided by the foregoing embodiment, the first determining sub-module 302 includes:

The second calculating unit 3021, configured to calculate the robot's state result by the following formula:

c(t)=c(t-1)*eo(t-1)*w(t),

where c(t) is the robot's state result at the current moment, c(t-1) is its state result at the previous moment, eo(t-1) is its state output data at the previous moment, and w(t) is the weight function.

As can be seen from the above, the foregoing embodiment gives the calculation formula by which the second calculating unit computes the robot's state result, achieving the technical effect that the robot's state result can be calculated according to the formula above.
Optionally, in the device provided by the foregoing embodiment, the state data and the state output data include any one, or a combination, of the following emotion information: happy, sad, angry, surprised, disgusted, fearful and calm.

As can be seen from the above, the foregoing embodiment provides the optional emotion information for the robot's state data and state output data, so that each may be a single emotion or a combination of several emotions, which further makes the robot's emotion system more anthropomorphic.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.

In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division into units may be a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. On this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (16)
- A method for acquiring the state of a robot, comprising: determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment; acquiring the state data received at the current moment; and determining the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment.
- The method according to claim 1, wherein determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment comprises: acquiring the robot's state output data at the previous moment and the state result at the previous moment; and inputting the state output data and the state result of the previous moment into a preset state-result model to determine the state result at the current moment, where the state-result model includes a weight function of the current moment determined by the state result of the previous moment.
- The method according to claim 1, wherein before determining the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, the method further comprises: inputting the state output data and the state result of the moment before the previous moment into the state-result model to obtain the robot's state result at the previous moment.
- The method according to any one of claims 1 to 3, wherein determining the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment comprises: acquiring the robot's state data at the current moment; and inputting the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into a preset state calculation formula to determine the robot's state output data at the current moment.
- The method according to claim 4, wherein inputting the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into the preset state calculation formula to determine its state output data at the current moment comprises calculating the robot's state output data at the current moment by the following formula: eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1), where eo(t) is the robot's state output data at the current moment, c(t) is its state result at the current moment, a is a proportional coefficient, ei(t) is its state data at the current moment, and eo(t-1) is its state output data at the previous moment.
- The method according to claim 5, wherein the proportional coefficient a is an arbitrary value between 0 and 1.
- The method according to claim 2, wherein inputting the state output data and the state result of the previous moment into the preset state-result model to determine the state result at the current moment comprises calculating the robot's state result by the following formula: c(t)=c(t-1)*eo(t-1)*w(t), where c(t) is the robot's state result at the current moment, c(t-1) is its state result at the previous moment, eo(t-1) is its state output data at the previous moment, and w(t) is the weight function.
- The method according to claim 1, wherein the state data and the state output data include any one, or a combination, of the following emotion information: happy, sad, angry, surprised, disgusted, fearful and calm.
- A device for acquiring the state of a robot, comprising: a first determining module, configured to determine the robot's state result at the current moment according to its state output data at the previous moment and the state result at the previous moment, where the state result at the previous moment is determined by the robot's initial state result at the initial moment and its state output data at multiple moments before the previous moment; a first acquiring module, configured to acquire the state data received at the current moment; and a second determining module, configured to determine the robot's state output data at the current moment according to its state data at the current moment, its state output data at the previous moment, and its state result at the current moment.
- The device according to claim 9, wherein the first determining module comprises: a first acquiring sub-module, configured to acquire the robot's state output data at the previous moment and the state result at the previous moment; and a first determining sub-module, configured to input the state output data and the state result of the previous moment into a preset state-result model to determine the state result at the current moment, where the state-result model includes a weight function of the current moment determined by the state result of the previous moment.
- The device according to claim 9, further comprising: a second acquiring module, configured to input the state output data and the state result of the moment before the previous moment into the state-result model to obtain the robot's state result at the previous moment.
- The device according to any one of claims 9 to 11, wherein the second determining module comprises: a second acquiring sub-module, configured to acquire the robot's state data at the current moment; and a second determining sub-module, configured to input the robot's state output data at the previous moment, its state data at the current moment, and its state result at the current moment into a preset state calculation formula to determine the robot's state output data at the current moment.
- The device according to claim 12, wherein the second determining sub-module comprises a first calculating unit, configured to calculate the robot's state output data at the current moment by the following formula: eo(t)=c(t)*a*ei(t)+c(t)*(1-a)*eo(t-1), where eo(t) is the robot's state output data at the current moment, c(t) is its state result at the current moment, a is a proportional coefficient, ei(t) is its state data at the current moment, and eo(t-1) is its state output data at the previous moment.
- The device according to claim 13, wherein the proportional coefficient a is an arbitrary value between 0 and 1.
- The device according to claim 10, wherein the first determining sub-module comprises a second calculating unit, configured to calculate the robot's state result by the following formula: c(t)=c(t-1)*eo(t-1)*w(t), where c(t) is the robot's state result at the current moment, c(t-1) is its state result at the previous moment, eo(t-1) is its state output data at the previous moment, and w(t) is the weight function.
- The device according to claim 9, wherein the state data and the state output data include any one, or a combination, of the following emotion information: happy, sad, angry, surprised, disgusted, fearful and calm.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511033145.2A CN106926236B (zh) | 2015-12-31 | 2015-12-31 | 获取机器人的状态的方法和装置 |
CN201511033145.2 | 2015-12-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017114130A1 true WO2017114130A1 (zh) | 2017-07-06 |
Family
ID=59224513
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/109111 WO2017114130A1 (zh) | 2015-12-31 | 2016-12-09 | 获取机器人的状态的方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106926236B (zh) |
WO (1) | WO2017114130A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113524179A (zh) * | 2021-07-05 | 2021-10-22 | 上海仙塔智能科技有限公司 | 基于情绪累积数值的控制方法、装置、设备以及介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1338980A (zh) * | 1999-11-30 | 2002-03-06 | 索尼公司 | 机器人设备及其控制方法,和机器人性格判别方法 |
US20080059393A1 (en) * | 2006-09-05 | 2008-03-06 | Samsung Electronics, Co., Ltd. | Method for changing emotion of software robot |
CN101241561A (zh) * | 2007-02-08 | 2008-08-13 | 三星电子株式会社 | 表现软件机器人的行为的设备和方法 |
CN101599137A (zh) * | 2009-07-15 | 2009-12-09 | 北京工业大学 | 自治操作条件反射自动机及在实现智能行为中的应用 |
CN101692261A (zh) * | 2009-09-02 | 2010-04-07 | 北京科技大学 | 一种运用于儿童用户玩伴机器人的个性化情感模型及其应用方法 |
CN103218654A (zh) * | 2012-01-20 | 2013-07-24 | 沈阳新松机器人自动化股份有限公司 | 一种机器人情绪情感生成与表达系统 |
-
2015
- 2015-12-31 CN CN201511033145.2A patent/CN106926236B/zh not_active Expired - Fee Related
-
2016
- 2016-12-09 WO PCT/CN2016/109111 patent/WO2017114130A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1338980A (zh) * | 1999-11-30 | 2002-03-06 | 索尼公司 | 机器人设备及其控制方法,和机器人性格判别方法 |
US20080059393A1 (en) * | 2006-09-05 | 2008-03-06 | Samsung Electronics, Co., Ltd. | Method for changing emotion of software robot |
CN101241561A (zh) * | 2007-02-08 | 2008-08-13 | 三星电子株式会社 | 表现软件机器人的行为的设备和方法 |
CN101599137A (zh) * | 2009-07-15 | 2009-12-09 | 北京工业大学 | 自治操作条件反射自动机及在实现智能行为中的应用 |
CN101692261A (zh) * | 2009-09-02 | 2010-04-07 | 北京科技大学 | 一种运用于儿童用户玩伴机器人的个性化情感模型及其应用方法 |
CN103218654A (zh) * | 2012-01-20 | 2013-07-24 | 沈阳新松机器人自动化股份有限公司 | 一种机器人情绪情感生成与表达系统 |
Non-Patent Citations (1)
Title |
---|
WANG, WEI;: "Research of Robot Learning System Based on Affective Computing", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE , CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 December 2011 (2011-12-15), pages I140 - 337-42 to I140-337-50 * |
Also Published As
Publication number | Publication date |
---|---|
CN106926236B (zh) | 2020-06-30 |
CN106926236A (zh) | 2017-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gu et al. | Fitting the generalized multinomial logit model in Stata | |
Chow et al. | Dynamic factor analysis models with time-varying parameters | |
US10296827B2 (en) | Data category identification method and apparatus based on deep neural network | |
US20200126539A1 (en) | Speech recognition using convolutional neural networks | |
EP2866421B1 (en) | Method and apparatus for identifying a same user in multiple social networks | |
CN109409125B (zh) | 一种提供隐私保护的数据采集和回归分析方法 | |
JP2019511033A5 (zh) | ||
CN109614874B (zh) | 一种基于注意力感知和树形骨架点结构的人体行为识别方法和系统 | |
Sun et al. | Dynamic emotion modelling and anomaly detection in conversation based on emotional transition tensor | |
US20190122117A1 (en) | Learning device, non-transitory computer readable storage medium, and learning method | |
Martin et al. | Approximate Bayesian computation for smoothing | |
CN110059155A (zh) | 文本相似度的计算、智能客服系统的实现方法和装置 | |
US20200176019A1 (en) | Method and system for recognizing emotion during call and utilizing recognized emotion | |
CN107895027A (zh) | 个性情感知识图谱建立方法及装置 | |
WO2018113260A1 (zh) | 情绪表达的方法、装置和机器人 | |
WO2017114130A1 (zh) | 获取机器人的状态的方法和装置 | |
US20150339419A1 (en) | Efficient power grid analysis on multiple cpu cores with states elimination | |
KR102280489B1 (ko) | 대규모 사전학습 모델을 학습하여 지성을 기반으로 대화하는 대화 지능 획득 방법 | |
Negri et al. | Z-process method for change point problems with applications to discretely observed diffusion processes | |
CN112783949B (zh) | 人体数据预测方法、装置、电子设备和存储介质 | |
JP2016110284A (ja) | 情報処理システム、情報処理方法、及び、プログラム | |
Schubert et al. | 3D Virtuality Sketching: Interactive 3D-sketching based on real models in a virtual scene | |
JP6964481B2 (ja) | 学習装置、プログラムおよび学習方法 | |
Li et al. | Optimal zone for bandwidth selection in semiparametric models | |
WO2017114132A1 (zh) | 机器人情绪的控制方法和装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16880920 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.11.2018) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16880920 Country of ref document: EP Kind code of ref document: A1 |