WO2016132729A1 - Robot control device, robot, robot control method, and program recording medium - Google Patents
Robot control device, robot, robot control method, and program recording medium
- Publication number
- WO2016132729A1 (PCT/JP2016/000775)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- person
- action
- reaction
- user
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 22
- 230000009471 action Effects 0.000 claims abstract description 101
- 238000006243 chemical reaction Methods 0.000 claims abstract description 95
- 238000001514 detection method Methods 0.000 claims abstract description 83
- 230000008569 process Effects 0.000 claims description 7
- 238000012545 processing Methods 0.000 claims description 5
- 230000004044 response Effects 0.000 abstract description 6
- 230000007704 transition Effects 0.000 description 25
- 230000006870 function Effects 0.000 description 24
- 210000003128 head Anatomy 0.000 description 23
- 238000010586 diagram Methods 0.000 description 17
- 238000004590 computer program Methods 0.000 description 9
- 230000007257 malfunction Effects 0.000 description 6
- 230000000694 effects Effects 0.000 description 5
- 230000008921 facial expression Effects 0.000 description 5
- 238000013459 approach Methods 0.000 description 4
- 230000007613 environmental effect Effects 0.000 description 3
- 230000005012 migration Effects 0.000 description 3
- 238000013508 migration Methods 0.000 description 3
- 230000006399 behavior Effects 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 1
- 238000004140 cleaning Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 210000000887 face Anatomy 0.000 description 1
- 230000010255 response to auditory stimulus Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Images
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/026—Acoustical sensing devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
Definitions
- The present invention relates to a technique for controlling the transition of a robot to a mode in which it listens to a user's speech.
- Such robots are controlled to operate naturally while transitioning among multiple operation modes, for example, an autonomous mode in which the robot operates autonomously, a standby mode in which it neither operates autonomously nor listens to human speech, and a speech listening mode in which it listens to human speech.
- It is preferable for a person who is a user of a robot to be able to talk to the robot freely at whatever timing he or she wants.
- One way to achieve this is for the robot to always keep listening to the user's speech (that is, to always operate in the speech listening mode).
- In that case, however, the robot may malfunction in response to sounds not intended by the user, for example, environmental sounds such as a nearby television or conversations with other people.
- Patent Document 1 discloses a transition model of an operation state in a robot.
- Patent Document 2 discloses a robot that reduces the occurrence of malfunctions by improving the accuracy of voice recognition.
- Patent Document 3 discloses a robot control method that attracts a person's attention and interest to the robot through a call or gesture while suppressing the sense of compulsion felt by the person.
- Patent Document 4 discloses a robot that can autonomously control its behavior according to the surrounding environment, the situation of a person, and the reaction from the person.
- Patent Document 1: JP-T-2014-502565; Patent Document 2: JP 2007-155985 A; Patent Document 3: JP 2013-099800 A; Patent Document 4: JP 2008-254122 A
- In view of this, it is conceivable to equip the robot with a function that starts listening to general utterances when triggered by a button press or by recognition of a keyword utterance from the user.
- In Patent Document 1, when the robot transitions from a self-oriented mode, in which it executes tasks not based on user input, to a participation mode in which the user is involved, the robot observes the user's behavior and state and transitions based on the result of that analysis.
- However, Patent Document 1 does not disclose a technique for accurately capturing the user's intention and shifting to the utterance listening mode without requiring a complicated operation from the user.
- the robot described in Patent Document 2 includes a camera, a human detection sensor, a voice recognition unit, and the like, and determines whether there is a person based on information obtained from the camera or the human detection sensor.
- When it determines that a person is present, the result of speech recognition by the speech recognition unit is validated.
- In this case, however, the result of speech recognition is validated regardless of whether or not the user wants to talk, so there is a risk that the robot will perform an action against the user's intention.
- Patent Documents 3 and 4 disclose, respectively, a robot that performs an operation to attract the user's attention and interest and a robot that performs an action according to the situation of a person, but neither discloses a technique for determining when to start listening to an utterance.
- the present invention has been made in view of the above problems, and has as its main object to provide a robot control device and the like that improve the accuracy of the start of utterance listening without requiring an operation from the user.
- In order to achieve the above object, the first robot control apparatus of the present invention includes: action execution means that, when a person is detected, determines an action to be performed on the person and controls the robot to execute the action; determination means that, when a reaction from the person to the action determined by the action execution means is detected, determines, based on the reaction, the possibility that the person will talk to the robot; and operation control means that controls an operation mode of the robot based on the result of the determination by the determination means.
- In the first robot control method of the present invention, when a person is detected, an action to be performed on the person is determined and the robot is controlled to execute the action; when a reaction from the person to the determined action is detected, the possibility that the person will talk to the robot is determined based on the reaction; and the operation mode of the robot is controlled based on the result of the determination.
- The above object is also achieved by a computer program that realizes, by a computer, the robot having the above-described configuration or the robot control method, and by a computer-readable recording medium storing the computer program.
- FIG. 1 is a diagram showing an external configuration example of a robot 100 according to a first embodiment of the present invention and a person 20 who is a user of the robot.
- the robot 100 includes, for example, a robot body including a body part 210 and a head part 220, arm parts 230, and leg parts 240 movably connected to the body part 210.
- the head 220 includes a microphone 141, a camera 142, and a facial expression display 152.
- the body part 210 includes a speaker 151, a human detection sensor 143, and a distance sensor 144.
- Although in this example the microphone 141, the camera 142, and the facial expression display 152 are provided on the head 220, and the speaker 151, the human detection sensor 143, and the distance sensor 144 are provided on the body part 210, the present invention is not limited to this arrangement.
- Person 20 is a user of the robot 100. In the present embodiment, it is assumed that there is one person 20 as a user near the robot 100.
- FIG. 2 is a diagram illustrating the internal hardware configuration of the robot 100 according to the first embodiment and the following embodiments.
- the robot 100 includes a processor 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an I / O (Input / Output) device 13, a storage 14, and a reader / writer 15.
- Each component is connected via a bus 17 to transmit / receive data to / from each other.
- the processor 10 is realized by an arithmetic processing device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit).
- the processor 10 controls the overall operation of the robot 100 by reading various computer programs stored in the ROM 12 or the storage 14 into the RAM 11 and executing them. That is, in this embodiment and the embodiments described below, the processor 10 executes a computer program that executes each function (each unit) included in the robot 100 while referring to the ROM 12 or the storage 14 as appropriate.
- the I / O device 13 includes an input device such as a microphone and an output device such as a speaker (details will be described later).
- the storage 14 may be realized by a storage device such as a hard disk, an SSD (Solid State Drive), or a memory card.
- The reader/writer 15 has a function of reading and writing data stored in the recording medium 16 such as a CD-ROM (Compact Disc Read Only Memory).
- FIG. 3 is a functional block diagram for realizing the functions of the robot 100 according to the first embodiment.
- the robot 100 includes a robot control device 101, an input device 140, and an output device 150.
- the robot control apparatus 101 is an apparatus that controls the operation of the robot 100 by receiving information from the input device 140, performing processing described later, and issuing an instruction to the output device 150.
- the robot control apparatus 101 includes a detection unit 110, a migration determination unit 120, a migration control unit 130, and a storage unit 160.
- the detection unit 110 includes a person detection unit 111 and a reaction detection unit 112.
- the transition determination unit 120 includes a control unit 121, an action determination unit 122, a drive instruction unit 123, and an estimation unit 124.
- the storage unit 160 includes human detection pattern information 161, reaction pattern information 162, action information 163, and determination criterion information 164.
- the input device 140 includes a microphone 141, a camera 142, a human detection sensor 143, and a distance sensor 144.
- the output device 150 includes a speaker 151, an expression display 152, a head drive circuit 153, an arm drive circuit 154, and a leg drive circuit 155.
- The robot 100 is controlled by the robot control apparatus 101 to operate while shifting among a plurality of operation modes, such as an autonomous mode in which it operates autonomously, a standby mode in which it neither operates autonomously nor listens to a person's utterances, and an utterance listening mode in which it listens to a person's utterances. For example, in the utterance listening mode, the robot 100 accepts acquired voice as a command and operates in accordance with the command. In the following description, control for shifting the robot 100 from the autonomous mode to the utterance listening mode will be described as an example.
- the autonomous mode or the standby mode may be referred to as a second mode, and the speech listening mode may be referred to as a first mode.
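- As a rough picture of this mode handling, the following minimal Python sketch models the three modes and the transition into the listening mode; the patent does not prescribe any implementation language, and the class and method names here are hypothetical.

```python
from enum import Enum, auto


class OperationMode(Enum):
    AUTONOMOUS = auto()        # second mode: acts on its own, ignores voice commands
    STANDBY = auto()           # second mode: neither acts autonomously nor listens
    SPEECH_LISTENING = auto()  # first mode: accepts acquired voice as commands


class TransitionControl:
    """Loose analogue of the transition control unit 130."""

    def __init__(self, initial_mode: OperationMode = OperationMode.AUTONOMOUS) -> None:
        self.mode = initial_mode

    def shift_to_listening(self) -> None:
        # Called only after it is determined that the person may talk to the robot.
        self.mode = OperationMode.SPEECH_LISTENING

    def handle_voice(self, utterance: str) -> None:
        # Acquired voice is treated as a command only in the speech listening mode.
        if self.mode is OperationMode.SPEECH_LISTENING:
            print(f"executing command: {utterance}")
        # In the autonomous or standby mode the utterance is ignored here.
```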
- the microphone 141 of the input device 140 has a function of listening to human voices and capturing surrounding sounds.
- the camera 142 is mounted at a position corresponding to any eye of the robot 100, for example, and has a function of photographing the surroundings.
- the human detection sensor 143 has a function of detecting that a person is nearby.
- the distance sensor 144 has a function of measuring a distance from a person or an object.
- Here, the surroundings or the vicinity means, for example, a range within which the voice of a person or a television can be acquired by the microphone 141, or a range within which a person or an object can be detected from the robot 100 by an infrared sensor, an ultrasonic sensor, or the like.
- the human detection sensor 143 can use a plurality of types of sensors such as a pyroelectric infrared sensor and an ultrasonic sensor.
- the distance sensor 144 a plurality of types of sensors such as a sensor using ultrasonic waves and a sensor using infrared rays can be used. The same sensor may be used as the human detection sensor 143 and the distance sensor 144.
- an image captured by the camera 142 may be analyzed by software so as to play a similar role.
- the speaker 151 of the output device 150 has a function of emitting a voice when the robot 100 talks to a person.
- The facial expression display 152 includes, for example, a plurality of LEDs (Light Emitting Diodes) mounted at positions corresponding to the robot's cheeks and mouth, and has a function of producing expressions such as smiling or thinking by changing how the LEDs are lit.
- the head drive circuit 153, arm drive circuit 154, and leg drive circuit 155 are circuits that drive the head 220, arm 230, and leg 240, respectively, so as to perform predetermined operations.
- the person detection unit 111 of the detection unit 110 detects that a person has come near the robot 100 based on information from the input device 140.
- The reaction detection unit 112 detects a person's reaction to an action performed by the robot, based on information from the input device 140.
- the transition determination unit 120 determines whether to shift the robot 100 to the utterance listening mode based on the result of human detection or reaction detection by the detection unit 110.
- the control unit 121 notifies the action determination unit 122 or the estimation unit 124 of the information acquired from the detection unit 110.
- the action determination unit 122 determines the type of action (action) that the robot 100 performs on the person.
- The drive instruction unit 123 gives drive instructions to at least one of the speaker 151, the facial expression display 152, the head drive circuit 153, the arm drive circuit 154, and the leg drive circuit 155 so that the action determined by the action determination unit 122 is executed.
- the estimation unit 124 estimates whether or not the person 20 is willing to talk to the robot 100 based on the reaction of the person 20 who is the user.
- When it is determined that the person may talk to the robot, the transition control unit 130 controls the operation mode so that the robot 100 shifts to the utterance listening mode in which the person's utterance can be heard.
- FIG. 4 is a flowchart showing the operation of the robot control apparatus 101 shown in FIG. The operation of the robot control apparatus 101 will be described with reference to FIGS. 3 and 4. Here, it is assumed that the robot control apparatus 101 controls the robot 100 to operate in the autonomous mode.
- the human detection unit 111 of the detection unit 110 acquires information from the microphone 141, the camera 142, the human detection sensor 143, and the distance sensor 144 of the input device 140.
- the human detection unit 111 detects that the human 20 has approached the robot 100 based on the result of analyzing the acquired information and the human detection pattern information 161 (S201).
- FIG. 5 is a diagram illustrating an example of a detection pattern of the person 20 by the person detection unit 111 included in the person detection pattern information 161.
- The detection patterns are, for example, "the human detection sensor 143 has detected something person-like", "the distance sensor 144 has detected an object moving within a certain distance range", "the camera 142 has captured something that looks like a person's face", "the microphone 141 has picked up a sound estimated to be a human voice", or a combination of these.
- the person detection unit 111 detects that a person has come close when the result of analyzing the information acquired from the input device 140 matches at least one of these.
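- As an illustration of this "matches at least one pattern" logic, the sketch below expresses each detection pattern of FIG. 5 as a predicate over the analyzed sensor observations; the observation keys and data structures are assumptions made for the example, not part of the patent.

```python
from typing import Callable, Dict, List

# Each detection pattern of FIG. 5 becomes a predicate over the analyzed
# observations; a pattern may also combine several conditions.
DETECTION_PATTERNS: List[Callable[[Dict[str, bool]], bool]] = [
    lambda obs: obs.get("human_sensor_person_like", False),    # human detection sensor 143
    lambda obs: obs.get("distance_moving_object", False),      # distance sensor 144
    lambda obs: obs.get("camera_face_like", False),            # camera 142
    lambda obs: obs.get("mic_voice_like", False),              # microphone 141
    lambda obs: obs.get("camera_face_like", False)
    and obs.get("distance_moving_object", False),              # example of a combination
]


def person_detected(observations: Dict[str, bool]) -> bool:
    """A person is considered detected when at least one pattern matches."""
    return any(pattern(observations) for pattern in DETECTION_PATTERNS)
```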
- The person detection unit 111 continues the above detection until it detects that a person is approaching. When a person is detected (Yes in S202), the person detection unit 111 notifies the transition determination unit 120 to that effect. When the transition determination unit 120 receives the notification, the control unit 121 instructs the action determination unit 122 to determine the type of action. In response to the instruction, the action determination unit 122 determines, based on the action information 163, the type of action that the robot 100 performs on the user (S203).
- The purpose of the action is to confirm, based on the user's reaction to the movement (action) of the robot 100, whether or not the user 20 who has approached the robot 100 is willing to speak to it.
- To execute the action, the drive instruction unit 123 gives instructions to at least one of the speaker 151, the facial expression display 152, the head drive circuit 153, the arm drive circuit 154, and the leg drive circuit 155 of the robot 100. As a result, the drive instruction unit 123 makes the robot 100 move, controls the sound output from the robot 100, or changes the facial expression of the robot 100. In this way, the action determination unit 122 and the drive instruction unit 123 control the robot 100 to execute an action that stimulates the user and draws out (induces) a reaction from the user.
- FIG. 6 is a diagram illustrating examples of types of actions determined by the action determination unit 122 included in the action information 163.
- The action determination unit 122 determines, as the action, for example, "turn the head 220 toward the user", "speak to the user (e.g., 'turn to me if you want to talk')", "nod by moving the head 220", "change the facial expression", "move the arm 230 to beckon the user", "move the legs 240 to approach the user", or a combination of these. For example, if the user 20 wants to talk to the robot 100, it can be assumed that the user 20 is likely to turn toward the robot 100 as a reaction when the robot 100 turns toward the user 20.
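- The action types of FIG. 6 can be pictured as a small table from which the action determination unit 122 picks an entry and which the drive instruction unit 123 dispatches to the output devices. The sketch below is only illustrative; the random selection strategy and the function names are assumptions.

```python
import random

# Action candidates corresponding to FIG. 6, mapped to the output devices
# that would have to be driven to execute them.
ACTION_INFO = {
    "turn_head_to_user": ["head_drive_circuit"],
    "speak_to_user":     ["speaker"],            # e.g. "turn to me if you want to talk"
    "nod":               ["head_drive_circuit"],
    "change_expression": ["expression_display"],
    "beckon_user":       ["arm_drive_circuit"],
    "approach_user":     ["leg_drive_circuit"],
}


def decide_and_execute_action(drive, actions=ACTION_INFO):
    """Pick one action (here simply at random) and instruct the relevant devices."""
    name, devices = random.choice(list(actions.items()))
    for device in devices:
        drive(device, name)  # 'drive' stands in for the drive instruction unit 123
    return name
```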
- the reaction detection unit 112 acquires information from the microphone 141, the camera 142, the human detection sensor 143, and the distance sensor 144 of the input device 140.
- the reaction detection unit 112 detects the reaction of the user 20 with respect to the action of the robot 100 based on the analysis result of the acquired information and the reaction pattern information 162 (S204).
- FIG. 7 is a diagram illustrating an example of a reaction pattern detected by the reaction detection unit 112 included in the reaction pattern information 162.
- The reaction patterns are, for example, "the user 20 turns his or her face toward the robot 100 (looks at the face of the robot 100)", "the user 20 speaks to the robot 100", "the user 20 moves his or her mouth", "the user 20 stops", "the user 20 comes closer", or a combination of these reactions.
- the reaction detection unit 112 determines that a reaction has been detected when the result of analyzing the information acquired from the input device 140 matches at least one of these.
- the reaction detection unit 112 notifies the transition determination unit 120 of the detection result of the reaction.
- The transition determination unit 120 receives the notification at the control unit 121.
- When a reaction is detected, the control unit 121 instructs the estimation unit 124 to estimate the intention of the user 20 based on the reaction.
- When no reaction is detected, the control unit 121 returns the process to S201 of the person detection unit 111, and when the person detection unit 111 detects a person again, it again instructs the action determination unit 122 to determine an action. In this way, the action determination unit 122 tries again to draw a reaction from the user 20.
- the estimation unit 124 estimates whether the user 20 has an intention to speak to the robot 100 based on the reaction of the user 20 and the determination criterion information 164 (S206).
- FIG. 8 is a diagram illustrating an example of the criterion information 164 that the estimation unit 124 refers to in order to estimate the user's intention.
- The determination criterion information 164 includes, for example, "the user 20 approaches within a certain distance and looks at the face of the robot 100", "the user 20 looks at the face of the robot 100 and moves his or her mouth", "the user 20 stops and utters a voice", or other preset combinations of user reactions.
- the estimation unit 124 can estimate that the user 20 has an intention to talk to the robot 100 when the reaction detected by the reaction detection unit 112 matches at least one of the information included in the criterion information 164. That is, in this case, the estimation unit 124 determines that the user 20 has a possibility of speaking to the robot 100 (Yes in S207).
- When the estimation unit 124 determines that the user 20 may speak to the robot 100, it instructs the transition control unit 130 to shift to the utterance listening mode in which the utterance of the user 20 can be heard (S208).
- The transition control unit 130 controls the robot 100 to shift to the utterance listening mode in response to the instruction.
- When the estimation unit 124 determines that there is no possibility that the user 20 will speak to the robot 100, the transition control unit 130 ends the process without changing the operation mode of the robot 100. That is, even if the presence of a person is detected, for example because the microphone 141 picks up a sound estimated to be a human voice, the transition control unit 130 does not shift the robot 100 to the utterance listening mode when the estimation unit 124 determines from the person's reaction that there is no possibility of the person talking to the robot 100. This prevents malfunctions such as the robot 100 reacting to a conversation between the user and another person.
- When the estimation unit 124 cannot completely conclude that the user 20 intends to talk, that is, when the result is neither a clear yes nor a clear no, the process returns to S201 of the person detection unit 111. In this case, when the person detection unit 111 detects a person again, the action determination unit 122 determines an action again, and the drive instruction unit 123 controls the robot 100 to execute the determined action. In this way, a further reaction can be drawn from the user 20 and the accuracy of the estimation can be improved.
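- The estimation just described, including the inconclusive case, amounts to a three-way decision. The sketch below is one possible reading of it; the criterion entries follow FIG. 8, while the treatment of partial matches as the inconclusive case is an assumption made for illustration.

```python
from enum import Enum, auto


class Intent(Enum):
    WILL_TALK = auto()   # shift to the utterance listening mode
    WONT_TALK = auto()   # keep the current operation mode
    UNDECIDED = auto()   # elicit a further reaction and re-detect


# Determination criterion information 164 (FIG. 8): each entry is a set of
# reaction labels that must all be observed.
CRITERIA = [
    {"approached_close", "looked_at_face"},
    {"looked_at_face", "moved_mouth"},
    {"stopped", "uttered_voice"},
]


def estimate_intent(reactions: set) -> Intent:
    if any(criterion <= reactions for criterion in CRITERIA):
        return Intent.WILL_TALK
    # A partial match (some, but not all, reactions of a criterion) is treated
    # here as the "cannot completely conclude" case of the text.
    if any(criterion & reactions for criterion in CRITERIA):
        return Intent.UNDECIDED
    return Intent.WONT_TALK
```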
- As described above, in the first embodiment, when a person is detected, the action determination unit 122 determines an action that induces a reaction from the user 20, and the drive instruction unit 123 controls the robot 100 to execute the determined action.
- The estimation unit 124 then estimates whether or not the user 20 intends to talk to the robot by analyzing the reaction of the user 20 to the executed action. When it is determined as a result that the user 20 may talk to the robot, the transition control unit 130 controls the robot 100 to shift to the mode for listening to the utterance of the user 20.
- In this way, the robot control apparatus 101 controls the robot 100 to shift to the utterance listening mode so that it can respond to an utterance made at the timing at which the user wants to speak, without requiring a troublesome operation from the user 20. Therefore, according to the first embodiment, the accuracy of the start of utterance listening can be improved with good operability. Furthermore, according to the first embodiment, the robot control apparatus 101 controls the robot 100 to shift to the speech listening mode only when it determines, based on the reaction of the user 20, that the user 20 intends to talk to the robot, so malfunctions caused by the voice of a television or by conversations with surrounding people can be prevented.
- In addition, when the robot control apparatus 101 cannot detect enough of a reaction from the user 20 to determine that the user 20 intends to talk, it acts on the user 20 again. An additional reaction is thereby drawn from the user 20 and the intention is determined based on the result, which has the effect of improving the accuracy of the mode transition.
- FIG. 9 is a diagram showing an external configuration example of a robot 300 according to the second embodiment of the present invention and people 20-1 to 20-n who are users of the robot.
- In the robot 100 described in the first embodiment, the head 220 includes one camera 142. In contrast, in the robot 300 according to the second embodiment, two cameras 142 and 145 are provided on the head 220 at positions corresponding to both eyes of the robot 300.
- FIG. 9 shows that n people (n is an integer of 2 or more) 20-1 to 20-n exist near the robot 300.
- FIG. 10 is a functional block diagram for realizing the functions of the robot 300 according to the second embodiment.
- The robot 300 includes a robot control apparatus 102 and an input device 146 in place of, respectively, the robot control apparatus 101 and the input device 140 included in the robot 100 described in the first embodiment with reference to FIG. 3.
- The robot control apparatus 102 includes a presence detection unit 113, a count unit 114, and score information 165 in addition to the components of the robot control apparatus 101.
- the input device 146 includes a camera 145 in addition to the input device 140.
- the presence detection unit 113 has a function of detecting that a person is nearby, and corresponds to the person detection unit 111 described in the first embodiment.
- the counting unit 114 has a function of counting the number of people nearby.
- the count unit 114 also has a function of detecting where each person is based on information from the cameras 142 and 145.
- The score information 165 holds, for each user, a score calculated according to the user's reactions (details will be described later).
- Other components shown in FIG. 10 have the same functions as those described in the first embodiment.
- FIG. 11 is a flowchart showing the operation of the robot control apparatus 102 shown in FIG. The operation of the robot control apparatus 102 will be described with reference to FIGS. 10 and 11.
- the presence detection unit 113 of the detection unit 110 acquires information from the microphone 141, the cameras 142 and 145, the human detection sensor 143, and the distance sensor 144 of the input device 146.
- the presence detection unit 113 detects whether one or more of the people 20-1 to 20-n are nearby based on the result of analyzing the acquired information and the person detection pattern information 161. (S401).
- the presence detection unit 113 may determine whether or not a person is nearby based on the person detection pattern information 161 illustrated in FIG. 5 in the first embodiment.
- the presence detection unit 113 continues the above-described detection until it detects that any person is nearby, and when it detects a person (Yes in S402), notifies the count unit 114 accordingly.
- the counting unit 114 analyzes the images acquired from the cameras 142 and 145 to detect the number and places of people nearby (S403). For example, the counting unit 114 can count the number of people by extracting a person's face from images acquired from the cameras 142 and 145 and counting the number of faces.
- If the presence detection unit 113 detects that a person is nearby but the count unit 114 cannot extract a human face from the images acquired by the cameras 142 and 145, it is conceivable, for example, that the microphone has picked up a sound estimated to be the voice of a person behind the robot 300. In this case, the count unit 114 may instruct the drive instruction unit 123 of the transition determination unit 120 to drive the head drive circuit 153 so as to move the head to a position where an image of the person can be acquired by the cameras 142 and 145, after which the cameras 142 and 145 may acquire images. In the present embodiment, it is assumed that n people have been detected.
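- As one possible realization of the face-counting step (S403), the count unit could run an off-the-shelf face detector over each camera image, as in the sketch below. The use of OpenCV's Haar cascade detector is purely an assumption for the example; the patent does not name any detection method or library, and de-duplication of a person visible in both cameras is omitted.

```python
import cv2

# A bundled frontal-face Haar cascade; any face detector could be substituted.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def count_people(images):
    """Return the number of detected faces and their bounding boxes.

    'images' is an iterable of BGR frames, e.g. one from camera 142 and one
    from camera 145. A person seen by both cameras is counted twice here.
    """
    faces = []
    for image in images:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        detected = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        faces.extend(list(detected))
    return len(faces), faces
```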
- The count unit 114 notifies the transition determination unit 120 of the detected number of people and their locations.
- the control unit 121 instructs the action determination unit 122 to determine an action.
- In response to the instruction, the action determination unit 122 determines the type of action to perform on the users in order to confirm whether any of the nearby users is willing to talk to the robot 300 (S404).
- FIG. 12 is a diagram illustrating examples of types of actions determined by the action determination unit 122 included in the action information 163 according to the second embodiment.
- The action determination unit 122 determines, as the action to be executed, for example, "move the head 220 to look around at the users", "speak to the users (e.g., 'turn to me if you want to talk about something')", "nod by moving the head 220", "change the facial expression", "beckon each user by moving the arm 230", "move the legs 240 to approach each user in turn", or a combination of these.
- the action information 163 shown in FIG. 12 differs from the action information 163 shown in FIG. 6 in that a plurality of users are assumed.
- the reaction detection unit 112 acquires information from the microphone 141, the cameras 142 and 145, the human detection sensor 143, and the distance sensor 144 of the input device 146.
- the reaction detection unit 112 detects the reaction of the users 20-1 to 20-n with respect to the action of the robot 300 based on the analysis result of the acquired information and the reaction pattern information 162 (S405).
- FIG. 13 is a diagram illustrating an example of a reaction pattern detected by the reaction detection unit 112 included in the reaction pattern information 162 included in the robot 300.
- The reaction patterns include, for example, "one of the users turned his or her face toward the robot (looked at the robot's face)", "one of the users moved his or her mouth", "one of the users stopped", "one of the users came closer", or a combination of these reactions.
- the reaction detection unit 112 detects each reaction of a plurality of people nearby by analyzing the camera image.
- the reaction detection unit 112 can also determine the approximate distance between the robot 300 and each of a plurality of users by analyzing the images acquired from the two cameras 142 and 145.
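- The patent does not specify how the distance is derived from the two images; one common approach is stereo triangulation, sketched below under the assumption of rectified cameras with a known focal length and baseline.

```python
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Approximate distance via triangulation: Z = f * B / d.

    f is the focal length in pixels, B the distance between the two cameras
    in metres, and d the horizontal disparity (in pixels) of the same face
    observed in the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px
```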
- the reaction detection unit 112 notifies the transition determination unit 120 of the detection result of the reaction.
- The transition determination unit 120 receives the notification at the control unit 121. If the reaction of any person is detected (Yes in S406), the control unit 121 instructs the estimation unit 124 to estimate the intention of the user whose reaction was detected. On the other hand, if no reaction is detected from any person (No in S406), the control unit 121 returns the process to S401 of the presence detection unit 113, and when a person is detected again, it again instructs the action determination unit 122 to determine an action. In this way, the action determination unit 122 tries again to draw a reaction from a user.
- The estimation unit 124 determines whether or not there is a user who intends to talk to the robot 300, and, if a plurality of users have such an intention, determines who is most likely to speak (S407).
- the estimation unit 124 in the second embodiment scores one or more reactions performed by each user in order to determine which user is likely to speak to the robot 300.
- FIG. 14 is a diagram illustrating an example of the criterion information 164 that the estimation unit 124 according to the second embodiment refers to in order to estimate the user's intention.
- the determination criterion information 164 in the second embodiment includes a reaction pattern serving as a determination criterion and a score (point) assigned to each reaction pattern.
- In the second embodiment, each user's reactions are weighted and scored in order to determine which user is most likely to talk to the robot.
- FIG. 15 is a diagram illustrating an example of the score information 165 in the second embodiment. As shown in FIG. 15, for example, when the reaction of the user 20-1 is "approached within 1 m and turned his or her face toward the robot 300", the score is calculated as a total of 12 points: 7 points for "approached within 1 m" and 5 points for "looked at the robot's face".
- Similarly, for example, when a user's reaction is "approached within 2 m and stopped", the score is calculated as a total of 6 points: 3 points for "approached within 2 m" and 3 points for "stopped".
- the score may be 0.
- For example, the estimation unit 124 may determine that a user with a score of 10 or more intends to talk to the robot 300 and that a user with a score of less than 3 has no intention of talking to the robot 300 at all. In this case, in the example illustrated in FIG. 15, the estimation unit 124 may determine that the users 20-1 and 20-2 are willing to talk to the robot 300 and that the user 20-2 is the most willing to talk. The estimation unit 124 may also determine that it cannot be said whether or not the user 20-n intends to talk (neither), and that the other users have no intention of talking.
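- The weighting described above amounts to a per-user sum of points followed by thresholding and selection of the top scorer. The sketch below mirrors the point values and thresholds given in the text; the user identifiers and function names are hypothetical.

```python
# Points per reaction pattern, following the example of FIGS. 14 and 15.
POINTS = {
    "approached_within_1m": 7,
    "approached_within_2m": 3,
    "looked_at_robot_face": 5,
    "stopped": 3,
}

TALK_THRESHOLD = 10      # score of 10 or more: the user intends to talk
NO_INTENT_THRESHOLD = 3  # score below 3: the user has no intention to talk


def score(reactions):
    return sum(POINTS.get(r, 0) for r in reactions)


def pick_listener(reactions_by_user):
    """Return the user most likely to talk, or None if nobody reaches the threshold."""
    scores = {user: score(rs) for user, rs in reactions_by_user.items()}
    candidates = {u: s for u, s in scores.items() if s >= TALK_THRESHOLD}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)


# Hypothetical example: user_a scores 7 + 5 = 12, user_b scores 3 + 3 = 6.
print(pick_listener({
    "user_a": ["approached_within_1m", "looked_at_robot_face"],
    "user_b": ["approached_within_2m", "stopped"],
}))  # -> user_a
```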
- When the estimation unit 124 determines that even one person may speak to the robot 300 (Yes in S408), it instructs the transition control unit 130 to shift to the utterance listening mode in which the user's utterance can be heard.
- The transition control unit 130 controls the robot 300 to shift to the listening mode in response to the instruction. If the estimation unit 124 determines that a plurality of users intend to talk, the transition control unit 130 may control the robot 300 so as to listen to the utterance of the person with the highest score (S409).
- In the example described above, the transition control unit 130 controls the robot 300 to listen to the utterance of the user 20-2.
- At that time, the transition control unit 130 may instruct the drive instruction unit 123 to drive, for example, the head drive circuit 153 and the leg drive circuit 155 so that the robot faces the person with the highest score while listening, or approaches that person.
- When the estimation unit 124 determines that no user is likely to speak to the robot 300, it ends the process without giving the transition control unit 130 an instruction to shift to the listening mode.
- On the other hand, when, as a result of the above estimation for the n users, there is no user determined to be likely to talk but it also cannot be said that no user will talk, that is, when the result is neither, the process returns to S401 of the presence detection unit 113. In this case, when a person is detected again, the action determination unit 122 determines again the action to be performed on the users, and the drive instruction unit 123 controls the robot 300 to execute the determined action. In this way, further reactions can be drawn from the users and the accuracy of the estimation can be improved.
- As described above, in the second embodiment, the robot 300 detects one or more persons, determines an action that induces a reaction as in the first embodiment, and determines, by analyzing the reactions to the action, whether any user is likely to talk to the robot. When it is determined that one or more users may talk to the robot, the robot 300 shifts to the utterance listening mode.
- In this way, the robot control apparatus 102 controls the robot 300 to shift to the listening mode so that it can respond to an utterance made at the timing when a person wants to speak, without requiring troublesome operations from the users. Therefore, according to the second embodiment, in addition to the effects of the first embodiment, the accuracy of the start of utterance listening can be improved with good operability even when a plurality of users are around the robot 300.
- Furthermore, in the second embodiment, by scoring each user's reaction to the action of the robot 300, the user most likely to speak is selected when a plurality of users may talk to the robot 300. Thus, even when a plurality of users may speak to the robot at the same time, an appropriate user can be selected and the robot can shift to the mode for listening to that user's utterance.
- In the second embodiment, the robot 300 includes two cameras 142 and 145, and the distance to each of a plurality of persons is detected by analyzing the images acquired by the cameras 142 and 145. However, the present invention is not limited to this; the robot 300 may detect the distance to each of a plurality of persons using only the distance sensor 144 or other means, in which case the robot 300 need not have two cameras.
- FIG. 16 is a functional block diagram for realizing functions of a robot control apparatus 400 according to a third embodiment of the present invention.
- the robot control device 400 includes an action execution unit 410, a determination unit 420, and an operation control unit 430.
- the action execution unit 410 determines an action to be performed on the person and controls the robot to execute the action.
- When a reaction from the person to the determined action is detected, the determination unit 420 determines, based on the reaction, the possibility that the person will talk to the robot.
- the operation control unit 430 controls the operation mode of the robot based on the determination result by the determination unit 420.
- the action execution unit 410 includes the action determination unit 122 and the drive instruction unit 123 of the first embodiment.
- Determination unit 420 similarly includes estimation unit 124.
- the operation control unit 430 similarly includes a transition control unit 130.
- According to the third embodiment, the robot is shifted to the listening mode only when it is determined that a person may speak to it, so the accuracy of the start of utterance listening can be improved without requiring an operation from the user.
- In each of the embodiments described above, the robot including the body part 210 and the head 220, arms 230, and legs 240 movably connected to the body part 210 has been described.
- the present invention is not limited to this.
- a robot in which the body part 210 and the head part 220 are integrated, or a robot that does not include at least one of the head part 220, the arm part 230, and the leg part 240 may be used.
- Moreover, the robot is not limited to an apparatus including a body part, a head, arms, and legs as described above; it may be an integrated apparatus such as a so-called cleaning robot, or it may include a computer, a game machine, a portable terminal, a smartphone, or the like that produces output to the user.
- In the robot control apparatuses shown in FIGS. 3 and 10, the case has been described in which the processor 10 shown in FIG. 2 realizes the functions of the blocks, described with reference to the flowcharts shown in FIGS. 4 and 11, by executing a computer program.
- some or all of the functions shown in the blocks shown in FIGS. 3 and 10 may be realized as hardware.
- The computer program supplied to the robot control apparatuses 101 and 102 that can realize the above-described functions may be stored in a computer-readable storage device such as a readable/writable memory (temporary recording medium) or a hard disk device.
- A currently common procedure can be adopted as the method of supplying the computer program to the hardware.
- the procedure includes, for example, a method of installing in a robot via various recording media such as a CD-ROM, and a method of downloading from the outside via a communication line such as the Internet.
- the present invention can be understood as being configured by a code representing the computer program or a storage medium storing the computer program.
- the present invention can be applied to, for example, a robot that performs a dialogue with a person, a robot that listens to a person's talk, a robot that receives a voice operation instruction, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Manipulator (AREA)
Abstract
Description
Patent Document 4 discloses a robot capable of autonomously controlling its behavior according to the surrounding environment, the situation of a person, and a reaction from the person.
FIG. 1 is a diagram showing an example of the external configuration of a robot 100 according to the first embodiment of the present invention and a person 20 who is a user of the robot. As shown in FIG. 1, the robot 100 includes, for example, a robot body including a body part 210 and a head 220, arms 230, and legs 240 each movably connected to the body part 210.
Next, a second embodiment based on the first embodiment described above will be described. In the following description, configurations similar to those of the first embodiment are given the same reference numbers, and duplicate descriptions are omitted.
FIG. 16 is a functional block diagram for realizing the functions of a robot control apparatus 400 according to the third embodiment of the present invention. As shown in FIG. 16, the robot control apparatus 400 includes an action execution unit 410, a determination unit 420, and an operation control unit 430.
11 RAM
12 ROM
13 I/O device
14 Storage
15 Reader/writer
16 Recording medium
17 Bus
20 Person (user)
20-1 to 20-n Persons (users)
100 Robot
110 Detection unit
111 Person detection unit
112 Reaction detection unit
113 Presence detection unit
114 Count unit
120 Transition determination unit
121 Control unit
122 Action determination unit
123 Drive instruction unit
124 Estimation unit
130 Transition control unit
140 Input device
141 Microphone
142 Camera
143 Human detection sensor
144 Distance sensor
145 Camera
150 Output device
151 Speaker
152 Facial expression display
153 Head drive circuit
154 Arm drive circuit
155 Leg drive circuit
160 Storage unit
161 Person detection pattern information
162 Reaction pattern information
163 Action information
164 Determination criterion information
165 Score information
210 Body part
220 Head
230 Arm
240 Leg
300 Robot
Claims (9)
- A robot control apparatus comprising: action execution means for, when a person is detected, determining an action to be performed on the person and controlling a robot to execute the action; determination means for, when a reaction from the person to the action determined by the action execution means is detected, determining, based on the reaction, the possibility that the person will talk to the robot; and operation control means for controlling an operation mode of the robot based on a result of the determination by the determination means.
- The robot control apparatus according to claim 1, wherein the operation control means controls the robot to operate in at least one of a first mode, in which the robot operates in response to acquired voice, and a second mode, in which the robot does not operate in response to acquired voice, and, while controlling the robot to operate in the second mode, controls the operation mode to shift to the first mode when the determination means determines that the person may talk to the robot.
- The robot control apparatus according to claim 1 or 2, wherein the determination means determines that the person may talk to the robot when the detected reaction matches at least one piece of one or more pieces of determination criterion information for determining whether the person intends to talk to the robot.
- The robot control apparatus according to claim 3, further comprising detection means for detecting a plurality of the persons and detecting a reaction of each person, wherein, when the detected reactions match at least one piece of the determination criterion information, the determination means determines the person most likely to talk based on the total of points assigned to the matching pieces of determination criterion information.
- The robot control apparatus according to claim 4, wherein the operation control means controls the operation mode of the robot so as to listen to the utterance of the person determined by the determination means to be most likely to talk.
- The robot control apparatus according to claim 3 or 4, wherein, when the determination means cannot determine that the detected reaction matches at least one piece of the determination criterion information, the determination means instructs the action execution means to determine an action to be performed on the person and to control the robot to execute the action.
- A robot comprising: a drive circuit that drives the robot to perform a predetermined operation; and the robot control apparatus according to any one of claims 1 to 6, which controls the drive circuit.
- A robot control method comprising: when a person is detected, determining an action to be performed on the person and controlling a robot to execute the action; when a reaction from the person to the determined action is detected, determining, based on the reaction, the possibility that the person will talk to the robot; and controlling an operation mode of the robot based on a result of the determination.
- A program recording medium recording a robot control program that causes a robot to execute: a process of, when a person is detected, determining an action to be performed on the person and controlling the robot to execute the action; and a process of, when a reaction from the person to the determined action is detected, determining, based on the reaction, the possibility that the person will talk to the robot, and controlling an operation mode of the robot based on a result of the determination.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017500516A JP6551507B2 (ja) | 2015-02-17 | 2016-02-15 | Robot control device, robot, robot control method, and program
US15/546,734 US20180009118A1 (en) | 2015-02-17 | 2016-02-15 | Robot control device, robot, robot control method, and program recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015028742 | 2015-02-17 | ||
JP2015-028742 | 2015-02-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016132729A1 true WO2016132729A1 (ja) | 2016-08-25 |
Family
ID=56692163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/000775 WO2016132729A1 (ja) | Robot control device, robot, robot control method, and program recording medium | 2015-02-17 | 2016-02-15 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20180009118A1 (ja) |
JP (1) | JP6551507B2 (ja) |
WO (1) | WO2016132729A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018086689A (ja) * | 2016-11-28 | 2018-06-07 | 株式会社G−グロボット | Communication robot |
CN108320021A (zh) * | 2018-01-23 | 2018-07-24 | 深圳狗尾草智能科技有限公司 | Robot action and expression determination method, and display synthesis method and device |
JP2020510865A (ja) * | 2017-02-27 | 2020-04-09 | ブイタッチ・カンパニー・リミテッド | Method, system, and non-transitory computer-readable recording medium for providing a speech recognition trigger |
JP2022509292A (ja) * | 2019-08-29 | 2022-01-20 | シャンハイ センスタイム インテリジェント テクノロジー カンパニー リミテッド | Communication method and apparatus, electronic device, and storage medium |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102558873B1 (ko) * | 2016-03-23 | 2023-07-25 | 한국전자통신연구원 | Interaction device and interaction method thereof |
KR102591413B1 (ko) * | 2016-11-16 | 2023-10-19 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
US11100384B2 (en) | 2017-02-14 | 2021-08-24 | Microsoft Technology Licensing, Llc | Intelligent device user interactions |
US10467509B2 (en) | 2017-02-14 | 2019-11-05 | Microsoft Technology Licensing, Llc | Computationally-efficient human-identifying smart assistant computer |
US11010601B2 (en) * | 2017-02-14 | 2021-05-18 | Microsoft Technology Licensing, Llc | Intelligent assistant device communicating non-verbal cues |
EP3599604A4 (en) * | 2017-03-24 | 2020-03-18 | Sony Corporation | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD |
KR102228866B1 (ko) * | 2018-10-18 | 2021-03-17 | 엘지전자 주식회사 | Robot and control method thereof |
US11796810B2 (en) * | 2019-07-23 | 2023-10-24 | Microsoft Technology Licensing, Llc | Indication of presence awareness |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001188555A (ja) * | 1999-12-28 | 2001-07-10 | Sony Corp | Information processing device and method, and recording medium |
JP2003305677A (ja) * | 2002-04-11 | 2003-10-28 | Sony Corp | Robot device, robot control method, recording medium, and program |
JP2008126329A (ja) * | 2006-11-17 | 2008-06-05 | Toyota Motor Corp | Speech recognition robot and control method for speech recognition robot |
JP2014502566A (ja) * | 2011-01-13 | 2014-02-03 | マイクロソフト コーポレーション | Multi-state model for robot and user interaction |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4296714B2 (ja) * | 2000-10-11 | 2009-07-15 | ソニー株式会社 | Robot control device, robot control method, recording medium, and program |
JP3843743B2 (ja) * | 2001-03-09 | 2006-11-08 | 独立行政法人科学技術振興機構 | Robot audio-visual system |
KR100953902B1 (ko) * | 2003-12-12 | 2010-04-22 | 닛본 덴끼 가부시끼가이샤 | Information processing system, information processing method, computer-readable medium recording an information processing program, terminal, and server |
JP4204541B2 (ja) * | 2004-12-24 | 2009-01-07 | 株式会社東芝 | Interactive robot, speech recognition method for interactive robot, and speech recognition program for interactive robot |
US8583282B2 (en) * | 2005-09-30 | 2013-11-12 | Irobot Corporation | Companion robot for personal interaction |
JP2007155986A (ja) * | 2005-12-02 | 2007-06-21 | Mitsubishi Heavy Ind Ltd | Speech recognition device and robot equipped with the speech recognition device |
JP2007329702A (ja) * | 2006-06-08 | 2007-12-20 | Toyota Motor Corp | Sound receiving device, speech recognition device, and movable body equipped with them |
KR20090065212A (ko) * | 2007-12-17 | 2009-06-22 | 한국전자통신연구원 | Robot chatting system and method |
JP5223605B2 (ja) * | 2008-11-06 | 2013-06-26 | 日本電気株式会社 | Robot system, communication activation method, and program |
KR101553521B1 (ko) * | 2008-12-11 | 2015-09-16 | 삼성전자 주식회사 | Intelligent robot and control method thereof |
JP2011000656A (ja) * | 2009-06-17 | 2011-01-06 | Advanced Telecommunication Research Institute International | Guide robot |
JP5751610B2 (ja) * | 2010-09-30 | 2015-07-22 | 学校法人早稲田大学 | Conversation robot |
JP2012213828A (ja) * | 2011-03-31 | 2012-11-08 | Fujitsu Ltd | Robot control device and program |
JP5927797B2 (ja) * | 2011-07-26 | 2016-06-01 | 富士通株式会社 | Robot control device, robot system, behavior control method for robot device, and program |
AU2012368731A1 (en) * | 2012-02-03 | 2014-08-21 | Nec Corporation | Communication draw-in system, communication draw-in method, and communication draw-in program |
-
2016
- 2016-02-15 US US15/546,734 patent/US20180009118A1/en not_active Abandoned
- 2016-02-15 WO PCT/JP2016/000775 patent/WO2016132729A1/ja active Application Filing
- 2016-02-15 JP JP2017500516A patent/JP6551507B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
JPWO2016132729A1 (ja) | 2017-11-30 |
JP6551507B2 (ja) | 2019-07-31 |
US20180009118A1 (en) | 2018-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6551507B2 (ja) | ロボット制御装置、ロボット、ロボット制御方法およびプログラム | |
US10930303B2 (en) | System and method for enhancing speech activity detection using facial feature detection | |
US9390726B1 (en) | Supplementing speech commands with gestures | |
JP6143975B1 (ja) | 画像の取り込みを支援するためにハプティックフィードバックを提供するためのシステムおよび方法 | |
JP7038210B2 (ja) | 対話セッション管理用のシステム及び方法 | |
JP2009166184A (ja) | ガイドロボット | |
KR20160009344A (ko) | 귓속말 인식 방법 및 장치 | |
KR20200050235A (ko) | 전자 장치 및 그의 지능형 인터랙션 방법 | |
JP5975947B2 (ja) | ロボットを制御するためのプログラム、及びロボットシステム | |
US11165728B2 (en) | Electronic device and method for delivering message by to recipient based on emotion of sender | |
KR20150112337A (ko) | 디스플레이 장치 및 그 사용자 인터랙션 방법 | |
US12220805B2 (en) | Information processing device and information processing method | |
JP7176244B2 (ja) | ロボット、ロボットの制御方法及びプログラム | |
US20210216589A1 (en) | Information processing apparatus, information processing method, program, and dialog system | |
JP2001300148A (ja) | アクション応答システムおよびそのプログラム記録媒体 | |
US20200090663A1 (en) | Information processing apparatus and electronic device | |
JP2020155944A (ja) | 発話者検出システム、発話者検出方法及びプログラム | |
US10596708B2 (en) | Interaction device and interaction method thereof | |
KR102613040B1 (ko) | 영상 통화 방법 및 이를 구현하는 로봇 | |
JP2022060288A (ja) | 制御装置、ロボット、制御方法およびプログラム | |
US20210034079A1 (en) | Personal space creation system, personal space creation method, personal space creation program | |
WO2018056169A1 (ja) | 対話装置、処理方法、プログラム | |
WO2018047932A1 (ja) | 対話装置、ロボット、処理方法、プログラム | |
JP5709955B2 (ja) | ロボットおよび音声認識装置ならびにプログラム | |
JP2019072787A (ja) | 制御装置、ロボット、制御方法、および制御プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16752118 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017500516 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15546734 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16752118 Country of ref document: EP Kind code of ref document: A1 |