WO2017168936A1 - Information processing apparatus, information processing method, and program - Google Patents
Information processing apparatus, information processing method, and program
- Publication number: WO2017168936A1 (application PCT/JP2017/000726)
- Authority: WIPO (PCT)
- Prior art keywords: user, mode, utterance, feedback, information
Classifications
- G06F3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
- G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
- G10L13/00: Speech synthesis; Text to speech systems
- G10L15/07: Adaptation to the speaker
- G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/32: Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
- H04M1/00: Substation equipment, e.g. for use by subscribers
- H04M3/42: Systems providing special services or facilities to subscribers
- G10L2015/223: Execution procedure of a spoken command
- G10L2015/225: Feedback of the input speech
Definitions
- the present disclosure relates to an information processing apparatus, an information processing method, and a program.
- the present disclosure proposes an information processing apparatus, an information processing method, and a program capable of realizing a more natural dialogue between the user and the system.
- according to the present disclosure, there is provided an information processing apparatus including a control unit that selects a feedback mode for the user's utterance mode from a plurality of modes according to information related to recognition of the user's utterance, the plurality of modes including a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- FIG. 3 is a functional block diagram of an input/output terminal according to the present disclosure. The remaining drawings include a functional block diagram of an information processing apparatus according to the present disclosure; a diagram showing an example of interaction between the input/output terminal and the user according to an embodiment of the present disclosure; a diagram showing the flow of control of implicit feedback according to the embodiment; a diagram for explaining the volume levels of the input/output terminal according to the embodiment; and a diagram showing the flow of feedback control based on the number of recognition attempts.
- 1. Control of feedback according to the present disclosure; 1.1. Feedback in speech recognition technology; 1.2. System configuration example according to the present disclosure; 1.3. Input/output terminal 10 according to the present disclosure; 1.4. Information processing apparatus 30 according to the present disclosure; 2. Embodiment; 2.1. Feedback modes; 2.2. Example of implicit feedback; 2.3. Switching of modes related to feedback; 2.4. Explicit feedback with reason for improvement; 2.5. Additional control of feedback by visual information; 2.6. Example of feedback by visual information; 3. Hardware configuration examples of the input/output terminal 10 and the information processing apparatus 30; 4. Summary
- the apparatus may have a function of giving feedback to the user according to the recognition result of the user's utterance. For example, when the device cannot recognize the user's utterance, feedback is performed by displaying a text message such as "voice is low" on the display unit of the device. By confirming the message, the user can perceive that the uttered content was not recognized by the device and can take the next action.
- the device using the speech recognition technology can request the user to improve the utterance by giving feedback regarding the utterance. At this time, the clearer the content of the feedback from the device, the higher the possibility that the user's utterance can be improved.
- feedback from a device greatly affects the user's impression of the device.
- the user may get the impression that the device is "cold" or "stiff". This is because the user compares actual interaction with humans against interaction with the device. Since dialogue between humans changes according to the situation, the user feels unnaturalness toward a device that always gives uniform feedback. Moreover, the above impression may also lead to an evaluation that the technical level of the apparatus is low. Furthermore, when feedback from the device is explicit, some users may feel that they are being instructed by the device and become averse to it.
- the information processing apparatus, the information processing method, and the program according to the present disclosure have been conceived in view of the above points, and one of their features is that a feedback mode for the user's utterance mode is selected from a plurality of modes according to information related to recognition of the user's utterance.
- further, the plurality of modes include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- in the following, the features of the information processing apparatus, the information processing method, and the program according to the present disclosure will be described together with the effects produced by those features.
- the information processing system according to the present disclosure includes an input / output terminal 10 and an information processing apparatus 30. Further, the input / output terminal 10 and the information processing apparatus 30 are connected via the network 20 so that they can communicate with each other.
- the input / output terminal 10 may be a terminal that collects a user's utterance and presents a processing result of an application based on the utterance to the user.
- the input / output terminal 10 may have a function of performing feedback on the user's utterance.
- FIG. 1 shows an example in which the input/output terminal 10 gives feedback by the voice output "May I help you?" in response to the utterance "Hello." of the user P1.
- the information processing apparatus 30 has a function of controlling feedback executed by the input / output terminal 10 on the utterance based on the utterance of the user P1 collected by the input / output terminal 10.
- the information processing apparatus 30 acquires the utterance of the user P1 collected by the input/output terminal 10 via the network 20, and may select a feedback mode for the utterance mode from a plurality of modes according to information based on the utterance.
- the plurality of modes may include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- the network 20 may also include various LANs (Local Area Network) including Ethernet (registered trademark), WAN (Wide Area Network), and the like.
- the network 20 may be a private line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
- the information processing apparatus 30 may have a function of collecting a user's utterance and executing feedback on the utterance.
- the information processing apparatus 30 can serve as the input/output terminal 10 itself while also controlling feedback.
- the function of the application processed based on the result of voice recognition may be executed by the input / output terminal 10 or may be executed by the information processing apparatus 30. Processing of an application based on the user's utterance can be changed as appropriate according to the specifications of the application, the input / output terminal 10 and the information processing apparatus 30.
- Input / output terminal 10 according to the present disclosure has a function of collecting a user's utterance.
- the input / output terminal 10 has a function of presenting feedback information controlled by the information processing apparatus 30 to the user in accordance with information related to the user's speech recognition.
- the input / output terminal 10 according to the present disclosure can be realized as various devices having the above functions.
- the input / output terminal 10 according to the present disclosure may be, for example, a dedicated agent that executes various processes based on a user's utterance.
- the agent may include an interactive robot, a vending machine, a voice guidance device, and the like.
- the input / output terminal 10 according to the present disclosure may be an information processing terminal such as a PC (Personal Computer), a tablet, or a smartphone.
- the input / output terminal 10 may be a device used by being incorporated in a building or a vehicle.
- the input / output terminal 10 according to the present disclosure can be widely applied to various devices to which a voice recognition function is applied.
- the input / output terminal 10 includes an audio input unit 110, a sensor unit 120, an audio output unit 130, a display unit 140, a terminal control unit 150, and a server communication unit 160.
- the voice input unit 110 has a function of collecting a user's utterance and environmental sound.
- the voice input unit 110 may include a microphone that converts a user's speech and environmental sound into an electrical signal.
- the voice input unit 110 may include a microphone array having directivity for collecting sound in a specific direction. With the microphone array as described above, the voice input unit 110 can also collect the user's utterance separately from the environmental sound.
- the voice input unit 110 may include a plurality of microphones and microphone arrays. With this configuration, the position, orientation, movement, etc. of the sound source can be detected with higher accuracy.
- the sensor unit 120 has a function of detecting various information related to an object including a user.
- the sensor unit 120 may include a plurality of sensors for detecting the above information.
- the sensor unit 120 may include an imaging device for detecting a user's operation, an infrared sensor, a temperature sensor, and the like.
- the sensor unit 120 may have a function of performing image recognition based on a captured image. For example, the sensor unit 120 can identify the user who is speaking by detecting the movement of the user's mouth.
- the audio output unit 130 has a function of converting an electrical signal into sound and outputting the sound. Specifically, the voice output unit 130 has a function of performing feedback to the user by voice output based on feedback information controlled by the information processing apparatus 30.
- the audio output unit 130 may include a speaker having the above function. Further, the speaker included in the audio output unit 130 may have a function of realizing audio output having directivity in a specific direction, distance, or the like. By including the speaker having the function, the audio output unit 130 can perform audio output according to the position of the user detected by the sensor unit 120, for example.
- the audio output unit 130 may include a plurality of speakers. When the audio output unit 130 includes a plurality of speakers, it is possible to execute feedback according to the position of the user by controlling the speakers that output feedback. Details of this function will be described later.
- the voice output unit 130 may have a function of performing voice synthesis based on feedback information controlled by the information processing apparatus 30.
- the voice output unit 130 may perform text reading (TTS: Text To Speech) based on the text information acquired from the information processing apparatus 30.
- the display unit 140 has a function of performing feedback to the user using visual information based on feedback information controlled by the information processing apparatus 30.
- the function may be realized by, for example, a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, or an OLED (Organic Light Emitting Diode) device.
- the display unit 140 may have a function as an operation unit that receives a user operation.
- the function as the operation unit can be realized by a touch panel, for example.
- the terminal control unit 150 has a function of controlling each configuration of the input / output terminal 10 described above.
- the terminal control unit 150 may have a function of acquiring various types of information detected by the voice input unit 110 and the sensor unit 120 and transmitting the information to the information processing apparatus 30 via the server communication unit 160 described later. Further, the terminal control unit 150 may acquire information related to feedback from the information processing apparatus 30 via the server communication unit 160 and may control the audio output unit 130 and the display unit 140 based on the information.
- the terminal control unit 150 can control the processing of the application.
- the input / output terminal 10 has been described above.
- the input / output terminal 10 has both a function of accepting input such as a user's utterance and a function of presenting feedback to the user according to information related to the user's utterance recognition.
- the system configuration according to the present disclosure is not limited to such an example.
- the system according to the present disclosure may separately include an input terminal having an input function and an output terminal that presents the feedback.
- the input terminal may include the functions of the voice input unit 110, the sensor unit 120, and the terminal control unit 150 described above.
- the output terminal may include the functions of the audio output unit 130, the display unit 140, and the terminal control unit 150 described above, for example.
- the system configuration according to the present disclosure can be flexibly modified.
- the server communication unit 160 has a function of performing information communication with the information processing apparatus 30 via the network 20. Specifically, the server communication unit 160 transmits information acquired by the voice input unit 110 and the sensor unit 120 to the information processing apparatus 30 based on the control of the terminal control unit 150. In addition, the server communication unit 160 delivers feedback information acquired from the information processing apparatus 30 to the terminal control unit 150.
- the information processing apparatus 30 according to the present disclosure has a function of controlling the feedback executed by the input/output terminal 10 with respect to the user's utterance mode, according to information related to recognition of the user's utterance collected by the input/output terminal 10.
- the information processing apparatus 30 can select a feedback mode for the user's utterance mode from a plurality of modes according to information related to the user's utterance recognition.
- the plurality of modes may include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- the utterance mode may include utterance volume, utterance speed, pitch of utterance, clarity of pronunciation, utterance position, utterance direction, utterance content, and environmental sound.
- the information processing apparatus 30 may be a server having the above functions.
- the information processing apparatus 30 may be various agents, PCs, tablets, or smartphones that detect user utterances and execute feedback.
- the information processing apparatus 30 includes a terminal communication unit 310, a voice analysis unit 320, a voice recognition unit 330, a state storage unit 340, a position detection unit 350, and an output control unit 360.
- the terminal communication unit 310 has a function of performing information communication with the input / output terminal 10 via the network 20. Specifically, the terminal communication unit 310 delivers various types of information acquired from the input / output terminal 10 to the voice analysis unit 320, the voice recognition unit 330, and the position detection unit 350. The terminal communication unit 310 has a function of acquiring feedback information controlled by the output control unit 360 and transmitting it to the input / output terminal 10. When the information processing apparatus 30 controls a plurality of input / output terminals 10, the terminal communication unit 310 may perform information communication with the plurality of input / output terminals 10 via the network 20.
- the voice analysis unit 320 has a function of acquiring information collected by the input / output terminal 10 and analyzing the information.
- the voice analysis unit 320 can analyze information related to the user's utterance mode, including, for example, the user's utterance volume, the utterance speed, the pitch of the utterance, or the clarity of pronunciation.
- the user's utterance mode may include an environmental sound collected along with the user's utterance.
- the voice analysis unit 320 may have a function of separating the user's utterance and the environmental sound from the information collected by the input / output terminal 10.
- the separation of the user's utterance and the environmental sound may be performed based on information on the frequency band of the human voice, or may be realized by a technique such as VAD (Voice Activity Detection). Further, when the state storage unit 340 described later stores personal information related to a predetermined user's voice, the voice analysis unit 320 can also separate the user's utterance and the environmental sound using that information.
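As an illustration of the separation described above, the following Python sketch gates audio frames on RMS energy against an estimated noise floor. It merely stands in for whatever combination of voice-band filtering, VAD, and stored voice profiles the voice analysis unit 320 would actually use; the frame length, percentile, and threshold ratio are assumptions for the example.

```python
import numpy as np

def separate_utterance(signal: np.ndarray, sample_rate: int,
                       frame_ms: int = 30, energy_ratio: float = 2.0):
    """Split a mono signal into 'speech' and 'environment' frames.

    Illustrative only: a real implementation would combine band-pass
    filtering around the human voice range with a trained VAD rather
    than a fixed energy threshold.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Per-frame RMS energy; frames well above the estimated noise floor
    # are treated as speech, the rest as environmental sound.
    energy = np.sqrt((frames ** 2).mean(axis=1))
    noise_floor = np.percentile(energy, 20)          # rough noise estimate
    is_speech = energy > energy_ratio * noise_floor  # assumed threshold

    return frames[is_speech].ravel(), frames[~is_speech].ravel()
```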
- the voice analysis unit 320 may have a function of analyzing a user's utterance collected by the input / output terminal 10 and specifying the user. The user may be specified by comparing the analysis result of the user's utterance with the user's voice print information stored in the state storage unit 340 described later.
- the voice analysis unit 320 may have a function of analyzing a user's utterance collected by the input / output terminal 10 and estimating the user's emotion.
- the estimation of the user's emotion may be performed, for example, by analyzing prosody, amplitude, stress, and the like.
- the voice recognition unit 330 has a function of recognizing the user's speech based on the voice collected by the input / output terminal 10 or the user's voice separated by the voice analysis unit 320. Specifically, the speech recognition unit 330 may have a function of converting acquired speech information into phonemes and converting it into text. In addition, since various methods may be used for the voice recognition by the voice recognition unit 330, detailed description is omitted.
- the state storage unit 340 has a function of storing processing results obtained by the voice analysis unit 320 and the voice recognition unit 330.
- the state storage unit 340 can store, for example, information related to a user's utterance mode analyzed by the voice analysis unit 320 and a result of voice recognition by the voice recognition unit 330.
- the state storage unit 340 may store user attribute information including features related to the user's voice, utterance mode tendency, and the like.
- the position detection unit 350 has a function of estimating the user's utterance position and utterance direction based on information acquired by the input / output terminal 10.
- the position detection unit 350 can estimate the user's utterance position and utterance direction based on information collected from various sensors, including voice information collected by the voice input unit 110 of the input/output terminal 10 and image information acquired by the sensor unit 120.
- the position detection unit 350 may estimate the positions of persons and objects other than the user who is speaking based on the above information.
- the output control unit 360 has a function of acquiring various types of information from the voice analysis unit 320, the voice recognition unit 330, the state storage unit 340, and the position detection unit 350, and controlling feedback on the user's utterance. Based on the above information, the output control unit 360 selects a feedback mode for the user's speech mode from a plurality of modes. The plurality of modes include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed. Further, the output control unit 360 may generate feedback information based on audio or visual information performed by the input / output terminal 10 and transmit the information to the input / output terminal 10 via the terminal communication unit 310. The output control unit 360 may generate the feedback information by searching for feedback information based on conditions from an output DB 370 described later. Details of feedback control by the output control unit 360 will be described later.
- the output DB 370 may be a database that accumulates feedback information based on audio or visual information executed by the input / output terminal 10.
- the output DB 370 may store, for example, voice information related to feedback, or may store text information for voice output by the voice synthesis function of the input / output terminal 10. Further, the output DB 370 may store image information and text information related to feedback by visual information performed by the input / output terminal 10.
- the information processing apparatus 30 can select a feedback mode for the user's utterance mode from a plurality of modes according to information related to the user's utterance recognition.
- implicit feedback is feedback that includes an indirect method of improving the user's utterance mode. That is, in implicit feedback, the improvement method is not presented to the user directly; instead, feedback is given by changing the output mode of the input/output terminal 10.
- the implicit feedback according to the present embodiment may be defined as feedback in an utterance mode with higher recognition accuracy than the user's utterance mode.
- the above recognition accuracy may be the recognition accuracy of the user's utterance by the input / output terminal 10.
- in the first mode, in which implicit feedback is performed, feedback is given in the utterance mode that is expected of the user. For example, when the user's utterance volume is too low, feedback may be performed in the first mode by outputting speech at a volume larger than the user's utterance volume.
- likewise, when the user speaks too fast, feedback by voice output at a speed slower than the user's utterance speed may be performed in the first mode.
- feedback in the opposite direction can also be performed. That is, when the user's utterance volume is too high, feedback may be performed in the first mode by outputting speech at a volume smaller than the user's utterance volume.
- similarly, when the user's speech rate is too slow, feedback by voice output at a speed faster than the user's speech rate may be performed in the first mode.
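The mirroring just described reduces to a simple rule: answer a too-quiet utterance more loudly, a too-loud one more quietly, and adjust speed in the same opposing manner. A minimal sketch of such first-mode parameter selection follows; the 60 dB and 2.5 words-per-second targets and the step sizes are illustrative assumptions, not values from the disclosure.

```python
def implicit_feedback_params(user_volume_db: float, user_rate_wps: float,
                             target_volume_db: float = 60.0,
                             target_rate_wps: float = 2.5) -> dict:
    """Choose output volume and speech rate for first-mode (implicit)
    feedback by demonstrating the utterance mode expected of the user."""
    params = {"volume_db": target_volume_db, "rate_wps": target_rate_wps}
    if user_volume_db < target_volume_db:
        # Too quiet: reply at or above the expected volume, louder than the user.
        params["volume_db"] = max(target_volume_db, user_volume_db + 10.0)
    elif user_volume_db > target_volume_db:
        # Too loud: reply more quietly than the user.
        params["volume_db"] = min(target_volume_db, user_volume_db - 10.0)
    if user_rate_wps > target_rate_wps:
        # Too fast: reply more slowly than the user.
        params["rate_wps"] = min(target_rate_wps, user_rate_wps * 0.8)
    elif user_rate_wps < target_rate_wps:
        # Too slow: reply faster than the user.
        params["rate_wps"] = max(target_rate_wps, user_rate_wps * 1.2)
    return params
```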
- FIG. 4 is a diagram showing an example of dialogue between the user and the input / output terminal 10 using implicit feedback.
- the horizontal axis indicates the passage of time
- the vertical axis indicates the utterance volume.
- the unit of time elapsed on the horizontal axis may be, for example, milliseconds (msec).
- the unit of the utterance volume on the vertical axis may be, for example, decibel (dB).
- the user first performs the utterance Uv1.
- the utterance content of the utterance Uv1 may be, for example, an inquiry such as "Do you have a plan for tomorrow?".
- the utterance volume of the utterance Uv1 in this example may be 45 dB, for example.
- since the utterance volume of the utterance Uv1 is insufficient for recognition, the information processing apparatus 30 causes the input/output terminal 10 to output the system output Sv1 at a recognizable volume as implicit feedback.
- the utterance content of the system output Sv1 may be, for example, asking the user to repeat the inquiry.
- the system output Sv1 at this time may be 60 dB, for example.
- the user who has received the system output Sv1 as an implicit feedback makes another inquiry using the utterance Uv2.
- the utterance content of the utterance Uv2 may be, for example, an inquiry “Do you have a plan tomorrow?”.
- the utterance volume of the utterance Uv2 in this example is larger than that of the utterance Uv1. That is, in the example shown in FIG. 4, prompted by the system output Sv1, which is implicit feedback, the user utters the utterance Uv2 at a louder volume than the utterance Uv1, and the utterance Uv2 exceeds the recognizable volume.
- the utterance Uv2 is recognized by the information processing apparatus 30, and the input / output terminal 10 outputs the system output Sv2 as the execution result of the application based on the recognized utterance Uv2.
- the system output Sv2 may be, for example, a search result related to the user's schedule, such as "There is a plan to go to the hospital at noon tomorrow." Note that the system output Sv2 at this time may be performed at a volume equivalent to, for example, that of the system output Sv1.
- in the example described above, the user's utterance is performed first. On the other hand, in a case where the input/output terminal 10 performs the voice output first, the system output may be started at a volume that the input/output terminal 10 itself can recognize.
- the explicit feedback may be feedback indicating a direct improvement method for the user's utterance mode. That is, unlike implicit feedback, which changes the output mode of the input/output terminal 10, explicit feedback directly shows the user a method for improving the recognition accuracy of the input/output terminal 10. For this reason, in the second mode, in which explicit feedback is performed, an utterance improvement method that the user can take in order for the utterance to be recognized is presented specifically. For example, when the user's utterance volume is low, the voice output "Please speak louder" may be performed in the second mode.
- similarly, when the user's utterance speed is too fast, a voice output such as "Please speak more slowly" may be performed.
- in this way, in the second mode, feedback urging the user to improve the utterance mode is performed by clearly indicating improvement measures that the user can take.
- the information processing apparatus 30 selects the first mode or the second mode described above according to information related to user utterance recognition including the user utterance mode.
- explicit feedback may impair the user's impression of the device, and if used frequently, the interaction may become unnatural.
- on the other hand, while implicit feedback can realize a more natural dialogue, closer to that between humans, than explicit feedback, its effect in improving the utterance mode is expected to be lower.
- for this reason, the information processing apparatus 30 according to the present embodiment uses the first mode, in which implicit feedback is performed, as the basis, and performs control to switch to the second mode, in which explicit feedback is performed, depending on the situation.
- by performing this control, the information processing apparatus 30 can improve the recognition accuracy of the user's utterance while realizing a more natural dialogue with the user.
- FIG. 5 is a flowchart showing a flow of implicit feedback control by the output control unit 360 according to the present embodiment.
- the output control unit 360 determines whether or not the user's utterance was recognizable (S1101). At this time, the output control unit 360 may make a determination by obtaining a recognition result from the speech recognition unit 330. If the output control unit 360 determines in step S1101 that the user's utterance has been recognized (S1101: Yes), the control related to the implicit feedback is terminated, and the process proceeds to the processing of the application based on the recognized voice.
- if the output control unit 360 determines in step S1101 that the user's utterance was not recognized (S1101: No), the output control unit 360 compares the user's utterance volume with the volume of the environmental sound (S1102). At this time, the output control unit 360 may make the determination based on the analysis result by the voice analysis unit 320. If the output control unit 360 determines in step S1102 that the volume of the environmental sound exceeds the utterance volume of the user (S1102: Yes), the output control unit 360 generates feedback information for suggesting environmental adjustment (S1106).
- the feedback information that suggests the environmental adjustment described above may be, for example, a command that outputs a voice saying that “the surrounding sound is loud”.
- the implicit feedback according to the present embodiment may include, in addition to feedback given in an utterance mode with high recognition accuracy that is expected of the user, feedback that makes the user aware of how the utterance can be improved.
- if the output control unit 360 determines in step S1102 that the user's utterance volume is larger than the environmental sound volume (S1102: No), the output control unit 360 subsequently determines whether or not the user's utterance volume is sufficient for recognition (S1103). That is, the output control unit 360 determines whether or not the cause of the failure to recognize the user's utterance is the user's utterance volume.
- if the output control unit 360 determines that the user's utterance volume is not sufficient (S1103: No), the output control unit 360 generates feedback information for outputting speech at a volume larger than the user's utterance volume (S1107).
- the feedback information may be, for example, a command for asking the user to repeat, at a volume larger than the user's utterance volume.
- if the output control unit 360 determines in step S1103 that the user's utterance volume is sufficient (S1103: Yes), the output control unit 360 subsequently determines whether or not the user's utterance could be converted into phonemes (S1104). In other words, the output control unit 360 determines whether or not the cause of the failure to recognize the user's utterance is the user's utterance speed or the clarity of pronunciation.
- if the output control unit 360 determines that the user's utterance was not converted into phonemes (S1104: No), the output control unit 360 generates feedback information for performing voice output at a speed slower than the user's utterance speed (S1108).
- the feedback information may be, for example, a command for asking the user to repeat, at a speed slower than the user's speaking speed. Further, the feedback information may be a command for outputting speech with clearly articulated pronunciation or a changed pitch or voice quality.
- if the output control unit 360 determines in step S1104 that the user's utterance was converted into phonemes (S1104: Yes), the output control unit 360 subsequently determines whether or not the user's utterance could be converted into text (S1105). That is, the output control unit 360 determines whether or not the phonemized information was recognized as words based on the user's utterance.
- if the output control unit 360 determines that the user's utterance was not recognized even as words (S1105: No), the output control unit 360 generates feedback information for performing output using the phonemized sound (S1109).
- the feedback information may be, for example, a command for asking back, such as "Is it ...?", using the phonemized sound information.
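The cascade of determinations S1101 to S1109 in FIG. 5 can be condensed into a few lines. The sketch below is a schematic rendering of that flow; the boolean inputs and the 50 dB recognition threshold are stand-ins for results that the voice analysis unit 320 and voice recognition unit 330 would supply.

```python
from enum import Enum, auto

class Feedback(Enum):
    NONE = auto()                # recognized; hand over to the application
    ADJUST_ENVIRONMENT = auto()  # S1106: "the surrounding sound is loud"
    LOUDER_OUTPUT = auto()       # S1107: reply above the user's volume
    SLOWER_OUTPUT = auto()       # S1108: reply below the user's speed
    ECHO_PHONEMES = auto()       # S1109: ask back using the phonemized sound

MIN_RECOGNITION_DB = 50.0  # assumed threshold; not given in the disclosure

def implicit_feedback_step(recognized: bool, utterance_db: float,
                           environment_db: float, phonemized: bool,
                           texted: bool) -> Feedback:
    """One pass through the FIG. 5 cascade (S1101-S1109)."""
    if recognized:                         # S1101: Yes -> application
        return Feedback.NONE
    if environment_db > utterance_db:      # S1102: Yes -> S1106
        return Feedback.ADJUST_ENVIRONMENT
    if utterance_db < MIN_RECOGNITION_DB:  # S1103: No  -> S1107
        return Feedback.LOUDER_OUTPUT
    if not phonemized:                     # S1104: No  -> S1108
        return Feedback.SLOWER_OUTPUT
    if not texted:                         # S1105: No  -> S1109
        return Feedback.ECHO_PHONEMES
    return Feedback.NONE
```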
- as described above, the information processing apparatus 30 can cause the input/output terminal 10 to execute various kinds of implicit feedback in accordance with information related to recognition of the user's utterance. Through this control, a more natural dialogue closer to that between humans can be realized.
- the information processing apparatus 30 can select a feedback mode for the user's utterance mode in accordance with information related to the user's utterance recognition.
- the information related to user utterance recognition may include, for example, user information, content information, environment information, and device information.
- the above user information is information about the user, and may be, for example, the user's utterance mode, utterance content, attribute information, emotion information, and the like.
- the output control unit 360 can select a feedback mode for the user's utterance mode according to the user information.
- the content information may be information related to an application that recognizes and processes a user's utterance.
- the content information may include, for example, information regarding the type and specification of the application.
- the output control unit 360 can select a feedback mode for the user's utterance mode according to the content information. For example, the output control unit 360 can select the first mode in an application whose main purpose is conversation with a user, and can select the second mode in an application whose main purpose is information retrieval.
- the above environment information may be information related to the environment around the user and the input / output terminal 10.
- the environmental information may include, for example, person detection information other than the user, environmental sound information, and the like.
- the output control unit 360 according to the present embodiment can select a feedback mode for the user's speech mode according to the environment information.
- the device information may be information related to the type and specification of the input / output terminal 10.
- the output control unit 360 can select a feedback mode for the user's utterance mode according to the device information. For example, the output control unit 360 can select the first mode when the input/output terminal 10 is an agent whose main purpose is conversation with the user, and can select the second mode when the input/output terminal 10 is a device used for office or mechanical purposes.
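Taken together, the user, content, environment, and device information described above amount to a small decision procedure. The sketch below shows one plausible prioritization of these signals; the Context fields and the rule ordering are illustrative assumptions drawn from the examples in this section and the sections that follow.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Information related to utterance recognition (illustrative fields)."""
    app_is_conversational: bool    # content information
    device_is_agent: bool          # device information
    user_is_excited: bool          # user information (emotion estimate)
    third_party_present: bool      # environment information
    contains_privacy_info: bool    # user information (utterance content)

def select_mode(ctx: Context) -> str:
    """Return 'first' (implicit feedback) or 'second' (explicit feedback)."""
    if ctx.contains_privacy_info:
        return "second"  # finish the exchange quickly, avoid leaks
    if ctx.user_is_excited or ctx.third_party_present:
        return "first"   # avoid hurting the user's feelings
    if ctx.app_is_conversational or ctx.device_is_agent:
        return "first"   # conversation is the main purpose
    return "second"      # e.g. search apps, office/mechanical devices
```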
- the information processing apparatus 30 is based on the first mode in which implicit feedback is performed, and switches to the second mode in which explicit feedback is performed depending on the situation. Control can be performed.
- specific conditions under which the output control unit 360 switches the mode related to feedback will be described below.
- the output control unit 360 can select the second mode in which explicit feedback is performed based on the fact that the user's utterance is not recognized within a predetermined number of times.
- the predetermined number of times may be, for example, the number of times that input related to the user's utterance was detected but could not be recognized (the number of recognition failures). Further, the predetermined number of times may be the number of times the input waiting state related to recognition timed out (the number of timeouts).
- the predetermined number of times may be the number of utterances of the user (the number of utterances). Furthermore, the predetermined number of times may be the total number of the examples shown above.
- the above control will be described in detail with reference to FIGS. 6 and 7. In the following description, a case where the user's speech volume is determined will be described as an example.
- FIG. 6 is a diagram for explaining the sound output sound volume level by the sound output unit 130 of the input / output terminal 10.
- the volume level of the audio output is defined in three levels of levels 1 to 3, and the volume may increase as the level value increases.
- the level 1 may be an initial setting value in audio output by the audio output unit 130.
- the volume range at level 1 may be, for example, 0 dB to 50 dB.
- level 2 may be defined as a volume that is one step higher than level 1.
- the volume range at level 2 may be, for example, 51 dB to 70 dB.
- Level 3 is a volume one level larger than level 2 and may be defined as the maximum volume in the implicit feedback.
- the volume range at level 3 may be defined as, for example, 71 dB to 100 dB, or 71 dB or more.
- FIG. 7 is a flowchart showing a control flow of the output control unit 360 based on the number of recognition trials.
- the output control unit 360 determines whether or not the user's utterance volume acquired from the input / output terminal 10 is insufficient as a volume for recognition (S1201).
- if the user's utterance volume is sufficient for recognition (S1201: No), the output control unit 360 ends the determination processing related to the user's utterance volume.
- on the other hand, if the user's utterance volume is insufficient (S1201: Yes), the output control unit 360 generates feedback information for executing implicit feedback at the level 2 volume shown in FIG. 6 (S1202). That is, the output control unit 360 causes the input/output terminal 10 to execute implicit feedback at a volume level one step higher than the user's utterance volume.
- following the implicit feedback, the output control unit 360 again determines whether or not the acquired utterance volume of the user is insufficient for recognition (S1203). If the user's utterance volume is sufficient for recognition (S1203: No), the output control unit 360 ends the determination processing related to the user's utterance volume.
- if the user's utterance volume is insufficient again (S1203: Yes), the output control unit 360 generates feedback information for executing implicit feedback at the level 3 volume shown in FIG. 6 (S1204). That is, the output control unit 360 causes the input/output terminal 10 to execute implicit feedback at the maximum volume that has been set.
- following this implicit feedback, the output control unit 360 once more determines whether or not the acquired utterance volume of the user is insufficient for recognition (S1205). If the user's utterance volume is sufficient for recognition (S1205: No), the output control unit 360 ends the determination processing related to the user's utterance volume.
- if the user's utterance volume is still insufficient (S1205: Yes), the output control unit 360 generates feedback information for executing explicit feedback (S1206). That is, the output control unit 360 determines that implicit feedback will not lead to improvement of the user's utterance mode, and causes the input/output terminal 10 to execute explicit feedback.
- the input / output terminal 10 may perform voice output such as “Please speak in a louder voice” to the user, for example, according to control by the output control unit 360.
- the output control unit 360 can select the second mode in which explicit feedback is performed based on the number of recognition attempts.
- through the above control, the output control unit 360 can improve recognition accuracy when the user's utterance is not recognized within a predetermined number of attempts, while keeping natural conversation based on implicit feedback as the baseline.
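The escalation of FIG. 7 maps directly onto the volume levels of FIG. 6. A minimal sketch follows, assuming the dB ranges given above and the three-attempt sequence of steps S1201 to S1206.

```python
# FIG. 6 volume levels: upper bound of each level in dB, per the examples
# in the text (level 3 is the maximum volume for implicit feedback).
VOLUME_LEVEL_DB = {1: 50, 2: 70, 3: 100}

def feedback_for_attempt(attempt: int, volume_sufficient: bool) -> str:
    """FIG. 7 control: escalate implicit feedback per failed recognition
    attempt (1-based), then fall back to explicit feedback."""
    if volume_sufficient:
        return "done"  # end the utterance-volume determination
    if attempt == 1:
        return f"implicit feedback at level 2 (<= {VOLUME_LEVEL_DB[2]} dB)"  # S1202
    if attempt == 2:
        return f"implicit feedback at level 3 (<= {VOLUME_LEVEL_DB[3]} dB)"  # S1204
    return 'explicit feedback: "Please speak in a louder voice"'            # S1206
```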
- the case where the volume level is defined in three stages and the number of recognition attempts is three has been described as an example.
- the selection of the second mode based on the number of recognition trials according to the present embodiment is not limited to such an example.
- the volume level and the number of recognition attempts may be changed as appropriate.
- the case where the user's speech volume is determined among the user's speech modes has been described as an example.
- the selection of the second mode based on the number of recognition trials according to the present embodiment is not limited to such an example.
- the output control unit 360 can also select the second mode by determining the utterance speed of the user and the clarity of the utterance.
- the output control unit 360 can select the second mode based on the fact that no improvement is recognized in the utterance mode of the user who has received the implicit feedback. The above control will be described in detail below with reference to FIG. In the following description, a case where the user's speech volume is determined will be described as an example.
- the output control unit 360 determines whether or not the user's utterance volume acquired from the input/output terminal 10 is insufficient as a volume for performing recognition (S1301). If the user's utterance volume is sufficient for recognition (S1301: No), the output control unit 360 ends the determination processing related to the user's utterance volume. On the other hand, if the user's utterance volume is insufficient (S1301: Yes), the output control unit 360 generates feedback information for executing implicit feedback at the level 2 volume shown in FIG. 6 (S1302).
- following the implicit feedback, the output control unit 360 compares the acquired utterance volume of the user with the previous utterance volume, and determines the degree of change in the utterance volume (S1303). At this time, the output control unit 360 can make the above determination by acquiring the analysis result of the user's previous utterance mode stored in the state storage unit 340.
- if, in step S1303, the user's utterance volume has changed sufficiently to reach a recognizable level (S1303: sufficient change), the output control unit 360 ends the determination processing related to the user's utterance volume.
- if, in step S1303, there is a change in the user's utterance volume but it has not reached a recognizable level (S1303: insufficient change), the output control unit 360 generates feedback information for executing implicit feedback at the level 3 volume shown in FIG. 6 (S1305).
- if, in step S1303, there is no change in the user's utterance volume, or the utterance volume has become lower (S1303: no change), the output control unit 360 determines that implicit feedback will not lead to improvement of the user's utterance mode, and causes the input/output terminal 10 to execute explicit feedback.
- the output control unit 360 can select the second mode in which explicit feedback is performed based on the degree of change in the user's speech mode.
- the above control by the output control unit 360 can improve the accuracy of recognition even when the user does not respond to the implicit feedback.
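A sketch of the FIG. 8 branching on the degree of change (S1303) follows; the recognizable-volume and minimum-change thresholds are illustrative assumptions.

```python
def next_feedback(prev_db: float, curr_db: float,
                  recognizable_db: float = 50.0,
                  min_change_db: float = 3.0) -> str:
    """FIG. 8 control after level-2 implicit feedback: branch on how much
    the user's utterance volume changed relative to the previous attempt."""
    if curr_db >= recognizable_db:
        return "done"                          # S1303: sufficient change
    if curr_db - prev_db >= min_change_db:
        return "implicit feedback at level 3"  # S1303: insufficient change
    return "explicit feedback"                 # S1303: no change
```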
- the case where the user's speech volume is determined among the user's speech modes has been described as an example.
- the selection of the second mode based on the degree of change of the user's speech mode according to the present embodiment is not limited to such an example.
- the output control unit 360 can also select the second mode by determining the utterance speed of the user and the clarity of the utterance.
- the output control unit 360 can select the second mode based on the fact that no improvement is recognized in the utterance position or utterance direction of the user who has received the implicit feedback.
- the user's utterance mode according to the present embodiment may include the user's utterance position and utterance direction.
- FIG. 9 is a flowchart showing a control flow of the output control unit 360 based on the user's utterance position or utterance direction.
- the output control unit 360 determines whether or not the user's speech volume acquired from the input / output terminal 10 is insufficient as a volume for performing recognition (S1401). If the user's utterance volume is sufficient for recognition (S1401: No), the output control unit 360 ends the determination process related to the user's utterance position and direction.
- on the other hand, if the user's utterance volume is insufficient (S1401: Yes), the output control unit 360 determines whether or not the user's utterance position is appropriate (S1402). That is, the output control unit 360 determines whether or not the insufficient utterance volume is caused by the utterance position. At this time, the output control unit 360 can make the above determination based on the user's utterance position information estimated by the position detection unit 350.
- if the output control unit 360 determines in step S1402 that the user's utterance position is not appropriate (S1402: No), the output control unit 360 generates feedback information for executing implicit feedback on the user's utterance position (S1404).
- the feedback information may be, for example, a command for causing the input/output terminal 10 to output a voice such as "It seems that the voice is far away".
- if the output control unit 360 determines that the user's utterance position is appropriate (S1402: Yes), the output control unit 360 subsequently determines whether or not the user's utterance direction is appropriate (S1403). That is, the output control unit 360 determines whether or not the insufficient utterance volume is caused by the utterance direction. At this time, the output control unit 360 can make the above determination based on the information on the user's utterance direction estimated by the position detection unit 350.
- if the output control unit 360 determines in step S1403 that the user's utterance direction is appropriate (S1403: Yes), the output control unit 360 ends the determination processing related to the user's utterance position and utterance direction.
- if the output control unit 360 determines in step S1403 that the user's utterance direction is not appropriate (S1403: No), the output control unit 360 generates feedback information for executing implicit feedback on the user's utterance direction (S1405).
- the feedback information may be, for example, a command that causes the input/output terminal 10 to output a voice such as "Are you speaking to me?".
- the feedback information generated in step S1405 may be a specification related to a speaker that performs audio output. For example, when the audio output unit 130 of the input / output terminal 10 includes a plurality of speakers, it is possible to give the user awareness of the utterance direction by limiting the speakers that output implicit feedback.
- the feedback information may include information for setting the directivity of the microphone array.
- after the implicit feedback is executed, the output control unit 360 determines whether or not the user's utterance position or utterance direction has improved (S1406).
- if the output control unit 360 determines that the user's utterance position or utterance direction has improved (S1406: Yes), the output control unit 360 ends the determination processing related to the utterance position and utterance direction.
- if the output control unit 360 determines that the user's utterance position or utterance direction has not improved (S1406: No), the output control unit 360 generates feedback information for executing explicit feedback on the user's utterance position or utterance direction. The feedback information may be, for example, a command that causes the input/output terminal 10 to output a voice such as "Please approach the microphone" or "Please turn in the direction of the microphone".
- the output control unit 360 can select the second mode in which explicit feedback is performed based on the user's utterance position or utterance direction.
- the selection of the second mode based on the user's utterance position or direction may be controlled in consideration of the volume of the environmental sound.
- for example, a case is assumed in which the input/output terminal 10 is an agent incorporated in a building, and the audio input unit 110 and the audio output unit 130 are provided in a plurality of rooms of the building.
- in this case, when the volume of the environmental sound in the room where the user is speaking is large, the information processing apparatus 30 may generate feedback information that guides the user to another room. That is, the information processing apparatus 30 can guide the user so that the utterance is made to another microphone, different from the microphone that detected the user's utterance.
- the output control unit 360 performs feedback control based on the user's utterance position or utterance direction, so that various feedbacks according to the specifications of the input / output terminal 10 can be realized.
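The position- and direction-based control of FIG. 9 can likewise be sketched as a short routine. The hint strings follow the examples above; collapsing "improvement" into a single boolean is a simplification of steps S1401 to S1406.

```python
from typing import List

def position_direction_feedback(volume_sufficient: bool, position_ok: bool,
                                direction_ok: bool,
                                improved_after_hint: bool) -> List[str]:
    """FIG. 9 control: hint implicitly at the cause of the low input level,
    then turn explicit if the user's position/direction does not improve."""
    if volume_sufficient:                 # S1401: No -> nothing to do
        return []
    actions = []
    if not position_ok:                   # S1402: No -> S1404
        actions.append('implicit: "It seems that the voice is far away"')
    elif not direction_ok:                # S1403: No -> S1405
        actions.append('implicit: "Are you speaking to me?"')
    else:                                 # S1403: Yes -> end determination
        return actions
    if not improved_after_hint:           # S1406: No -> explicit feedback
        actions.append('explicit: "Please approach / turn toward the microphone"')
    return actions
```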
- the output control unit 360 can control the feedback mode based on the analysis result of the user's speech mode.
- the utterance mode may include an utterance volume, an utterance speed, an utterance pitch, clarity of pronunciation, an utterance position, an utterance direction, an utterance content, and an environmental sound.
- the output control unit 360 can control the feedback mode based on the user attribute information.
- the user attribute information may be information obtained by the voice analysis unit 320 analyzing the user's utterance mode or information obtained from the result of voice recognition by the voice recognition unit 330.
- the user attribute information may include profile information such as the user's gender and age, and information such as the language used and the tendency of the speech mode.
- the output control unit 360 may select a feedback mode based on the tendency of the user's speech mode. For example, when the user specified by the analysis of the voice analysis unit 320 tends to have a low utterance volume, the output control unit 360 may preferentially select the second mode. Thus, it can be expected that the time until the user's utterance is recognized is shortened by the output control unit 360 selecting the mode based on the user's utterance tendency.
- the output control unit 360 may select a feedback mode based on a setting made by the user regarding the mode.
- the output control unit 360 can set the feedback mode according to the user setting specified by the analysis of the voice analysis unit 320.
- the output control unit 360 may select a feedback mode based on statistical information obtained from attribute information of a plurality of users.
- the output control unit 360 may acquire a tendency of an utterance mode of a user group having the attribute using specific attribute information as a key, and may select a mode based on the tendency.
- the above control is particularly effective in an environment where the input / output terminal 10 is used by an unspecified number of users.
- the output control unit 360 can control the feedback mode based on the user's emotion.
- the user's emotion may be information obtained by the voice analysis unit 320 analyzing the user's utterance mode.
- the output control unit 360 may select the first mode based on, for example, an estimation that the user is in an excited state. As described above, some users may be averse to explicit feedback. For this reason, when it is estimated that the user is in an excited state, the output control unit 360 can reduce the possibility of hurting the user's feelings by causing the input/output terminal 10 to perform implicit feedback.
- The output control unit 360 can control the feedback mode based on the content of the user's utterance.
- The content of the user's utterance may be information obtained from the result of voice recognition by the voice recognition unit 330.
- The output control unit 360 may select the second mode based on, for example, the estimation that the content of the user's utterance includes privacy information.
- That is, the output control unit 360 can set the second mode in order to prevent the leakage of privacy information to persons other than the user. Further, by performing the above control, the output control unit 360 can improve the recognition accuracy of the utterance related to the privacy information and bring the dialogue to an end in a shorter time.
- The output control unit 360 can control the feedback mode based on the detection of the presence of a third party around the user.
- The detection of a third party may be information obtained from the detection result by the position detection unit 350, or information obtained from the result of voice recognition by the voice recognition unit 330.
- The output control unit 360 may select the first mode based on, for example, the detection of the presence of a third party around the user. As described above, some users may feel that explicit feedback amounts to being instructed by the device. For such a user, receiving explicit feedback in front of surrounding people can be expected to worsen the impression of the input/output terminal 10 even further. For this reason, when the presence of a third party is detected around the user, the output control unit 360 can reduce the possibility of hurting the user's feelings by causing the input/output terminal 10 to perform implicit feedback.
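The selection heuristics described in this and the preceding subsections (emotion, privacy information, third-party presence) can be pictured as a single decision function. The sketch below is one possible combination; the priority order among conflicting signals is an assumption, since the disclosure describes the individual rules but not how they are arbitrated.

```python
def select_mode(is_excited: bool,
                contains_privacy_info: bool,
                third_party_present: bool) -> str:
    """Combine the mode-selection heuristics described above.

    Checking privacy content before emotion and third-party presence is an
    assumed priority order, not one given in the disclosure.
    """
    if contains_privacy_info:
        return "second"  # explicit feedback shortens the exchange, limiting leakage
    if is_excited or third_party_present:
        return "first"   # implicit feedback avoids hurting the user's feelings
    return "first"       # default to the less intrusive mode
```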
- The output control unit 360 can cause the input/output terminal 10 to execute feedback with an added improvement reason in the second mode, in which explicit feedback is performed. By controlling the feedback so that the reason for improving the utterance mode is also presented to the user, the output control unit 360 can soften the expression of the explicit feedback and reduce the possibility of hurting the user's feelings.
- FIG. 10 is a flowchart showing the flow of adding the reason for improvement by the output control unit 360.
- Referring to FIG. 10, the output control unit 360 first acquires the utterance analysis result from the voice analysis unit 320 and determines the number of detected voices (S1501). That is, the output control unit 360 determines whether or not the collected sound information includes a plurality of voices.
- When the output control unit 360 determines in step S1501 that only one person's voice is detected (S1501: one person's voice), it sets improvement reason 1 in the feedback information (S1502).
- Improvement reason 1 set in the feedback information may be, for example, the additional information "because the surrounding sound is too loud".
- When the output control unit 360 determines in step S1501 that the voices of a plurality of people are detected (S1501: voices of a plurality of people), it sets improvement reason 2 in the feedback information (S1503).
- Improvement reason 2 set in the feedback information may be, for example, additional information such as "Someone else is talking".
- When it is difficult for the output control unit 360 to determine the number of voices in step S1501 (S1501: difficult to identify), it sets improvement reason 3 in the feedback information (S1504).
- Improvement reason 3 set in the feedback information may be, for example, additional information such as "It is hard to hear".
- When any of the improvement reasons has been set in steps S1502 to S1504, the output control unit 360 subsequently generates feedback information for executing explicit feedback with the improvement reason added, and transmits the feedback information to the input/output terminal 10 (S1505).
- The feedback information generated in step S1505 may be, for example, information combining the output information "Please say a little louder" with the improvement reason.
- That is, the information generated in step S1505 may be output information such as "Please speak a little louder because the surrounding sound is too loud".
- As described above, the output control unit 360 can cause the input/output terminal 10 to perform feedback with an added improvement reason in the second mode, in which explicit feedback is performed.
- With the above control by the output control unit 360, it is possible to soften the expression of explicit feedback and realize a more natural dialogue.
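A minimal sketch of the reason-selection flow of FIG. 10 (S1501 to S1505) might look as follows; the function name and the "one/many/unknown" encoding of the voice-count judgment are hypothetical stand-ins for the three branches of step S1501.

```python
def build_explicit_feedback(num_voices: str) -> str:
    """Compose explicit feedback with an improvement reason (S1501-S1505).

    `num_voices` is "one", "many", or "unknown", standing in for the three
    outcomes of the voice-count judgment in step S1501.
    """
    reasons = {
        "one": "because the surrounding sound is too loud",   # reason 1 (S1502)
        "many": "because someone else is talking",            # reason 2 (S1503)
        "unknown": "because it is hard to hear you",          # reason 3 (S1504)
    }
    base_request = "Please speak a little louder"
    # S1505: combine the request with the selected reason.
    return f"{base_request} {reasons[num_voices]}."
```

For example, `build_explicit_feedback("one")` yields "Please speak a little louder because the surrounding sound is too loud.", matching the combined output information described above.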
- The output control unit 360 can control feedback by visual information in addition to feedback by audio output. Further, the output control unit 360 can add feedback by visual information based on the fact that the user's utterance mode does not change sufficiently.
- The above control by the output control unit 360 will be described in detail with reference to FIG. 11.
- Referring to FIG. 11, the output control unit 360 first determines whether or not the user's utterance volume is insufficient for recognition (S1601). If the user's utterance volume is sufficient for recognition (S1601: No), the output control unit 360 ends the control related to the addition of feedback by visual information. On the other hand, when the user's utterance volume is insufficient (S1601: Yes), the output control unit 360 generates feedback information for executing implicit feedback at the level 2 volume shown in FIG. 6 (S1602).
- After the user's utterance is acquired again following the feedback in step S1602, the output control unit 360 determines again whether or not the acquired utterance volume is insufficient for recognition (S1603). If the user's utterance volume is sufficient for recognition (S1603: No), the output control unit 360 ends the control related to the addition of feedback by visual information.
- When the user's utterance volume is insufficient again (S1603: Yes), the output control unit 360 generates feedback information for executing implicit feedback at the level 3 volume shown in FIG. 6 (S1604). Further, the output control unit 360 generates feedback information for executing implicit feedback by visual information (S1605).
- The feedback information for executing the implicit feedback by visual information may be, for example, a command for causing the display unit 140 of the input/output terminal 10 to display text information similar to the feedback by voice output. The feedback information may also be a command for executing feedback using an image or animation described later.
- After the implicit feedback in steps S1604 and S1605 is executed and the user's utterance is acquired again, the output control unit 360 determines again whether or not the acquired utterance volume is insufficient for recognition (S1606). If the user's utterance volume is sufficient for recognition (S1606: No), the output control unit 360 ends the control related to the addition of feedback by visual information.
- When the user's utterance volume is insufficient yet again (S1606: Yes), the output control unit 360 generates feedback information for executing explicit feedback by voice output (S1607). Further, the output control unit 360 generates feedback information for executing explicit feedback by visual information (S1608).
- As described above, the output control unit 360 can control feedback by visual information in addition to feedback by audio output. Further, the output control unit 360 can add visual feedback step by step, in the same manner as the feedback control by audio output. By performing the above control, the output control unit 360 can improve recognition accuracy while keeping implicit feedback by voice as the basis.
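The staged escalation of FIG. 11 (S1601 to S1608) can be summarized as a short control routine. The sketch below assumes three injected callables standing in for terminal and server behavior; their names and signatures are illustrative only, not part of the disclosure.

```python
def run_feedback_escalation(get_utterance_volume, is_sufficient, emit):
    """Step through the escalation of FIG. 11 (S1601-S1608).

    `get_utterance_volume` returns the volume of the next captured utterance,
    `is_sufficient` judges whether that volume can be recognized, and `emit`
    performs one feedback action (it accepts an optional `level` keyword).
    All three are hypothetical callables.
    """
    if is_sufficient(get_utterance_volume()):           # S1601
        return
    emit("implicit_voice", level=2)                     # S1602
    if is_sufficient(get_utterance_volume()):           # S1603
        return
    emit("implicit_voice", level=3)                     # S1604
    emit("implicit_visual")                             # S1605: add visual info
    if is_sufficient(get_utterance_volume()):           # S1606
        return
    emit("explicit_voice")                              # S1607
    emit("explicit_visual")                             # S1608
```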
- The visual information may include text, symbols, avatars, indicators, or image changes.
- FIG. 12 is an example of an indicator used for implicit feedback based on visual information according to the present embodiment.
- Referring to FIG. 12A, two indicators i1 and i2 are displayed on the display unit 140 of the input/output terminal 10. Here, the indicator i1 may be an indicator that shows the user's utterance volume, and the indicator i2 may be an indicator that shows the output volume of the input/output terminal 10.
- Each of the indicators i1 and i2 may change the proportion of the gradation toward the upper portion of the display unit 140 in accordance with a change in the user's utterance volume or the output volume of the input/output terminal 10. That is, the louder the user's utterance volume, the more the gradation of the indicator i1 spreads toward the upper part of the screen of the display unit 140, and the louder the output volume of the input/output terminal 10, the more the gradation of the indicator i2 spreads toward the upper part of the screen.
- FIG. 12B is a diagram showing an example of other indicators. Referring to FIG. 12B, two indicators i3 and i4 are displayed on the display unit 140 of the input/output terminal 10. Here, the indicator i3 may be an indicator that shows the user's utterance volume, and the indicator i4 may be an indicator that shows the output volume of the input/output terminal 10.
- Each of the indicators i3 and i4 may change the number of bars indicating the volume level toward the center of the display unit 140 in accordance with a change in the user's utterance volume or the output volume of the input/output terminal 10. That is, the louder the user's utterance volume, the more the number of bars of the indicator i3 increases toward the center of the screen of the display unit 140, and the louder the output volume of the input/output terminal 10, the more the number of bars of the indicator i4 increases toward the center of the screen.
- By checking the indicators displayed on the display unit 140, the user can compare the output volume of the input/output terminal 10 with his or her own utterance volume. Thereby, the user recognizes that the utterance volume is insufficient, and an effect of improving the utterance mode can be expected.
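As one way to realize such an indicator, the fill proportion of the gradation can be computed from a volume value. In the sketch below, the decibel range is an illustrative assumption; the disclosure only states that the gradation spreads as the volume increases.

```python
def gradation_fill_ratio(volume_db: float,
                         floor_db: float = 30.0,
                         ceil_db: float = 80.0) -> float:
    """Map a volume to the proportion of the indicator filled with gradation.

    The dB range [floor_db, ceil_db] is an arbitrary assumption.
    """
    ratio = (volume_db - floor_db) / (ceil_db - floor_db)
    return max(0.0, min(1.0, ratio))

# The user can compare the two bars because both volumes use the same mapping.
user_fill = gradation_fill_ratio(42.0)      # indicator i1 (user's utterance)
terminal_fill = gradation_fill_ratio(65.0)  # indicator i2 (terminal output)
```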
- The avatar shown in FIG. 13 may be an image or animation for performing implicit feedback on the user's utterance direction.
- Referring to FIG. 13A, an avatar a1 is displayed on the display unit 140 of the input/output terminal 10, and the voice input unit 110 is arranged at the bottom of the input/output terminal 10. Here, the avatar a1 may be an example of an avatar displayed when the user's utterance direction is appropriate.
- FIG. 13B is an example of an avatar displayed when the user's utterance direction is not appropriate. In FIG. 13B, the voice input unit 110 is arranged on the left side of the input/output terminal 10. The avatar a2 is displayed on the left side of the display unit 140, with its line of sight directed toward the voice input unit 110.
- FIG. 13C, like FIG. 13B, also shows an example of an avatar displayed when the user's utterance direction is not appropriate. In FIG. 13C, the voice input unit 110 is arranged on the right side of the input/output terminal 10. The avatar a3 is displayed on the right side of the display unit 140, with its line of sight directed toward the voice input unit 110.
- The graphic g1 shown in FIG. 14 may be an image or animation for performing implicit feedback on the user's utterance direction.
- Referring to FIG. 14A, a graphic g1 is displayed on the display unit 140 of the input/output terminal 10, and the voice input unit 110 is arranged at the bottom of the input/output terminal 10. Here, the graphic g1 may be an example of a graphic displayed when the user's utterance direction is appropriate.
- FIG. 14B is an example of a graphic displayed when the user's utterance direction is not appropriate. In FIG. 14B, the voice input unit 110 is arranged on the left side of the input/output terminal 10. The graphic g2 is displayed on the left side of the display unit 140 and is deformed so as to spread toward the voice input unit 110.
- FIG. 14C, like FIG. 14B, also shows an example of a graphic displayed when the user's utterance direction is not appropriate. In FIG. 14C, the voice input unit 110 is arranged on the right side of the input/output terminal 10. The graphic g3 is displayed on the right side of the display unit 140 and is deformed so as to spread toward the voice input unit 110.
- As described above, when the user's utterance direction is not appropriate, implicit feedback is performed by the graphic indicating the position of the voice input unit 110.
- An effect of improving the user's utterance direction can be expected when the user visually recognizes the change of the graphic image or animation.
- Further, the position of the voice input unit 110 may be indicated by changing the gradation of the color displayed on the entire screen of the display unit 140.
- The avatar shown in FIG. 15 may be an image or animation for performing implicit feedback on the user's utterance position.
- Referring to FIG. 15A, an avatar a4 is displayed on the display unit 140 of the input/output terminal 10. Here, the avatar a4 may be an example of an avatar displayed when the user's utterance position is appropriate.
- FIG. 15B is an example of an avatar displayed when the user's utterance position is not appropriate (when the distance is too great). Referring to FIG. 15B, it can be seen that the avatar a5 is displayed smaller than the avatar a4 and that its expression has changed. In this way, in the example illustrated in FIG. 15B, implicit feedback on the user's utterance position is performed by changing the size and expression of the avatar a5.
- Although FIG. 15B shows an example in which the avatar a5 has a gloomy expression, other expressions may be used for the avatar a5. For example, the line of sight of the avatar a5 may indicate that the user's utterance position is far away.
- FIG. 15C, like FIG. 15B, also shows an example of an avatar displayed when the user's utterance position is not appropriate. In FIG. 15C, the outline of the avatar a6 is displayed fainter than that of the avatar a4. In this way, implicit feedback on the user's utterance position is performed by changing the density of the outline of the avatar a6.
- The graphics including the arrow shown in FIG. 16 may be images or animations for performing implicit feedback on the user's utterance direction or utterance position.
- Referring to FIG. 16A, graphics g4 and g5 are displayed on the display unit 140 of the input/output terminal 10. The graphic g4 is displayed as an arrow indicating the position of the voice input unit 110, and the graphic g5 is illustrated as an ear icon. In this way, in the example of FIG. 16A, implicit feedback on the user's utterance direction is performed by indicating the position of the voice input unit 110 with an arrow and an ear-shaped graphic.
- In FIG. 16B, unlike FIG. 16A, feedback indicating the relative positions of the input/output terminal 10 and the user is performed. Referring to FIG. 16B, graphics g6 to g9 are shown on the display unit 140 of the input/output terminal 10. The graphic g6 is displayed as an arrow indicating the position of the voice input unit 110, and the graphics g7 and g8 are icons indicating the input/output terminal 10 and the user, respectively. In this way, in the example illustrated in FIG. 16B, implicit feedback on the user's utterance direction is performed by indicating the position of the voice input unit 110 with an arrow.
- Further, as shown by the graphic g9, feedback indicating a sound source other than the user may be performed. An effect of improving the user's utterance position can be expected when the user visually recognizes the graphic indicating the sound source.
- Next, a hardware configuration example common to the input/output terminal 10 and the information processing apparatus 30 according to the present disclosure will be described. FIG. 17 is a block diagram illustrating a hardware configuration example of the input/output terminal 10 and the information processing apparatus 30 according to the present disclosure.
- Referring to FIG. 17, the input/output terminal 10 and the information processing apparatus 30 include, for example, a CPU 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input unit 878, an output unit 879, a storage unit 880, a drive 881, a connection port 882, and a communication unit 883.
- The hardware configuration shown here is an example, and some of the components may be omitted. Components other than those shown here may also be included.
- The CPU 871 functions as, for example, an arithmetic processing unit or a control unit, and controls all or part of the operation of each component based on various programs recorded in the ROM 872, the RAM 873, the storage unit 880, or a removable recording medium 901.
- The ROM 872 is a means for storing programs read by the CPU 871, data used for computation, and the like.
- The RAM 873 temporarily or permanently stores, for example, programs read by the CPU 871 and various parameters that change as appropriate when those programs are executed.
- The CPU 871, the ROM 872, and the RAM 873 are connected to one another via, for example, a host bus 874 capable of high-speed data transmission.
- The host bus 874 is connected, for example via a bridge 875, to an external bus 876 whose data transmission speed is relatively low.
- The external bus 876 is connected to various components via an interface 877.
- (Input unit 878) For the input unit 878, for example, a mouse, a keyboard, a touch panel, buttons, switches, levers, or the like is used. Furthermore, a remote controller capable of transmitting control signals using infrared rays or other radio waves may be used as the input unit 878.
- (Output unit 879) The output unit 879 is a device that can visually or audibly notify the user of acquired information, such as a display device (e.g., a CRT (Cathode Ray Tube), LCD, or organic EL display), an audio output device (e.g., a speaker or headphones), a printer, a mobile phone, or a facsimile.
- (Storage unit 880) The storage unit 880 is a device for storing various kinds of data. As the storage unit 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
- (Drive 881) The drive 881 is a device that reads information recorded on a removable recording medium 901 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium 901.
- The removable recording medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, or one of various semiconductor storage media. Of course, the removable recording medium 901 may also be, for example, an IC card on which a non-contact IC chip is mounted, or an electronic device.
- (Connection port 882) The connection port 882 is a port for connecting an external connection device 902, such as a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, an RS-232C port, or an optical audio terminal.
- The external connection device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
- (Communication unit 883) The communication unit 883 is a communication device for connecting to a network 903, and is, for example, a communication card for wired or wireless LAN, Bluetooth (registered trademark), or WUSB (Wireless USB), a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication.
- As described above, the information processing apparatus 30 can select the feedback mode for the user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition.
- The plurality of modes may include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- The utterance mode may include utterance volume, utterance speed, utterance pitch, clarity of pronunciation, utterance position, utterance direction, utterance content, and environmental sound. According to such a configuration, a more natural dialogue between the user and the system can be realized.
- (1) An information processing apparatus including: a control unit that selects a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- (2) The information processing apparatus according to (1), wherein in the first mode, feedback including an indirect improvement method for the user's utterance mode is performed, and in the second mode, feedback indicating a direct improvement method for the user's utterance mode is performed.
- (3) The information processing apparatus according to (2), wherein the information related to the user's utterance recognition includes user information, content information, environmental information, and device information.
- (4) The information processing apparatus according to (3), wherein the user information includes the user's utterance mode, and the control unit selects the first mode or the second mode based on the user's utterance mode.
- (5) The information processing apparatus according to (4), wherein the utterance mode includes at least one of utterance volume, utterance speed, utterance pitch, clarity of pronunciation, utterance position, or utterance direction.
- (6) The information processing apparatus according to (4) or (5), wherein the control unit selects the second mode based on the fact that no improvement is recognized in the utterance mode of the user who has received feedback in the first mode.
- (7) The information processing apparatus according to any one of (4) to (6), wherein the control unit selects the second mode based on the fact that the user's utterance is not recognized within a predetermined number of times after feedback in the first mode is performed.
- (8) The information processing apparatus according to any one of (3) to (7), wherein the user information includes the content of the user's utterance, and the control unit selects the first mode or the second mode based on the content of the user's utterance.
- (9) The information processing apparatus according to (8), wherein the control unit selects the second mode based on the estimation that the content of the user's utterance includes privacy information.
- (10) The information processing apparatus according to any one of (3) to (9), wherein the control unit selects the first mode based on the estimation from the environmental information that another person different from the user is present.
- (11) The information processing apparatus according to any one of (3) to (10), wherein the user information includes the user's attribute information, and the control unit selects the first mode or the second mode based on the user's attribute information.
- (12) The information processing apparatus according to any one of (3) to (11), wherein the user information includes the user's emotion information, and the control unit selects the first mode or the second mode based on the user's emotion information estimated from the user's utterance.
- (13) The information processing apparatus according to any one of (4) to (12), wherein in the first mode, feedback is performed at a volume with higher recognition accuracy than the user's utterance volume.
- (14) The information processing apparatus according to any one of (4) to (13), wherein in the first mode, feedback is performed at a speed with higher recognition accuracy than the user's utterance speed.
- (15) The information processing apparatus according to any one of (4) to (14), wherein in the first mode, feedback is performed at a pitch with higher recognition accuracy than the pitch of the user's utterance.
- (16) The information processing apparatus according to any one of (2) to (15), wherein in the second mode, feedback with an added reason for improving the utterance mode is performed.
- (17) The information processing apparatus according to any one of (2) to (16), wherein the feedback includes feedback by visual information.
- (18) The information processing apparatus according to any one of (2) to (17), wherein in the second mode, feedback to the effect that the utterance should be made to a sensor different from the sensor that detected the user's utterance is performed.
- (19) The information processing apparatus according to (11), wherein the user's attribute information includes at least one of gender, age, language used, or tendency of the utterance mode.
- (20) The information processing apparatus according to (12), wherein the control unit selects the first mode based on the estimation that the user is in an excited state.
- (21) The information processing apparatus according to any one of (1) to (20), wherein in the first mode, feedback by an artificial voice corresponding to the user's utterance mode is performed.
- (22) The information processing apparatus according to (17), wherein the visual information includes text, symbols, avatars, indicators, or image changes.
- (23) An information processing method including: a processor selecting a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- (24) A program for causing a computer to function as an information processing apparatus including: a control unit that selects a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
Abstract
Description
1. Feedback control according to the present disclosure
1.1. Feedback in speech recognition technology
1.2. System configuration example according to the present disclosure
1.3. Input/output terminal 10 according to the present disclosure
1.4. Information processing apparatus 30 according to the present disclosure
2. Embodiment
2.1. Feedback modes
2.2. Examples of implicit feedback
2.3. Switching of modes related to feedback
2.4. Explicit feedback with an added improvement reason
2.5. Additional control of feedback by visual information
2.6. Examples of feedback by visual information
3. Hardware configuration example of the input/output terminal 10 and the information processing apparatus 30
4. Summary
<<1.1. Feedback in speech recognition technology>>
In recent years, various devices using speech recognition technology have been provided. Devices using speech recognition technology are widely used in settings close to general consumers, including information processing apparatuses such as PCs (Personal Computers) and smartphones, as well as home appliances and in-vehicle devices. In addition, devices using speech recognition technology are expected to be used in commercial and public facilities in the future as agents that provide services to customers in place of humans.
First, a configuration example of the information processing system according to the present disclosure will be described with reference to FIG. 1. Referring to FIG. 1, the information processing system according to the present disclosure includes an input/output terminal 10 and an information processing apparatus 30. The input/output terminal 10 and the information processing apparatus 30 are connected via a network 20 so that they can communicate with each other.
Next, the input/output terminal 10 according to the present disclosure will be described in detail. As described above, the input/output terminal 10 according to the present disclosure has a function of collecting the user's utterance. The input/output terminal 10 also has a function of presenting to the user feedback information controlled by the information processing apparatus 30 in accordance with information related to the user's utterance recognition.
The voice input unit 110 has a function of collecting the user's utterance and environmental sounds. The voice input unit 110 may include a microphone that converts the user's utterance and environmental sounds into electric signals. The voice input unit 110 may also include a microphone array having directivity for collecting sound from a specific direction. With such a microphone array, the voice input unit 110 can collect the user's utterance separately from environmental sounds. Furthermore, the voice input unit 110 may include a plurality of microphones or microphone arrays. With this configuration, the position, orientation, and movement of a sound source can be detected with higher accuracy.
The sensor unit 120 has a function of detecting various kinds of information about objects including the user. The sensor unit 120 may include a plurality of sensors for detecting the above information. The sensor unit 120 may include an imaging element for detecting the user's movement, an infrared sensor, a temperature sensor, and the like. The sensor unit 120 may also have a function of performing image recognition based on captured images. For example, the sensor unit 120 can identify the user who is speaking by detecting the movement of the user's mouth.
The voice output unit 130 has a function of converting electric signals into sound and outputting the sound. Specifically, the voice output unit 130 has a function of giving feedback to the user by voice output based on the feedback information controlled by the information processing apparatus 30. The voice output unit 130 may include a speaker having the above function. The speaker included in the voice output unit 130 may also have a function of realizing voice output with directivity toward a specific direction, distance, and the like. By including a speaker having this function, the voice output unit 130 can, for example, perform voice output according to the user's position detected by the sensor unit 120. The voice output unit 130 may also include a plurality of speakers. When the voice output unit 130 includes a plurality of speakers, feedback according to the user's position can be executed by controlling which speaker outputs the feedback. Details of this function will be described later.
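As a minimal sketch of speaker selection according to the user's position, the following chooses the speaker nearest to the detected position. The speaker layout and the nearest-distance rule are assumptions introduced for illustration; the disclosure states only that the speaker that outputs the feedback can be controlled according to the user's position.

```python
import math

# Hypothetical speaker layout: identifier -> (x, y) position in meters.
SPEAKERS = {"living_room": (0.0, 0.0), "kitchen": (4.0, 1.5), "hallway": (2.0, 5.0)}

def nearest_speaker(user_pos: tuple[float, float]) -> str:
    """Pick the speaker closest to the detected user position.

    Nearest-distance selection is one simple realization of position-aware
    feedback output, not the method fixed by the disclosure.
    """
    return min(SPEAKERS, key=lambda sid: math.dist(SPEAKERS[sid], user_pos))

print(nearest_speaker((3.5, 1.0)))  # -> "kitchen"
```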
The display unit 140 has a function of giving feedback to the user by visual information based on the feedback information controlled by the information processing apparatus 30. This function may be realized by, for example, a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, or an OLED (Organic Light Emitting Diode) device. The display unit 140 may also have a function as an operation unit that accepts user operations. The function as an operation unit may be realized by, for example, a touch panel.
The terminal control unit 150 has a function of controlling each component of the input/output terminal 10 described above. The terminal control unit 150 may have a function of, for example, acquiring various kinds of information detected by the voice input unit 110 and the sensor unit 120 and transmitting them to the information processing apparatus 30 via the server communication unit 160 described later. The terminal control unit 150 may also acquire information related to feedback from the information processing apparatus 30 via the server communication unit 160 and control the voice output unit 130 and the display unit 140 based on that information. Further, when the input/output terminal 10 includes an application that executes processing based on the user's utterance, the terminal control unit 150 can control the processing of the application.
The server communication unit 160 has a function of communicating information with the information processing apparatus 30 via the network 20. Specifically, the server communication unit 160 transmits the information acquired by the voice input unit 110 and the sensor unit 120 to the information processing apparatus 30 under the control of the terminal control unit 150. The server communication unit 160 also passes the feedback information acquired from the information processing apparatus 30 to the terminal control unit 150.
Next, the information processing apparatus 30 according to the present disclosure will be described in detail. The information processing apparatus 30 according to the present disclosure has a function of controlling the feedback that the input/output terminal 10 executes for the user's utterance mode, in accordance with information related to the recognition of the user's utterance collected by the input/output terminal 10. The information processing apparatus 30 can select the feedback mode for the user's utterance mode from a plurality of modes in accordance with the information related to the user's utterance recognition. The plurality of modes may include a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed. The utterance mode may include utterance volume, utterance speed, utterance pitch, clarity of pronunciation, utterance position, utterance direction, utterance content, and environmental sound.
The terminal communication unit 310 has a function of communicating information with the input/output terminal 10 via the network 20. Specifically, the terminal communication unit 310 passes the various kinds of information acquired from the input/output terminal 10 to the voice analysis unit 320, the voice recognition unit 330, and the position detection unit 350. The terminal communication unit 310 also has a function of acquiring the feedback information controlled by the output control unit 360 and transmitting it to the input/output terminal 10. When the information processing apparatus 30 controls a plurality of input/output terminals 10, the terminal communication unit 310 may communicate information with the plurality of input/output terminals 10 via the network 20.
The voice analysis unit 320 has a function of acquiring the information collected by the input/output terminal 10 and analyzing it. The voice analysis unit 320 can analyze, for example, information on the user's utterance mode, including the user's utterance volume, utterance speed, utterance pitch, and clarity of pronunciation. The user's utterance mode may include environmental sounds collected together with the user's utterance. The voice analysis unit 320 may also have a function of separating the user's utterance from environmental sounds in the information collected by the input/output terminal 10. The separation of the user's utterance from environmental sounds may be performed based on information such as the frequency band of the human voice, or may be realized by VAD (Voice Activity Detection) technology or the like. Further, when the state storage unit 340 described later stores personal information on a given user's voice, the voice analysis unit 320 can also use that information to separate the user's utterance from environmental sounds.
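As a toy stand-in for the separation of the user's utterance from environmental sounds, the following energy-threshold VAD flags voiced frames. The disclosure mentions VAD technology without fixing an algorithm, so the frame length and threshold here are arbitrary assumptions.

```python
import numpy as np

def voiced_frames(signal: np.ndarray, sample_rate: int,
                  frame_ms: int = 30, energy_threshold: float = 1e-3) -> list[bool]:
    """Toy energy-based voice activity detection.

    Splits the signal into fixed-length frames and flags each frame whose
    mean squared amplitude exceeds the (assumed) threshold.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    flags = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        flags.append(float(np.mean(frame ** 2)) > energy_threshold)
    return flags
```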
The voice recognition unit 330 has a function of recognizing the user's utterance based on the voice collected by the input/output terminal 10 or the user's voice separated by the voice analysis unit 320. Specifically, the voice recognition unit 330 may have a function of converting the acquired voice information into phonemes and then into text. Since various methods may be used for the voice recognition by the voice recognition unit 330, a detailed description is omitted here.
The state storage unit 340 has a function of storing the processing results of the voice analysis unit 320 and the voice recognition unit 330. The state storage unit 340 can store, for example, information on the user's utterance mode analyzed by the voice analysis unit 320 and the results of voice recognition by the voice recognition unit 330. The state storage unit 340 may also store the user's attribute information, including features of the user's voice, and tendencies of the user's utterance mode.
The position detection unit 350 has a function of estimating the user's utterance position and utterance direction based on the information acquired by the input/output terminal 10. The position detection unit 350 can estimate the user's utterance position and utterance direction based on the voice information collected by the voice input unit 110 of the input/output terminal 10 and information collected from various sensors, including image information acquired by the sensor unit 120. The position detection unit 350 may also estimate the positions of persons other than the speaking user and of objects based on the above information.
The output control unit 360 has a function of acquiring various kinds of information from the voice analysis unit 320, the voice recognition unit 330, the state storage unit 340, and the position detection unit 350, and controlling the feedback for the user's utterance. Based on the above information, the output control unit 360 selects the feedback mode for the user's utterance mode from a plurality of modes. The plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed. The output control unit 360 may also generate information for feedback by voice or visual information to be performed by the input/output terminal 10 and transmit it to the input/output terminal 10 via the terminal communication unit 310. The output control unit 360 may generate the above feedback information by searching the output DB 370 described later for feedback information matching the conditions. Details of the feedback control by the output control unit 360 will be described later.
The output DB 370 may be a database that accumulates information for feedback by voice or visual information to be performed by the input/output terminal 10. The output DB 370 may store, for example, voice information related to feedback, or text information to be output as voice by the voice synthesis function of the input/output terminal 10. The output DB 370 may also store image information and text information related to feedback by visual information performed by the input/output terminal 10.
<<2.1. Feedback modes>>
The outline of feedback control according to the present disclosure has been described above. Next, feedback control according to the embodiment of the present disclosure will be described in detail. The information processing apparatus 30 according to the present embodiment can select the feedback mode for the user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition.
Implicit feedback is feedback that includes an indirect improvement method for the user's utterance mode. That is, in implicit feedback, the method of improving the utterance mode is not directly presented to the user; instead, feedback is performed by changing the output mode of the input/output terminal 10. Here, implicit feedback according to the present embodiment may be defined as feedback in an utterance mode with higher recognition accuracy than the user's utterance mode. The above recognition accuracy may be the recognition accuracy of the user's utterance by the input/output terminal 10. In other words, in the first mode, in which implicit feedback is performed, feedback is given in the utterance mode expected of the user.
Explicit feedback, on the other hand, may be feedback that indicates a direct improvement method for the user's utterance mode. That is, unlike implicit feedback, which changes the output mode of the input/output terminal 10, explicit feedback may directly show the user an improvement method for raising the recognition accuracy of the input/output terminal 10. Therefore, in the second mode, in which explicit feedback is performed, a concrete improvement method that the user can take is presented so that the user's utterance can be recognized. For example, when the user's utterance volume is low, in the second mode a voice output such as "Please speak louder" may be performed. Also, for example, when the user's utterance speed is too fast, in the second mode a voice output such as "Please speak more slowly" may be performed. As described above, in the second mode, feedback that prompts the user to improve the utterance mode is performed by clearly indicating the improvement means the user can take.
Next, concrete examples of implicit feedback according to the present embodiment will be described. In the present embodiment, various kinds of implicit feedback may be performed according to the user's utterance mode. FIG. 5 is a flowchart showing the flow of implicit feedback control by the output control unit 360 according to the present embodiment.
Next, the selection of the mode related to feedback by the output control unit 360 of the present embodiment will be described. As described above, the information processing apparatus 30 according to the present embodiment can select the feedback mode for the user's utterance mode in accordance with information related to the user's utterance recognition. Here, the information related to the user's utterance recognition may include, for example, user information, content information, environmental information, and device information.
First, mode selection based on the number of recognition attempts will be described. The output control unit 360 according to the present embodiment can select the second mode, in which explicit feedback is performed, based on the fact that the user's utterance is not recognized within a predetermined number of times. The predetermined number of times may be defined in various ways according to the specifications of the system or application. The predetermined number of times according to the present embodiment may be, for example, the number of times an input related to the user's utterance was detected but not recognized (the number of recognition failures). It may also be the number of times the input waiting state related to recognition timed out (the number of timeouts), or the number of the user's utterances (the number of utterances). Furthermore, the predetermined number of times may be the total of the examples shown above. Hereinafter, the above control will be described in detail with reference to FIGS. 6 and 7. In the following description, the case of judging the user's utterance volume is used as an example.
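The counting variants described above (recognition failures, timeouts, utterances, or their total) can be captured in a small counter class. The limit of three attempts in this sketch is an illustrative assumption, not a value given in the disclosure.

```python
class RecognitionAttemptCounter:
    """Track failed attempts and switch to explicit feedback past a limit.

    Counting failures, timeouts, and utterances together follows the
    "total of the examples shown above" variant.
    """

    def __init__(self, limit: int = 3):
        self.limit = limit          # assumed predetermined number of times
        self.failures = 0           # detected but not recognized
        self.timeouts = 0           # input waiting state timed out
        self.utterances = 0         # number of user utterances

    def record(self, event: str) -> None:
        if event == "failure":
            self.failures += 1
        elif event == "timeout":
            self.timeouts += 1
        elif event == "utterance":
            self.utterances += 1

    def mode(self) -> str:
        total = self.failures + self.timeouts + self.utterances
        return "second" if total >= self.limit else "first"
```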
Next, mode selection based on the degree of change in the user's utterance mode will be described. The output control unit 360 according to the present embodiment can select the second mode based on the fact that no improvement is recognized in the utterance mode of the user who has received implicit feedback. Hereinafter, the above control will be described in detail with reference to FIG. 8. In the following description, the case of judging the user's utterance volume is used as an example.
Next, mode selection based on the user's utterance position or utterance direction will be described. The output control unit 360 according to the present embodiment can select the second mode based on the fact that no improvement is recognized in the utterance position or utterance direction of the user who has received implicit feedback. As described above, the user's utterance mode according to the present embodiment may include the user's utterance position and utterance direction.
Next, mode selection based on the analysis of the user's utterance mode will be described. The output control unit 360 according to the present embodiment can control the feedback mode based on the analysis result of the user's utterance mode. The utterance mode may include utterance volume, utterance speed, utterance pitch, clarity of pronunciation, utterance position, utterance direction, utterance content, and environmental sound.
First, mode selection based on the user's attribute information will be described. The output control unit 360 according to the present embodiment can control the feedback mode based on the user's attribute information. The user's attribute information may be information obtained by the voice analysis unit 320 analyzing the user's utterance mode, or information obtained from the result of voice recognition by the voice recognition unit 330. The user's attribute information may include profile information such as the user's gender and age, and information such as the language used and the tendency of the utterance mode.
Next, mode selection based on the user's emotion will be described. The output control unit 360 according to the present embodiment can control the feedback mode based on the user's emotion. The user's emotion may be information obtained by the voice analysis unit 320 analyzing the user's utterance mode.
Next, mode selection based on the content of the user's utterance will be described. The output control unit 360 according to the present embodiment can control the feedback mode based on the content of the user's utterance. The content of the user's utterance may be information obtained from the result of voice recognition by the voice recognition unit 330.
Next, mode selection based on environmental information will be described. The output control unit 360 according to the present embodiment can control the feedback mode based on the detection of the presence of a third party around the user. The detection of a third party may be information obtained from the detection result by the position detection unit 350, or information obtained from the result of voice recognition by the voice recognition unit 330.
The selection of the mode related to feedback by the output control unit 360 has been described above. Next, explicit feedback with an added improvement reason according to the present embodiment will be described. The output control unit 360 according to the present disclosure can cause the input/output terminal 10 to execute feedback with an added improvement reason in the second mode, in which explicit feedback is performed. By controlling the feedback so that the reason for improving the utterance mode is also presented to the user, the output control unit 360 can soften the expression of the explicit feedback and reduce the possibility of hurting the user's feelings.
Next, the addition of feedback by visual information according to the present embodiment will be described. The output control unit 360 according to the present embodiment can control feedback by visual information in addition to feedback by voice output. The output control unit 360 can also add feedback by visual information based on the fact that the user's utterance mode does not change sufficiently. Hereinafter, the above control by the output control unit 360 will be described in detail with reference to FIG. 11.
The control of feedback by visual information according to the present embodiment has been described above. Hereinafter, examples of feedback by visual information according to the present embodiment will be described with reference to FIGS. 12 to 16. The visual information may include text, symbols, avatars, indicators, or image changes.
FIG. 12 shows an example of an indicator used for implicit feedback by visual information according to the present embodiment. Referring to FIG. 12A, two indicators i1 and i2 are displayed on the display unit 140 of the input/output terminal 10. Here, the indicator i1 may be an indicator showing the user's utterance volume, and the indicator i2 may be an indicator showing the output volume of the input/output terminal 10. Each of the indicators i1 and i2 may change the proportion of the gradation toward the upper portion of the display unit 140 in accordance with a change in the user's utterance volume or the output volume of the input/output terminal 10. That is, the louder the user's utterance volume, the more the gradation of the indicator i1 spreads toward the upper part of the screen of the display unit 140, and the louder the output volume of the input/output terminal 10, the more the gradation of the indicator i2 spreads toward the upper part of the screen.
Next, an example of an avatar used for implicit feedback by visual information according to the present embodiment will be described with reference to FIG. 13. The avatar shown in FIG. 13 may be an image or animation for performing implicit feedback on the user's utterance direction. Referring to FIG. 13A, an avatar a1 is displayed on the display unit 140 of the input/output terminal 10, and the voice input unit 110 is arranged at the bottom of the input/output terminal 10. Here, the avatar a1 may be an example of an avatar displayed when the user's utterance direction is appropriate.
Next, an example of a graphic used for implicit feedback by visual information according to the present embodiment will be described with reference to FIG. 14. The graphic g1 shown in FIG. 14 may be an image or animation for performing implicit feedback on the user's utterance direction. Referring to FIG. 14A, a graphic g1 is displayed on the display unit 140 of the input/output terminal 10, and the voice input unit 110 is arranged at the bottom of the input/output terminal 10. Here, the graphic g1 may be an example of a graphic displayed when the user's utterance direction is appropriate.
Next, implicit feedback on the utterance position by an avatar will be described with reference to FIG. 15. The avatar shown in FIG. 15 may be an image or animation for performing implicit feedback on the user's utterance position. Referring to FIG. 15A, an avatar a4 is displayed on the display unit 140 of the input/output terminal 10. Here, the avatar a4 may be an example of an avatar displayed when the user's utterance position is appropriate.
Next, implicit feedback on the utterance direction or utterance position by an arrow will be described with reference to FIG. 16. The graphics including the arrow shown in FIG. 16 may be images or animations for performing implicit feedback on the user's utterance direction or utterance position.
110 Voice input unit
120 Sensor unit
130 Voice output unit
140 Display unit
150 Terminal control unit
160 Server communication unit
20 Network
30 Information processing apparatus
310 Terminal communication unit
320 Voice analysis unit
330 Voice recognition unit
340 State storage unit
350 Position detection unit
360 Output control unit
370 Output DB
Claims (20)
- An information processing apparatus comprising: a control unit that selects a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- The information processing apparatus according to claim 1, wherein in the first mode, feedback including an indirect improvement method for the user's utterance mode is performed, and in the second mode, feedback indicating a direct improvement method for the user's utterance mode is performed.
- The information processing apparatus according to claim 2, wherein the information related to the user's utterance recognition includes user information, content information, environmental information, and device information.
- The information processing apparatus according to claim 3, wherein the user information includes the user's utterance mode, and the control unit selects the first mode or the second mode based on the user's utterance mode.
- The information processing apparatus according to claim 4, wherein the utterance mode includes at least one of utterance volume, utterance speed, utterance pitch, clarity of pronunciation, utterance position, or utterance direction.
- The information processing apparatus according to claim 4, wherein the control unit selects the second mode based on the fact that no improvement is recognized in the utterance mode of the user who has received feedback in the first mode.
- The information processing apparatus according to claim 4, wherein the control unit selects the second mode based on the fact that the user's utterance is not recognized within a predetermined number of times after feedback in the first mode is performed.
- The information processing apparatus according to claim 3, wherein the user information includes the content of the user's utterance, and the control unit selects the first mode or the second mode based on the content of the user's utterance.
- The information processing apparatus according to claim 8, wherein the control unit selects the second mode based on the estimation that the content of the user's utterance includes privacy information.
- The information processing apparatus according to claim 3, wherein the control unit selects the first mode based on the estimation from the environmental information that another person different from the user is present.
- The information processing apparatus according to claim 3, wherein the user information includes the user's attribute information, and the control unit selects the first mode or the second mode based on the user's attribute information.
- The information processing apparatus according to claim 3, wherein the user information includes the user's emotion information, and the control unit selects the first mode or the second mode based on the user's emotion information estimated from the user's utterance.
- The information processing apparatus according to claim 4, wherein in the first mode, feedback is performed at a volume with higher recognition accuracy than the user's utterance volume.
- The information processing apparatus according to claim 4, wherein in the first mode, feedback is performed at a speed with higher recognition accuracy than the user's utterance speed.
- The information processing apparatus according to claim 4, wherein in the first mode, feedback is performed at a pitch with higher recognition accuracy than the pitch of the user's utterance.
- The information processing apparatus according to claim 2, wherein in the second mode, feedback with an added reason for improving the utterance mode is performed.
- The information processing apparatus according to claim 2, wherein the feedback includes feedback by visual information.
- The information processing apparatus according to claim 2, wherein in the second mode, feedback to the effect that the utterance should be made to a sensor different from the sensor that detected the user's utterance is performed.
- An information processing method comprising: a processor selecting a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
- A program for causing a computer to function as an information processing apparatus comprising: a control unit that selects a feedback mode for a user's utterance mode from a plurality of modes in accordance with information related to the user's utterance recognition, wherein the plurality of modes includes a first mode in which implicit feedback is performed and a second mode in which explicit feedback is performed.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780019476.1A CN109074805A (zh) | 2016-03-31 | 2017-01-12 | 信息处理设备、信息处理方法和程序 |
EP17773486.0A EP3438974A4 (en) | 2016-03-31 | 2017-01-12 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM |
US16/074,202 US11462213B2 (en) | 2016-03-31 | 2017-01-12 | Information processing apparatus, information processing method, and program |
JP2018508407A JP6819672B2 (ja) | 2016-03-31 | 2017-01-12 | 情報処理装置、情報処理方法、及びプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016070593 | 2016-03-31 | ||
JP2016-070593 | 2016-03-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017168936A1 true WO2017168936A1 (ja) | 2017-10-05 |
Family
ID=59963984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/000726 WO2017168936A1 (ja) | 2016-03-31 | 2017-01-12 | 情報処理装置、情報処理方法、及びプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US11462213B2 (ja) |
EP (1) | EP3438974A4 (ja) |
JP (1) | JP6819672B2 (ja) |
CN (1) | CN109074805A (ja) |
WO (1) | WO2017168936A1 (ja) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109697290A (zh) * | 2018-12-29 | 2019-04-30 | 咪咕数字传媒有限公司 | 一种信息处理方法、设备及计算机存储介质 |
WO2019107144A1 (ja) * | 2017-11-28 | 2019-06-06 | ソニー株式会社 | 情報処理装置、及び情報処理方法 |
WO2019150708A1 (ja) * | 2018-02-01 | 2019-08-08 | ソニー株式会社 | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム |
JP2019174784A (ja) * | 2018-03-29 | 2019-10-10 | パナソニック株式会社 | 音声翻訳装置、音声翻訳方法及びそのプログラム |
CN110568931A (zh) * | 2019-09-11 | 2019-12-13 | 百度在线网络技术(北京)有限公司 | 交互方法、设备、系统、电子设备及存储介质 |
JP2020507165A (ja) * | 2017-11-21 | 2020-03-05 | ジョンアン インフォメーション テクノロジー サービシズ カンパニー リミテッド | データ可視化のための情報処理方法及び装置 |
JP2021021848A (ja) * | 2019-07-29 | 2021-02-18 | 株式会社第一興商 | カラオケ用入力装置 |
JP2022171661A (ja) * | 2018-03-30 | 2022-11-11 | 株式会社 ディー・エヌ・エー | 動画を作成するためのシステム、方法、及びプログラム |
WO2024053476A1 (ja) * | 2022-09-05 | 2024-03-14 | ダイキン工業株式会社 | システム、支援方法、サーバ装置及び通信プログラム |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11386913B2 (en) * | 2017-08-01 | 2022-07-12 | Dolby Laboratories Licensing Corporation | Audio object classification based on location metadata |
CN110223711B (zh) * | 2019-06-03 | 2021-06-01 | 清华大学 | 基于麦克风信号的语音交互唤醒电子设备、方法和介质 |
US11562744B1 (en) * | 2020-02-13 | 2023-01-24 | Meta Platforms Technologies, Llc | Stylizing text-to-speech (TTS) voice response for assistant systems |
JP7405660B2 (ja) * | 2020-03-19 | 2023-12-26 | Lineヤフー株式会社 | 出力装置、出力方法及び出力プログラム |
EP3933560A1 (en) * | 2020-06-30 | 2022-01-05 | Spotify AB | Methods and systems for providing animated visual feedback for voice commands |
CN114155865A (zh) * | 2021-12-16 | 2022-03-08 | 广州城市理工学院 | 一种全息互动系统 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003150194A (ja) * | 2001-11-14 | 2003-05-23 | Seiko Epson Corp | 音声対話装置および音声対話装置における入力音声最適化方法ならびに音声対話装置における入力音声最適化処理プログラム |
WO2005076258A1 (ja) * | 2004-02-03 | 2005-08-18 | Matsushita Electric Industrial Co., Ltd. | ユーザ適応型装置およびその制御方法 |
JP2005266020A (ja) * | 2004-03-17 | 2005-09-29 | Advanced Telecommunication Research Institute International | 音声認識装置 |
JP2006251061A (ja) * | 2005-03-08 | 2006-09-21 | Nissan Motor Co Ltd | 音声対話装置および音声対話方法 |
JP2007264126A (ja) * | 2006-03-27 | 2007-10-11 | Toshiba Corp | 音声処理装置、音声処理方法および音声処理プログラム |
JP2014186184A (ja) * | 2013-03-25 | 2014-10-02 | Panasonic Corp | 音声入力選択装置及び音声入力選択方法 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5495282A (en) * | 1992-11-03 | 1996-02-27 | The Arbitron Company | Monitoring system for TV, cable and VCR |
US8645122B1 (en) * | 2002-12-19 | 2014-02-04 | At&T Intellectual Property Ii, L.P. | Method of handling frequently asked questions in a natural language dialog service |
JP4241443B2 (ja) * | 2004-03-10 | 2009-03-18 | ソニー株式会社 | 音声信号処理装置、音声信号処理方法 |
US7412378B2 (en) * | 2004-04-01 | 2008-08-12 | International Business Machines Corporation | Method and system of dynamically adjusting a speech output rate to match a speech input rate |
US20080177734A1 (en) * | 2006-02-10 | 2008-07-24 | Schwenke Derek L | Method for Presenting Result Sets for Probabilistic Queries |
US8972268B2 (en) * | 2008-04-15 | 2015-03-03 | Facebook, Inc. | Enhanced speech-to-speech translation system and methods for adding a new word |
US8886521B2 (en) * | 2007-05-17 | 2014-11-11 | Redstart Systems, Inc. | System and method of dictation for a speech recognition command system |
EP2540099A1 (de) * | 2010-02-24 | 2013-01-02 | Siemens Medical Instruments Pte. Ltd. | Verfahren zum trainieren des sprachverstehens und trainingsvorrichtung |
JP2011209787A (ja) | 2010-03-29 | 2011-10-20 | Sony Corp | 情報処理装置、および情報処理方法、並びにプログラム |
US20140343947A1 (en) * | 2013-05-15 | 2014-11-20 | GM Global Technology Operations LLC | Methods and systems for managing dialog of speech systems |
US10075140B1 (en) * | 2014-06-25 | 2018-09-11 | Amazon Technologies, Inc. | Adaptive user interface configuration |
US9639854B2 (en) * | 2014-06-26 | 2017-05-02 | Nuance Communications, Inc. | Voice-controlled information exchange platform, such as for providing information to supplement advertising |
US9858920B2 (en) * | 2014-06-30 | 2018-01-02 | GM Global Technology Operations LLC | Adaptation methods and systems for speech systems |
US9418663B2 (en) * | 2014-07-31 | 2016-08-16 | Google Inc. | Conversational agent with a particular spoken style of speech |
JPWO2016136062A1 (ja) * | 2015-02-27 | 2017-12-07 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
-
2017
- 2017-01-12 WO PCT/JP2017/000726 patent/WO2017168936A1/ja active Application Filing
- 2017-01-12 US US16/074,202 patent/US11462213B2/en active Active
- 2017-01-12 EP EP17773486.0A patent/EP3438974A4/en not_active Withdrawn
- 2017-01-12 JP JP2018508407A patent/JP6819672B2/ja active Active
- 2017-01-12 CN CN201780019476.1A patent/CN109074805A/zh not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003150194A (ja) * | 2001-11-14 | 2003-05-23 | Seiko Epson Corp | 音声対話装置および音声対話装置における入力音声最適化方法ならびに音声対話装置における入力音声最適化処理プログラム |
WO2005076258A1 (ja) * | 2004-02-03 | 2005-08-18 | Matsushita Electric Industrial Co., Ltd. | ユーザ適応型装置およびその制御方法 |
JP2005266020A (ja) * | 2004-03-17 | 2005-09-29 | Advanced Telecommunication Research Institute International | 音声認識装置 |
JP2006251061A (ja) * | 2005-03-08 | 2006-09-21 | Nissan Motor Co Ltd | 音声対話装置および音声対話方法 |
JP2007264126A (ja) * | 2006-03-27 | 2007-10-11 | Toshiba Corp | 音声処理装置、音声処理方法および音声処理プログラム |
JP2014186184A (ja) * | 2013-03-25 | 2014-10-02 | Panasonic Corp | 音声入力選択装置及び音声入力選択方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3438974A4 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020507165A (ja) * | 2017-11-21 | 2020-03-05 | ジョンアン インフォメーション テクノロジー サービシズ カンパニー リミテッド | データ可視化のための情報処理方法及び装置 |
WO2019107144A1 (ja) * | 2017-11-28 | 2019-06-06 | ソニー株式会社 | 情報処理装置、及び情報処理方法 |
WO2019150708A1 (ja) * | 2018-02-01 | 2019-08-08 | ソニー株式会社 | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム |
JP7223561B2 (ja) | 2018-03-29 | 2023-02-16 | パナソニックホールディングス株式会社 | 音声翻訳装置、音声翻訳方法及びそのプログラム |
JP2019174784A (ja) * | 2018-03-29 | 2019-10-10 | パナソニック株式会社 | 音声翻訳装置、音声翻訳方法及びそのプログラム |
JP2022171661A (ja) * | 2018-03-30 | 2022-11-11 | 株式会社 ディー・エヌ・エー | 動画を作成するためのシステム、方法、及びプログラム |
CN109697290A (zh) * | 2018-12-29 | 2019-04-30 | 咪咕数字传媒有限公司 | 一种信息处理方法、设备及计算机存储介质 |
JP2021021848A (ja) * | 2019-07-29 | 2021-02-18 | 株式会社第一興商 | カラオケ用入力装置 |
JP7312639B2 (ja) | 2019-07-29 | 2023-07-21 | 株式会社第一興商 | カラオケ用入力装置 |
CN110568931A (zh) * | 2019-09-11 | 2019-12-13 | 百度在线网络技术(北京)有限公司 | 交互方法、设备、系统、电子设备及存储介质 |
JP2021043936A (ja) * | 2019-09-11 | 2021-03-18 | バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド | インタラクション方法、機器、システム、電子機器及び記憶媒体 |
WO2024053476A1 (ja) * | 2022-09-05 | 2024-03-14 | ダイキン工業株式会社 | システム、支援方法、サーバ装置及び通信プログラム |
JP7482459B2 (ja) | 2022-09-05 | 2024-05-14 | ダイキン工業株式会社 | システム、支援方法、サーバ装置及び通信プログラム |
Also Published As
Publication number | Publication date |
---|---|
CN109074805A (zh) | 2018-12-21 |
US11462213B2 (en) | 2022-10-04 |
EP3438974A4 (en) | 2019-05-08 |
JPWO2017168936A1 (ja) | 2019-02-07 |
EP3438974A1 (en) | 2019-02-06 |
JP6819672B2 (ja) | 2021-01-27 |
US20210142796A1 (en) | 2021-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017168936A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US11657812B2 (en) | Message playback using a shared device | |
JP6534926B2 (ja) | 話者識別方法、話者識別装置及び話者識別システム | |
US8381238B2 (en) | Information processing apparatus, information processing method, and program | |
JP5731998B2 (ja) | 対話支援装置、対話支援方法および対話支援プログラム | |
US10157614B1 (en) | Message playback using a shared device | |
JP2011209786A (ja) | 情報処理装置、および情報処理方法、並びにプログラム | |
JP2014153663A (ja) | 音声認識装置、および音声認識方法、並びにプログラム | |
CN106067996B (zh) | 语音再现方法、语音对话装置 | |
JP2011209787A (ja) | 情報処理装置、および情報処理方法、並びにプログラム | |
WO2017141530A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
US10186267B1 (en) | Message playback using a shared device | |
WO2018105373A1 (ja) | 情報処理装置、情報処理方法、および情報処理システム | |
US20230362026A1 (en) | Output device selection | |
JP6678315B2 (ja) | 音声再生方法、音声対話装置及び音声対話プログラム | |
WO2020202862A1 (ja) | 応答生成装置及び応答生成方法 | |
WO2019138652A1 (ja) | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム | |
WO2019017033A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
WO2019150708A1 (ja) | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム | |
JP2021076715A (ja) | 音声取得装置、音声認識システム、情報処理方法、及び情報処理プログラム | |
JPWO2018105373A1 (ja) | 情報処理装置、情報処理方法、および情報処理システム | |
WO2020017165A1 (ja) | 情報処理装置、情報処理システム、および情報処理方法、並びにプログラム | |
JP2019053180A (ja) | 音響処理装置、音声認識装置、音響処理方法、音声認識方法、音響処理プログラム及び音声認識プログラム | |
JP6927331B2 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
WO2019142420A1 (ja) | 情報処理装置および情報処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2018508407 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2017773486 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2017773486 Country of ref document: EP Effective date: 20181031 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17773486 Country of ref document: EP Kind code of ref document: A1 |