WO2015129372A1 - Audio system - Google Patents
Audio system
- Publication number
- WO2015129372A1 (PCT/JP2015/052279)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- output
- audio
- voice
- audio data
- home appliance
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/2818—Controlling appliance services of a home automation network by calling their functionalities from a device located outside both the home and the home network
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
Definitions
- the present invention relates to an audio system including an electronic device that outputs audio received from a server.
- Patent Document 1 discloses a refrigerator that selects and outputs an audio signal and a music signal received from a center server according to the operating status or environmental status of the refrigerator body.
- In addition, a configuration is disclosed in which an adapter is provided separately from the home appliance; this adapter receives, from the application server, audio data corresponding to the situation and stores it, and this audio data is output from the audio output unit of the home appliance.
- Japanese Patent Laid-Open No. 2002-303482 (published on Oct. 18, 2002)
- Japanese Patent Laid-Open No. 2008-046424 (published on Feb. 28, 2008)
- Audio output from conventional home appliances cannot be handled flexibly because the conditions for audio output are fixed.
- As a result, home appliances do not produce audio output suited to the user's convenience, and the audio output is not used effectively.
- the present invention has been made in view of the above problems, and an object thereof is to construct an audio system or the like that can output audio suitable for the convenience of the user from an electronic device.
- In order to solve the above problems, an audio system includes: an audio server having an audio creation unit that creates audio data, a condition setting unit that sets an output condition for audio output for each piece of audio data, and a transmission unit that transmits condition information indicating the set output condition to an electronic device; and an electronic device having a reception unit that receives the condition information and an output control unit that performs audio output corresponding to the audio data in accordance with the received condition information.
- According to the above configuration, the voice server can set an output condition for each piece of voice data, and the electronic device performs voice output corresponding to the voice data in accordance with that output condition. It is therefore possible to perform audio output with improved user convenience, and to make more effective use of audio output suited to various situations.
- FIG. 1 is a diagram showing a schematic configuration of an audio system 100 according to the present embodiment.
- The audio system 100 is configured such that home appliances (electronic devices) 10-1, 10-2, 10-3, and 10-4 installed in a user's home, a cloud server (audio server) 20, and communication terminal devices 30-1, 30-2, and 30-3 are connected via a wide area communication network 62.
- In FIG. 1, four home appliances 10-1 to 10-4 and three communication terminal devices 30-1 to 30-3 are illustrated, but their number and kind are not limited; when there is no need to distinguish them individually, they are collectively referred to as the home appliance 10 and the communication terminal device 30. The number of user homes included in the audio system 100 is also not limited.
- the home appliance 10 is connected to the home appliance adapter 5 (5-1, 5-2, 5-3, 5-4).
- the home appliance adapter 5 is a device for connecting the home appliance 10 to the wide-area communication network 62 and making it a so-called network home appliance that can be controlled via the wide-area communication network 62.
- The cloud server 20 registers the communication terminal device 30 and the home appliance adapter 5 in association with each other, so that the communication terminal device 30 can remotely operate, via the cloud server 20, the home appliance 10 connected to the home appliance adapter 5.
- In other words, the communication terminal device 30 can remotely operate, via the cloud server 20, the home appliance 10 connected to the home appliance adapter 5 that is registered in the cloud server 20 in association with the communication terminal device 30.
- The communication terminal device 30 also receives, from the cloud server 20, information related to the home appliance 10 connected to the home appliance adapter 5 registered in association with it.
- Examples of the communication terminal device 30 include a smartphone and a tablet terminal.
- a plurality of home appliances 10 can be remotely operated from one communication terminal device 30.
- the home appliance 10 connected to one home appliance adapter 5 can be remotely operated from a plurality of communication terminal devices 30.
- the user's home 50 is provided with a wireless local area network (LAN) that is a narrow area communication network.
- A relay station 40 of the wireless LAN is connected to a wide area communication network 62 including the Internet.
- the relay station 40 is a communication device such as a WiFi (registered trademark) router or a WiFi access point.
- a configuration including the Internet is illustrated as the wide area communication network 62, but a telephone line network, a mobile communication network, a CATV (CAble TeleVision) communication network, a satellite communication network, or the like can also be used.
- the cloud server 20 and the home appliance adapter 5 installed in the user home 50 can communicate with each other via the wide area communication network 62 and the wireless LAN relay station 40. Further, the cloud server 20 and the communication terminal device 30 can communicate with each other via the wide area communication network 62.
- The communication terminal device 30 connects to the Internet in the wide area communication network 62 using 3G (3rd Generation), LTE (Long Term Evolution), a home or public WiFi access point, or the like. Note that the home appliance adapter 5 and the communication terminal device 30 are both wireless communication devices and can communicate with each other via the relay station 40 without going through the wide area communication network 62.
- FIG. 3 is a block diagram illustrating a schematic configuration of the cloud server 20 in the audio system 100.
- the cloud server 20 is a server that manages each home appliance 10, and includes a control unit 21, a storage unit 22, and a communication unit 23 as illustrated in FIG.
- the control unit 21 is a computer device that includes an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and is a block that controls the operation of each unit of the cloud server 20.
- The control unit 21 has the functions of a voice creation unit 21a that creates voice data to be output by the voice output unit 14 of the home appliance 10, and of a condition setting unit 21b that sets an output condition for voice output for each piece of voice data.
- The voice creation unit 21a is a block that creates voice data: it creates the data using a voice synthesis function, adds identification information, and makes it ready for transmission to the home appliance 10. Adding identification information to prerecorded voice data, rather than synthesized voice, so that it can be transmitted to the home appliance 10 is also included in the voice creation performed by the voice creation unit 21a.
- the voice data created by the voice creation unit 21a is referred to as first voice data.
- the first audio data is transmitted to the home appliance 10 by the control unit 21 controlling the communication unit 23.
- The condition setting unit 21b sets an output condition (attribute) for voice output for each piece of voice data, adds identification information to the condition information indicating the set output condition, and makes it ready for transmission to the home appliance 10.
- The condition setting unit 21b may set output conditions not only for the first voice data created by the voice creation unit 21a but also for voice data held in advance by the home appliance 10 (for example, held from the time of shipment). Hereinafter, this voice data is referred to as second voice data. Specific examples of voice data and output conditions will be described later.
- the condition information is transmitted to the home appliance 10 by the control unit 21 controlling the communication unit 23. Since the output condition is set for each audio data, the condition information indicating the setting condition is associated with each audio data.
- the condition information indicating the output condition set for the first audio data is referred to as first condition information
- the condition information indicating the output condition set for the second audio data is referred to as second condition information.
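The pairing of audio data with its condition information via shared identification information, as described above, can be sketched roughly as follows. This is an illustrative sketch only; the class and field names are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AudioData:
    """A piece of audio data with its identification information."""
    audio_id: str
    message: str

@dataclass
class ConditionInfo:
    """Output condition (attribute) set for one piece of audio data."""
    audio_id: str            # same identification information as the audio data
    trigger: str = ""        # e.g. "door" or "button"
    notification: bool = False
    re_output: bool = False

# First audio data created on the server side, with its first condition information
first_audio = AudioData(audio_id="A001", message="Good morning")
first_condition = ConditionInfo(audio_id="A001", trigger="door")

# The shared identification information associates condition with audio data
assert first_condition.audio_id == first_audio.audio_id
```

Because an output condition is set for each piece of audio data, the same identification information appears in both records, which is what lets the receiving side match condition information to audio data.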
- The cloud server 20 is configured to create voice data and set output conditions based on data from the communication terminal device 30. Furthermore, the cloud server 20 may be configured to receive information related to voice data and output conditions (for example, weather information) from another server or the like via the wide area communication network 62, and to create voice data or set output conditions based on that information.
- the storage unit 22 is a block that stores various information (data) used in the cloud server 20.
- The storage unit 22 includes a DB (DataBase) 22a that stores, for each home appliance 10, the first audio data created by the cloud server 20, the condition information associated with that audio data, and state information indicating the state of the home appliance 10. The DB 22a enables the cloud server 20 to control each home appliance 10 individually.
- the communication unit 23 is a block that performs mutual communication with the home appliance adapter 5 via the wide area communication network 62.
- the communication unit 23 also performs mutual communication with the communication terminal device 30 via the wide area communication network 62.
- The control unit (transmission unit) 21 controls the communication unit 23 to transmit the condition information indicating the set output condition to the home appliance adapter 5.
- the home appliance adapter 5 is a device that enables the home appliance 10 to perform network communication.
- the home appliance adapter 5 receives various data including operation signals (commands) from the cloud server 20.
- Data received from the cloud server 20 includes data from the communication terminal device 30 via the cloud server 20 in addition to data from the cloud server 20 itself. Further, the data received from the cloud server 20 may be for controlling the home appliance adapter 5 itself, or may be transmitted to the home appliance 10 to control the home appliance 10. Further, the home appliance adapter 5 transmits information about the home appliance 10 transmitted from the home appliance 10 to the cloud server 20.
- FIG. 2 is a block diagram illustrating a schematic configuration of the home appliance 10 and the home appliance adapter 5 in the audio system 100.
- the home appliance adapter 5 includes a control unit (reception unit, output control unit) 6, a storage unit 7, a communication unit 8, and a connection unit 9.
- the control unit 6 is a block that comprehensively controls the operation of each unit of the home appliance adapter 5.
- the control unit 6 is composed of a computer device including an arithmetic processing unit such as a CPU or a dedicated processor, for example.
- the control unit 6 reads out and executes a program for performing various controls in the home appliance adapter 5 stored in the storage unit 7, thereby comprehensively controlling the operation of each unit of the home appliance adapter 5.
- The control unit 6 controls the communication unit 8 to receive the first voice data and the condition information from the cloud server 20, and has the function of an output control unit that performs voice output corresponding to the voice data in accordance with the received condition information. In the present embodiment, it is assumed that this sound output is performed by controlling the sound setting unit 15 from the control unit 6.
- The storage unit 7 is a block that stores various information used in the home appliance adapter 5.
- the communication unit 8 is a block that performs mutual communication with the cloud server 20 via the wide area communication network 62.
- the connection unit 9 is a block that communicates with the connection unit 19 of the home appliance 10.
- the connection between the connection unit 9 and the connection unit 19 of the home appliance 10 may be, for example, a connection using a USB (Universal Serial Bus) connector.
- Examples of the home appliance 10 include an air conditioner, a refrigerator, a washing machine, a cooking appliance, a lighting device, a hot water supply device, a photographing device, various AV (Audio-Visual) devices, and various home robots (for example, cleaning robots, housework support robots, and animal-type robots).
- The home appliance 10 includes a control unit (output control unit) 11, a storage unit 13, an audio output unit 14, a sound setting unit 15, an audio switching unit 16, a state detection unit 17, an LED (Light Emitting Diode) lamp (notification unit) 18, and a connection unit 19.
- the home appliance 10 performs an operation according to the received operation signal.
- the control unit 11 is a block that controls the operation of each unit of the home appliance 10.
- the control unit 11 includes a computer device including an arithmetic processing unit such as a CPU or a dedicated processor.
- the control unit 11 comprehensively controls the operation of each unit of the home appliance 10 by reading and executing a program for performing various controls in the home appliance 10 stored in the storage unit 13.
- The control unit 11 has the function of an output control unit that outputs, as sound, the second sound data stored in the storage unit 13 in accordance with the set conditions.
- the storage unit 13 includes a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), and the like, and is a block that stores various data used in the home appliance 10.
- the storage unit 13 stores in advance second audio data that is output as audio from the home appliance 10.
- the output condition at the time of sound output is set also in the second sound data, and is stored in the storage unit 13 as second condition information.
- The second condition information may be stored from the time of shipment of the home appliance 10, or it may be set or changed for the second audio data by the cloud server 20 and then received from the cloud server 20.
- “Output audio data” is used in the same meaning as “output audio corresponding to audio data”.
- the state detection unit 17 is a block that detects state information indicating the state of the home appliance 10.
- Examples of the status information include information indicating a setting status and an operating status.
- The state information may also be environmental information regarding the surroundings in which the home appliance 10 is placed. Examples of such environmental conditions include the temperature and humidity inside or outside the user home 50. These are merely examples; any information related to the home appliance 10 may be used.
- the audio output unit 14 is an audio output device such as a speaker.
- The control unit 6 outputs sound corresponding to the first sound data stored in the storage unit 7 from the sound output unit 14, and the control unit 11 outputs sound corresponding to the second sound data stored in the storage unit 13 from the sound output unit 14. Details of the output of the audio data will be described later.
- The sound setting unit 15 is a block that individually sets (adjusts) the volume or sound quality of the sound output from the sound output unit 14, separately for the first sound data stored in the storage unit 7 of the home appliance adapter 5 and for the second sound data stored in the storage unit 13.
- the control unit 11 controls the sound setting unit 15 to set the volume or sound quality. Further, the volume or sound quality may be set according to an instruction from the cloud server 20.
- With this configuration, the volume or sound quality of the first audio data received from the cloud server 20 and of the second audio data held in advance by the home appliance 10 can be set individually at the home appliance 10, so they can be set to be easy for the user to hear. If the first and second voice data are set to the same voice and volume, the user can listen to both without distinction; if they are set differently, the user can easily distinguish between them.
- the audio switching unit 16 is a block having a switch function for switching the audio data output from the audio output unit 14 to one or both of the first audio data and the second audio data.
- The voice switching by the voice switching unit 16 may be executed upon a user operation, or in accordance with an instruction from the cloud server 20. With this configuration, the home appliance 10 can output either one or both of the first voice data and the second voice data, that is, the voice data that the user wants to hear.
- the LED lamp 18 is lit (or flashed) under the control of the control unit 11 when the home appliance 10 has audio data to be output.
- The LED lamp 18 is provided on a button (or provided as a button), and a sensor detects pressing of the button. When the sensor detects that the button has been pressed, the control unit 11 performs control to output the sound data to be output and to turn off the LED lamp 18.
- the LED lamp 18 may be configured such that the color or lighting pattern to be lit changes according to the type of audio data or the expiration date of audio data to be described later.
- the LED lamp 18 and the button (button function) may be provided separately.
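As a rough behavioral sketch of the LED lamp and button described above (one possible reading of the embodiment; all names are hypothetical and not part of the disclosure): the lamp is lit or blinks while audio data to be output exists, and pressing the button outputs the pending data and extinguishes the lamp.

```python
class LedButtonNotifier:
    """Sketch of the LED lamp 18 / button behavior: the lamp lights
    (or blinks) while audio data to be output exists, and a button
    press outputs the pending audio and extinguishes the lamp."""

    def __init__(self):
        self.pending = []      # messages of audio data waiting for output
        self.led_on = False

    def enqueue(self, message):
        # Audio data to be output has arrived -> light the LED lamp
        self.pending.append(message)
        self.led_on = True

    def press_button(self):
        # The sensor detected a press: output pending audio, turn LED off
        outputs = list(self.pending)
        self.pending.clear()
        self.led_on = False
        return outputs

notifier = LedButtonNotifier()
notifier.enqueue("Good morning")
assert notifier.led_on
assert notifier.press_button() == ["Good morning"]
assert not notifier.led_on
```

The variations described in the text (color or blinking pattern per audio type, or a separate lamp and button) would change only the notification step, not this basic pending-then-press flow.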
- connection unit 19 is a block that communicates with the connection unit 9 of the home appliance adapter 5.
- The present embodiment has a configuration in which the home appliance adapter 5, which enables remote operation of the home appliance 10, is externally attached. By making the communication function part that enables remote operation an optional external component, the common home appliance adapter 5 can be applied to different types of home appliances 10, and costs can be reduced.
- a configuration in which a communication function part is incorporated in the home appliance 10 in advance (a configuration in which the home appliance 10 and the home appliance adapter 5 are integrated) may be employed.
- The home appliance 10 is configured not only to be remotely operated from the communication terminal device 30 but also to be operable by short-distance wireless communication, for example by infrared rays from a remote controller (not shown), or from a main body operation unit (not shown). Operation by voice or gesture may also be possible.
- As described above, the cloud server 20 can set an output condition for each piece of audio data, and the home appliance 10 performs audio output corresponding to the audio data in accordance with that output condition. It is therefore possible to perform audio output with improved user convenience, and to make more effective use of audio output suited to various situations.
- An output condition is set for each piece of audio data. Therefore, the selection range of the audio data output from the audio output unit 14 is widened, and audio data can be output appropriately according to the set output conditions.
- the audio data output from the audio output unit 14 includes first audio data and second audio data.
- The control unit 6, based on the first condition information indicating the output condition set for the first audio data received from the cloud server 20, and the control unit 11, based on the second condition information indicating the output condition set for the second audio data, each specify the voice data to be output this time, and voice output corresponding to the specified voice data is performed from the voice output unit 14.
- the output condition corresponding to the first condition information or the second condition information is a condition that a sound is output when the state detection unit 17 detects specific state information of the home appliance 10.
- In this case, when the state detection unit 17 detects the specific state information, the control unit 6 or the control unit 11 identifies and outputs the first audio data or the second audio data for which that output condition is set.
- The specific status information of the home appliance 10 that triggers voice output includes both information obtained from items common to home appliances 10 of different types and model numbers (for example, the LED lamp 18 provided as a button) and information obtained from items specific to a particular type of home appliance 10 (for example, a door).
- the output condition corresponding to the first condition information or the second condition information is a condition that voice output is performed at a specific time (so-called chatting output).
- the control unit 6 or the control unit 11 specifies and outputs the first audio data or the second audio data for which the output condition is set.
- the output condition may be, for example, a condition that audio is output when a predetermined event that triggers audio output occurs.
- The predetermined event is, for example, an event that the home appliance 10 is turned on, an event that the button provided with the LED lamp 18 of the home appliance 10 is pressed, or an event that no user is near the home appliance 10 when the audio data is to be output.
- The output condition may also be that audio is output a plurality of times when a predetermined event occurs (for example, when the button is pressed). Thus, when the condition setting unit 21b sets a predetermined event as the trigger for outputting the sound corresponding to the audio data, the control unit 6 or the control unit 11 outputs the corresponding audio data when that predetermined event occurs.
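The condition types discussed above — output on a specific appliance state, at a specific time, or on a predetermined event — could be checked by an output control unit roughly as follows. This is an illustrative sketch only; the dictionary keys and example values are hypothetical, not taken from the disclosure.

```python
def should_output(condition, state=None, time=None, event=None):
    """Return True when any output condition set for this audio data is
    satisfied: a specific appliance state, a specific time, or an event."""
    if condition.get("state") is not None and condition["state"] == state:
        return True
    if condition.get("time") is not None and condition["time"] == time:
        return True
    if condition.get("event") is not None and condition["event"] == event:
        return True
    return False

# Hypothetical condition info: output at power-on, or at time 7
cond = {"event": "power_on", "time": 7}
assert should_output(cond, event="power_on")
assert should_output(cond, time=7)
assert not should_output(cond, event="button_pressed")
```

A real implementation would carry richer condition records (priority, re-output, notification flags), but the per-data-item check shown here is the core of "perform audio output according to the received condition information."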
- When notifying the user of the presence of audio data to be output by blinking the LED lamp 18, the control unit 6 or the control unit 11 causes the sound output unit 14 to output the corresponding sound once an output instruction from the user is received, that is, once the button provided with the LED lamp 18 is pressed. In this case, the output condition is to output when an output instruction from the user is received.
- With this configuration, the blinking of the LED lamp 18 notifies the user of the presence of audio data, and the user can issue an output instruction only when the user wants to confirm the notification. Therefore, user convenience is further improved.
- the output condition set by the condition setting unit 21b may be a condition for re-outputting (listening to) the audio data.
- An example of the conditions for re-output is as follows. For a piece of audio data, two output conditions are set: A, blinking the LED lamp 18 before the audio is output; and B, preferentially outputting the audio when the button provided with the LED lamp 18 is pressed.
- If re-output is to be performed, the output condition A is changed, until the re-output, to a state indicating that there is no sound to be prioritized, for example, lighting or extinguishing the lamp.
- Meanwhile, the output condition B is maintained; if the condition for re-output is not satisfied, the output condition B also disappears.
- the cloud server 20 holds (manages) output conditions set for each audio data as list information (audio list) of output conditions for the audio data for each home appliance 10.
- the home appliance adapter 5 of each home appliance 10 also manages the output conditions received from the cloud server 20 as a voice list.
- the audio list includes at least audio data identification information, an index, and various operation flags. This operation flag is used for confirming the execution of the operation, and a specific example will be described later.
- A part of the voice list held in the cloud server 20 is transmitted to the home appliance adapter 5 and becomes the voice list held by the home appliance adapter 5.
- the identification information of the audio data is common to the cloud server 20 and the home appliance adapter 5.
- Among the entries of the voice list managed by the cloud server 20, those to be transmitted to the home appliance adapter 5 have the "DL (download) flag" among the operation flags set (turned on).
- the home appliance adapter 5 downloads audio data in which a “DL (download) flag” is set (turned on). If the audio data is already downloaded by the home appliance adapter 5, the home appliance adapter 5 does nothing and keeps the audio data.
- Among the entries of the voice list managed by the cloud server 20, those that have been transmitted to the home appliance adapter 5 and subsequently deleted from the home appliance adapter 5 have the DL flag cleared (turned off).
- deletion may be performed by another method.
- For example, the identification information of the voice data to be deleted from the home appliance adapter 5 is transmitted to the cloud server 20, and the cloud server 20 sets the index value corresponding to that identification information to empty (NULL) in the voice list it manages.
- The audio list managed by the home appliance adapter 5 contains the audio data received, or scheduled to be received, by the home appliance adapter 5; entries deleted from the audio list managed by the cloud server 20 by the above method are also removed from the adapter's own audio list.
- The voice list management method described above, including voice deletion, is an explanation of one embodiment of the present invention; other embodiments may use other methods.
- In the present embodiment, a part of the voice list managed by the cloud server 20 becomes the list managed by the home appliance adapter 5, but the home appliance adapter 5 may instead hold and manage the same list as the one managed by the cloud server 20.
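The DL-flag synchronization described above — download entries whose flag is on, and drop entries whose index the server has set to NULL — might be sketched roughly as follows. The field names and list representation are hypothetical; this is one of many possible implementations, not the one claimed.

```python
def sync_voice_list(server_list, adapter_ids):
    """Sketch of the adapter-side sync: download entries whose DL flag is
    on and not yet held locally; drop entries the server has emptied
    (index set to NULL, represented here as None)."""
    to_download = [e["id"] for e in server_list
                   if e["dl_flag"] and e["index"] is not None
                   and e["id"] not in adapter_ids]
    # Entries whose index was set to NULL on the server are removed locally
    server_ids = {e["id"] for e in server_list if e["index"] is not None}
    kept = [i for i in adapter_ids if i in server_ids]
    return kept + to_download

server_list = [
    {"id": "A001", "index": 0, "dl_flag": True},
    {"id": "A002", "index": None, "dl_flag": False},  # deleted (NULL index)
    {"id": "A003", "index": 1, "dl_flag": True},
]
# Adapter currently holds A001 and A002: A002 is dropped, A003 is fetched
assert sync_voice_list(server_list, ["A001", "A002"]) == ["A001", "A003"]
```

Note how an already-downloaded entry (A001) is simply kept, matching the text's "does nothing and keeps the audio data."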
- An example of the voice list is shown in FIG. 10.
- “message” is the content of audio output as audio data.
- Here, the "message" itself serves as the identification information of the voice data, but the voice data only needs to be identifiable, for example by an ID code as described later. It is assumed that the voice data corresponding to the identification information (the voice data to be actually output) can be called up using the voice list.
- "Trigger", "notification", and "re-output" are output conditions (attributes) of the audio data; a set flag (a value of 1) indicates that the corresponding output condition is set for that audio data.
- The voice data for the message "Good morning" is flagged at the "door" trigger. Therefore, "Good morning" is set with the output condition of being output when the event of the door being opened occurs.
- The voice data for the message "Packed too much" is flagged at the "button" and "door" triggers and at "re-output". Therefore, "Packed too much" is set with the output conditions of being output when the door is opened or the button is pressed, and of being re-output.
- The voice data for the message "OO-chan, there is cake" is flagged at the "button" and "door" triggers, at "notification", and at "re-output". Therefore, "OO-chan, there is cake" is set with the output conditions of being output when the door is opened or the button is pressed, of notification, and of being re-output.
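The three example entries above can be represented as flag rows, a rough sketch of the voice list of FIG. 10. The field names are hypothetical; the trigger column is shown here as a set of event names rather than separate 0/1 columns.

```python
# Each row: message plus output-condition flags (1 = condition set),
# mirroring the trigger / notification / re-output columns described above.
voice_list = [
    {"message": "Good morning",           "trigger": {"door"},
     "notification": 0, "re_output": 0},
    {"message": "Packed too much",        "trigger": {"door", "button"},
     "notification": 0, "re_output": 1},
    {"message": "OO-chan, there is cake", "trigger": {"door", "button"},
     "notification": 1, "re_output": 1},
]

def messages_for_trigger(rows, event):
    """Messages whose trigger flag matches the occurring event."""
    return [row["message"] for row in rows if event in row["trigger"]]

# Pressing the button matches the second and third entries only
assert messages_for_trigger(voice_list, "button") == [
    "Packed too much", "OO-chan, there is cake"]
```

The notification and re-output flags would then feed the LED-blink and re-output behaviors described earlier, selecting among the matched messages.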
- The condition setting unit 21b sets, as the output condition, a priority with which audio data is output.
- When a predetermined event occurs (for example, when state information resulting from a user operation is detected by the state detection unit 17), the control unit 6 and the control unit 11 output the audio data to be output according to this priority.
- The condition setting unit 21b sets a priority for each piece of audio data.
- The priority value changes according to the time.
- The set priority is transmitted from the cloud server 20 and received by the home appliance adapter 5.
- The priority may be transmitted and received as a graph covering a certain period, as shown in FIG. By transmitting and receiving the priority for a certain period, audio output according to the priority can continue for that period even if communication between the home appliance adapter 5 and the cloud server 20 is interrupted.
- The voices A to D all indicate first voice data.
- The home appliance voices E and F indicate second voice data; all of these correspond to the same predetermined event (for example, a specific user operation).
- The audio data shown in FIG. 4 is merely an example, and the present invention is not limited to this.
- The priority of the second audio data is always set to 0, and whether another priority is higher or lower than it is expressed by plus and minus values.
- Alternatively, the priority of the first audio data may be set between 0 and 6.
- Here, the priority has seven levels from -3 to +3, but the number of levels is not limited to this.
- The time and priority values are discrete, but the priority at intermediate times is linearly interpolated, as shown in the graph.
- For the voice A, the priority is 1 from time 3 to 7 (for example, from 3 o'clock to 7 o'clock), 3 from time 11 to 14, and -3 from time 14 onward. Therefore, if the voice A is audio data with the output condition that it should be output by time 14 (for example, the voice data "Did you take out the burnable garbage?"), the priority gradually increases before time 14, reaching the highest value (3) from time 11 to time 14 immediately before time 14 and the lowest value (-3) after time 14, which is preferable.
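A sketch of this time-varying priority, using the voice A breakpoints described above and modeling the drop after time 14 as a ramp down to time 15 (the patent only states that intermediate times are linearly interpolated, so these breakpoints are an illustrative reading):

```python
import bisect

# Assumed (time, priority) breakpoints for voice A, per the example above.
VOICE_A = [(3, 1), (7, 1), (11, 3), (14, 3), (15, -3)]

def priority_at(t, points):
    """Linearly interpolate the priority at time t between discrete breakpoints,
    clamping to the first/last value outside the covered range."""
    times = [p[0] for p in points]
    if t <= times[0]:
        return points[0][1]
    if t >= times[-1]:
        return points[-1][1]
    i = bisect.bisect_right(times, t)
    (t0, p0), (t1, p1) = points[i - 1], points[i]
    return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
```

For example, halfway between time 7 (priority 1) and time 11 (priority 3) the interpolated priority is 2.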
- Information relating to the output conditions, for example the burnable-garbage collection date for the voice data "Did you take out the burnable garbage?", may be set in the cloud server 20 by the user from the communication terminal device 30 via the wide area communication network 62.
- For the voice B, the priority is -3 from time 0 to 16, 3 from time 16 to 17, and -3 from time 17 onward. Therefore, if the voice B is voice data related to an item to be executed between times 16 and 17 (for example, the voice data "The news will start now"), it takes the maximum value (3) only from time 16 to 17 and the minimum value (-3) otherwise, which is preferable.
- The priorities of the voices C and D are always set to -1 and 1, respectively. Therefore, they preferably correspond to voice data that needs to be output only occasionally (voice C) and voice data that needs to be output frequently (voice D).
- The priority of the home appliance voices E and F is always 0; that is, no priority is set for them, or it is set to 0.
- The output probability that the sound A is output is calculated as the priority of A divided by the sum of the priorities of the sounds that are output candidates (here, A to D, E, and F).
- Since the priorities range from -3 to 3, the same number, for example 4, may be added to all of them to make them positive.
- In that case, the priority of the voice A and the voice C becomes 5 (1 + 4),
- and the priority of the voice B, the home appliance voice E, and the home appliance voice F becomes 4 (0 + 4).
- Depending on the content and type of the audio data, the priority after output may be set low (so that the data is not output again) or may be left unchanged.
- Audio data that is output with the highest priority when a predetermined event serving as a trigger occurs may also be created.
- For this purpose, a priority output attribute is provided as an output condition; when this condition is satisfied (the priority output attribute is true), the output is performed with the highest priority.
- Alternatively, when a special value is set as the priority (for example, a value exceeding the normal range, such as 4 when the normal priority range is -3 to 3), the above output probability is not calculated for that audio data, and that audio data is always output, or is output in preference to audio data whose priority is set within the normal range.
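One way to sketch this special-value rule (the sentinel value 4 and the -3 to 3 range come from the example above; the surrounding selection logic is an assumption):

```python
PRIORITY_MIN, PRIORITY_MAX = -3, 3

def select_candidates(voices):
    """Split voices into forced ones (output first, no probability calculation)
    and ordinary ones subject to the normal probabilistic selection.

    voices: dict of name -> priority. A priority above the normal range
    (e.g. 4) marks audio data that bypasses the probability calculation.
    """
    forced = [n for n, p in voices.items() if p > PRIORITY_MAX]
    normal = {n: p for n, p in voices.items() if PRIORITY_MIN <= p <= PRIORITY_MAX}
    return forced, normal

forced, normal = select_candidates({"A": 1, "B": 0, "urgent": 4})
```

Here "urgent" would always be output ahead of the normal-range candidates A and B.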
- The home appliance adapter 5 stores in the storage unit 7 the audio data ("") that is output when a button press is detected while the LED lamp 18 is lit, and causes the LED lamp 18 to be turned on.
- After step B0, lamp lighting instruction information for turning on the LED lamp 18 is transmitted to the home appliance 10 (step B1).
- The home appliance 10 receives the lamp lighting instruction information (step A0).
- Then the LED lamp 18 is turned on (step A1). Thereafter, an utterance sequence in which voice output is performed by the home appliance 10 is executed.
- The predetermined event serving as a trigger for voice output is an event of a user operation on the home appliance, and there are a plurality of pieces of voice data corresponding to that event.
- When the home appliance 10 detects an operation on its own device, such as a button on the device being pressed or a door being opened (step B2), the home appliance 10 transmits state information, which is information on the detected operation, to the home appliance adapter 5.
- When the home appliance adapter 5 receives the state information, it selects, from among the audio data held in the home appliance adapter 5, the plurality of pieces of audio data corresponding to the operation indicated by the state information (step B3), and transmits the selected audio data to the home appliance 10.
- The home appliance 10 receives this audio data (step A4) and outputs the sound from the audio output unit 14 (step A5).
- Together with the audio output, the home appliance adapter 5 transmits the identification information (ID code) of the output audio data to the cloud server 20 as output information indicating the output audio data (step B6).
- When the cloud server 20 receives the output information (step C0), it holds the output information (step C1). The held output information is used for creating the next audio data. The creation of the next audio data will be described in detail in Embodiment 2.
- In steps B6 and C1, by transmitting and receiving the identification information of the output audio data as output information, whether or not audio output has been performed is determined as follows.
- Because the identification information of the output audio data is transmitted to the cloud server 20, the cloud server 20 can maintain the sound list it holds for each home appliance 10.
- The voice data identified by the received identification information is in the "output state".
- The "state of not being output" corresponds to the case where no identification information matching any audio data in the audio list has been received.
- The cloud server 20 generates a "reason for not outputting" based on the contents of the audio data output before and after, and on the state information of the home appliance.
- When the cloud server 20 updates the voice list held for each home appliance 10 (step C10), data related to the updated portion of the voice list is transmitted to the home appliance adapter 5 during regular communication with the home appliance adapter 5.
- When the home appliance adapter 5 receives the data related to the updated portion of the voice list (step B10), it corrects the voice list held by the home appliance adapter 5 based on the received data (step B11). This correction may be an overwrite.
- Next, unnecessary audio data is deleted and reception of the audio data corresponding to the updated data is started (step B12), and the corresponding audio data is requested from the cloud server 20 (step B13).
- The cloud server 20 receives the request (step C12) and transmits the voice data corresponding to the portion updated this time to the home appliance adapter 5 (step C13); the home appliance adapter 5 receives the voice data (step B14).
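The list synchronization in steps B10 to B14 can be sketched as follows; the shape of the update payload ("removed" ids and "changed" entries) is an illustrative assumption, since the patent does not define the data format:

```python
def sync_voice_list(adapter_list, update):
    """Apply the cloud server's updated entries to the adapter's local list.

    adapter_list: dict of voice id -> entry currently held by the adapter.
    update: assumed payload with 'removed' ids and 'changed' id -> entry,
    standing in for the 'data related to the updated portion' (steps B10-B11).
    Returns the ids whose audio bodies must be requested (step B13).
    """
    for voice_id in update.get("removed", []):
        adapter_list.pop(voice_id, None)      # delete unnecessary audio data (step B12)
    needs_download = []
    for voice_id, entry in update.get("changed", {}).items():
        adapter_list[voice_id] = entry        # overwrite with the updated attributes
        needs_download.append(voice_id)       # the audio body is fetched separately
    return needs_download

adapter = {"a": {"msg": "Good morning"}, "b": {"msg": "Packed too much"}}
todo = sync_voice_list(adapter, {"removed": ["a"], "changed": {"c": {"msg": "New"}}})
```

After the call, entry "a" is gone, "c" has been added to the local list, and `todo` names the audio data to request from the cloud server 20.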
- The present embodiment is configured such that, in the audio system 100 of Embodiment 1, the audio data to be output next time is determined based on the output information relating to the output state of the audio data output this time by the audio output unit 14.
- The control unit 6 functions as an output information transmission unit: it transmits output information relating to the output state of the audio data output by the audio output unit 14 from the communication unit 8 to the cloud server 20.
- The voice creation unit 21a creates the voice data to be output next time by the voice output unit 14 based on the received output information, and the condition setting unit 21b sets its output condition.
- The control unit 21 transmits, from the communication unit 23 to the home appliance 10, the condition information indicating the set output condition of the audio data to be output next time, together with that audio data. Except for this, the configuration is the same as in Embodiment 1, so the same reference numerals are used and description thereof is omitted.
- the output information transmitted from the home appliance adapter 5 to the cloud server 20 is information indicating whether audio output has been performed.
- the output information may include state information of the home appliance 10 at that time.
- The output information including the state information includes, for example, information indicating whether the audio output was performed in a state where the user was present (user detected) or absent (user not detected); of course, it is not limited to this.
- The cloud server 20 creates the audio data to be output next time based on the received output information and sets its output condition, but it may also refrain from creating any audio data to be output next time (or create empty data).
- The case where no audio data to be output next time is created is the case where there is no next audio output corresponding to the current audio output.
- The condition setting unit 21b of the cloud server 20 sets, for example, presence or absence of output, output contents, output time, and the like as the output conditions.
- The audio data to be output next time created in the cloud server 20 and the condition information indicating its output condition are transmitted to the home appliance 10. Since the audio data output next time is based on the output information of the audio data output this time, audio output with improved user convenience can be performed.
- For example, audio data whose content follows the previously output audio data can be created and output. Alternatively, when output information indicating that no output was performed is received, the same audio data that was not output may be output again once state information is detected as the occurrence of a predetermined triggering event after a predetermined time has elapsed.
- In this way, the output audio can have continuity. Further, by re-outputting (retrying), the possibility that the user hears the voice can be increased.
- FIGS. 7 and 8 show examples in which state information is detected in the home appliance 10 as the occurrence of a predetermined event that triggers voice output (an event has occurred), and FIG. 9 shows an example in which a voice output request is issued from the cloud server 20 without an event. In the example of FIG. 9, it is assumed that the cloud server 20 acquires state information indicating the operating state of the home appliance 10 at predetermined intervals. In FIGS. 7, 8, and 9, the home appliance 10 is described as including the home appliance adapter 5.
- When the door of the home appliance 10 is opened, the home appliance 10 (here, a refrigerator) transmits the state information (door open) to the cloud server 20.
- In response, the cloud server 20 transmits an output request for voice output of today's weather.
- Upon receiving this output request, the home appliance 10 outputs the voice "Today's weather is cloudy and rainy" based on the request, and transmits output information (output completed) indicating that the voice output has been performed to the cloud server 20.
- The cloud server 20 transmits an output request accordingly.
- Since the cloud server 20 has already received "output completed" and the current time is 7:10 a.m., that is, little time has elapsed since the previous output, it transmits an output request for outputting the continuation of today's weather as audio.
- The home appliance 10 that has received this outputs the voice "It will rain in the afternoon. Don't forget your umbrella." and transmits output information (output completed) indicating that the voice output has been performed to the cloud server 20.
- The cloud server 20 transmits an output request in response to this; however, at this time, since the cloud server 20 has already received "no output", it transmits an output request for voice output of today's weather.
- Upon receiving this, the home appliance 10 outputs the voice "Today's weather is cloudy and rainy" and transmits output information (output completed) indicating that the voice output has been performed to the cloud server 20.
- Since the current time is 11:00 a.m., the cloud server 20 then transmits an output request for voice output of the afternoon weather. Upon receiving this, the home appliance 10 outputs the voice "It is likely to rain in the afternoon" and transmits output information (output completed) indicating that the voice output has been performed to the cloud server 20.
- On a certain day in winter, the cloud server 20 transmits to the home appliance 10 an output request for having the user confirm the operation mode. Although the home appliance 10 receives this output request, it does not respond to it, and therefore transmits to the cloud server 20 output information (no output) indicating that the audio output was not performed, together with the reason (here, that priority was given to outputting the second data, the audio data built into the home appliance 10).
- The cloud server 20 receives this, and if the home appliance is still in the cooling operation five minutes after the output request (the cloud server 20 acquires state information indicating the operating state of the home appliance 10 at predetermined intervals), it transmits again the output request for outputting the voice data for having the user confirm the operation mode. Upon receiving this, the home appliance 10 outputs the voice "Are you inadvertently running the cooling operation?" and transmits output information (output completed) indicating that the voice output has been performed to the cloud server 20.
- The cloud server 20 transmits to the home appliance 10 an output request for voice output of voice data for having the user check the operation mode. Although the home appliance 10 receives this output request, it does not respond to it, and therefore transmits to the cloud server 20 output information (no output) indicating that the audio output was not performed, together with the reason (here, that there is no person in the room, that is, information on whether a person is detected). When the cloud server 20 receives this, since the reason the voice output was not performed is that there is no person in the room, it does not transmit the output request again even if the home appliance is still in the cooling operation five minutes after the output request.
- As described above, the output sound can be given continuity, the possibility that the user hears the sound can be increased by re-output (retry), and unnecessary output can be suppressed.
- In the present embodiment, the condition setting unit 21b sets the expiration date of the audio data as an output condition, and the audio output unit 14 does not output audio corresponding to audio data whose expiration date has passed. Except for this, the configuration is the same as in Embodiment 1, so the same reference numerals are used and description thereof is omitted.
- The capacity of the storage unit 7 of the home appliance adapter 5 is limited, and it may not be able to hold much audio data.
- For example, the storage unit 7 can hold 64 pieces of audio data of about 7 seconds each.
- Some audio data becomes meaningless or conveys wrong information depending on the time; holding such audio data in the storage unit 7 is useless and narrows the variety of audio data that can be output.
- Therefore, the condition setting unit 21b sets the expiration date of the audio data as the output condition.
- The expiration date is set for each piece of audio data and transmitted to the home appliance adapter 5.
- The audio data with the expiration date set is output from the audio output unit 14 under the control of the control unit 6.
- At this time, the control unit 6 checks the expiration dates and outputs the audio data that is within its expiration date, in order from the earliest expiration date.
- Since the expiration date of the audio data is set and the audio data is output according to it, audio that would be meaningless or wrong depending on the timing need not be output, and only audio data within its expiration date can be output in a timely manner.
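A minimal sketch of this expiration-date check (the "expires" field name and the datetime representation are assumptions; the patent only states that unexpired data is output earliest-expiring first):

```python
from datetime import datetime

def playable(voices, now):
    """Keep only audio data still within its expiration date and order it
    earliest-expiring first, as the control unit 6 is described as doing."""
    valid = [v for v in voices if v["expires"] > now]
    return sorted(valid, key=lambda v: v["expires"])

now = datetime(2015, 2, 1, 12, 0)
voices = [
    {"message": "old",   "expires": datetime(2015, 2, 1, 9, 0)},   # already expired
    {"message": "soon",  "expires": datetime(2015, 2, 1, 13, 0)},
    {"message": "later", "expires": datetime(2015, 2, 2, 12, 0)},
]
queue = playable(voices, now)
```

The expired entry is skipped, and the remaining data is queued with the soonest-expiring message first.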
- Furthermore, audio data that has expired is deleted from the storage unit 7.
- For example, the audio data is deleted as follows.
- Based on time information, the cloud server 20 deletes the expired voice data from the voice list corresponding to the target home appliance 10. Thereafter, it communicates with the home appliance adapter 5, and the voice data in the storage unit 7 of the home appliance adapter 5 is deleted so as to stay synchronized with the voice list. This deletion suppresses useless holding and useless sound output, and increases the proportion of effective sound data.
- The deletion here is based on time information, but it may instead occur at the time of regular communication between the home appliance adapter 5 and the cloud server 20, or when a voice output request is made from the cloud server 20 and new voice data is registered in the home appliance adapter 5.
- The audio system 100, the home appliance 10, the home appliance adapter 5, and the cloud server 20 (in particular, the control units 6, 11, and 21) described in Embodiments 1 to 3 may each be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
- In the latter case, the voice system 100, the home appliance 10, the home appliance adapter 5, and the cloud server 20 each include a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU), a RAM (Random Access Memory) into which the program is loaded, and the like.
- The object of the present invention is achieved when the computer (or CPU) reads the program from the recording medium and executes it.
- As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
- The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
- The present invention is not limited to the above-described embodiments, and various modifications are possible; embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in each embodiment.
- The voice system (100) includes a voice server and an electronic device. The voice server includes a voice creation unit (21a) that creates voice data, a condition setting unit (21b) that sets output conditions for voice output for each piece of voice data, and a transmission unit (control unit 21) that transmits condition information indicating the set output conditions to the electronic device (home appliance 10). The electronic device includes a reception unit (control unit 6) that receives the condition information, and an output control unit (control unit 6, control unit 11) that performs audio output according to audio data in accordance with the received condition information.
- With this configuration, the audio server can set the output condition of the audio output for each piece of audio data, and the electronic device performs the audio output according to the audio data in accordance with that output condition. Therefore, audio output with improved user convenience can be performed from the electronic device. In addition, the audio output from the electronic device can be utilized more effectively according to various situations.
- In the voice system, the audio creation unit generates first audio data, the condition setting unit sets an output condition of the first audio data, and the transmission unit transmits the first audio data together with first condition information indicating the output condition set for it to the electronic device.
- The output control unit specifies audio data based on the first condition information received from the voice server and on second condition information indicating a prescribed output condition of second audio data other than the first audio data, and performs audio output according to the specified audio data.
- Since the audio data to be output is specified based on both the first condition information and the second condition information, the selection range of the audio data to be output is widened, and audio data can be output appropriately according to the set conditions.
- The condition setting unit may set the priority of the audio data as the output condition; for audio data in which the priority is set, the output control unit specifies the audio data to be output as audio according to that priority.
- The priority set by the condition setting unit as the output condition may change with time, and the output control unit specifies the audio data to be output according to that priority.
- In the audio system according to aspect 5 of the present invention, the electronic device includes a notification unit (LED lamp 18) that notifies the user of the presence of audio data to be output.
- When an output instruction is received from the user, the output control unit outputs the sound corresponding to the audio data to be output.
- Thus, the user can be notified of the presence of audio data and can issue an output instruction only when the user wants to confirm the notification and have it output. Therefore, user convenience is further improved.
- The condition setting unit may set, as the output condition, a predetermined event serving as a trigger for outputting audio according to the audio data.
- The output control unit then performs audio output according to the audio data when the predetermined event occurs.
- The occurrence of the predetermined event includes, for example, a case where a button of the electronic device is pressed, and a case where a predetermined time has elapsed after detecting that no user is near the electronic device at the time of outputting the audio data. The audio may also be output a plurality of times when the button is pressed.
- The electronic device includes an output information transmission unit (control unit 6) that transmits output information relating to the output state of the audio data output this time. The sound creation unit creates the audio data to be output next time by the output control unit based on the output information received from the electronic device, and the condition setting unit sets the output condition of the created audio data to be output next time.
- The transmission unit transmits the condition information indicating the set output condition of the audio data to be output next time, together with that audio data, to the electronic device.
- Based on the output information of the audio data output this time (for example, whether audio output was performed, or whether it was performed in the presence of a user (user detected)), the audio data to be output next time is created.
- The created audio data to be output next time and the condition information indicating its output condition are received by the electronic device. Since the audio data output next time is based on the output information of the audio data output this time, audio output with improved user convenience can be performed. For example, the next audio data following the previously output audio data can be output, or audio data that was not output can be re-output (retried) as necessary.
- The condition setting unit may set an expiration date of the audio data as the output condition, and the output control unit does not output audio according to audio data whose expiration date has passed.
- The electronic device may include a voice switching unit (16) that switches the audio data to be output as audio to either the first audio data, the second audio data, or both.
- Thus, the electronic device can output whichever of the first audio data received from the audio server and the second audio data stored in advance the user wants to have output (wants to listen to).
- The electronic device may include a sound setting unit (15) that individually sets the volume or sound quality of the audio data to be output, for the first audio data and the second audio data.
- Thus, the volume or sound quality of the first audio data received from the audio server and of the second audio data held in advance can be set individually on the electronic device from the sound setting unit, so that they are easy for the user to hear. If the first audio data and the second audio data are set to the same voice and volume, the user can listen to both without distinction; if they are set differently, the user can easily distinguish between them.
- an electronic device according to aspect 11 of the present invention is an electronic device provided in any one of the sound systems according to aspects 1 to 10 described above.
- the voice server according to aspect 12 of the present invention is a voice server provided in any one of the voice systems according to the above aspects 1 to 10.
- the voice system according to any one of the above aspects 1 to 10 can be configured by using the electronic device and the voice server.
- The audio system, electronic device, or audio server according to each aspect of the present invention may be realized by a computer. In this case, a program that realizes the audio system, electronic device, or audio server by causing the computer to operate as each unit provided therein, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
- the present invention can be used for an audio system including an electronic device that is connected to a communication network and outputs audio received from a server.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- General Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Selective Calling Equipment (AREA)
- Telephonic Communication Services (AREA)
Abstract
An audio system or the like is constructed in which audio output convenient for the user is produced from an electronic device. An audio system (100) is provided with a cloud server (20) that sets output conditions for each piece of audio data and transmits condition information indicating the output conditions to an electronic device, and a home appliance (10) that performs audio output corresponding to the audio data in accordance with the received condition information.
Description
The present invention relates to an audio system including an electronic device that outputs audio received from a server.
When some action occurs in a home appliance (household electronic device), for example when the user opens the appliance's door, presses one of its buttons, or the appliance's operating state changes, some appliances output (reproduce, utter) a voice built into the appliance from a speaker in response to that action. Among such voice-outputting appliances, home appliances (household electronic devices) that output voice received from a server connected over a network have also been developed. For example, Patent Document 1 discloses a refrigerator that selects and outputs an audio signal or a music signal received from a center server according to the operating status or environmental status of the refrigerator body. Also, for example, Patent Document 2 discloses a configuration in which an adapter is provided separately from the home appliance; this adapter receives audio data corresponding to a property from an application server and stores it, and the audio data is output from the sound generation unit of the home appliance.
However, the conditions under which conventional home appliances output audio are fixed, so the output cannot be adapted flexibly. In other words, with the conventional technology, the home appliance may fail to produce audio output suited to the user's convenience, or may fail to produce audio output in a way that is used effectively.
The present invention has been made in view of the above problems, and an object thereof is to construct an audio system or the like in which audio output suited to the user's convenience is produced from an electronic device.
In order to solve the above problems, an audio system according to one aspect of the present invention includes an audio server having an audio creation unit that creates audio data, a condition setting unit that sets output conditions for audio output for each piece of audio data, and a transmission unit that transmits condition information indicating the set output conditions to an electronic device; and an electronic device having a reception unit that receives the condition information and an output control unit that performs audio output according to audio data in accordance with the received condition information.
With the audio system according to this aspect of the present invention, the audio server can set an output condition for each piece of audio data, and the electronic device performs audio output according to the audio data in accordance with that output condition. This makes it possible to perform audio output with improved user convenience, and to make more effective use of audio output in a variety of situations.
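As a rough illustration of the per-audio-data association just described (not part of the disclosure; all names are hypothetical), audio data and its output condition can be thought of as linked by shared identification information:

```python
from dataclasses import dataclass

# Illustrative sketch only: audio data and its output condition are linked
# by shared identification information, so the receiving device can decide
# when each piece of audio should be played.
@dataclass
class AudioData:
    audio_id: str    # identification information attached by the server
    payload: bytes   # synthesized or pre-recorded audio

@dataclass
class ConditionInfo:
    audio_id: str    # same identifier, tying the condition to its audio
    trigger: str     # e.g. "door_opened" or "time:07:00"

def matching_audio(conditions, event):
    """Return the identifiers of audio whose output condition matches an event."""
    return [c.audio_id for c in conditions if c.trigger == event]

conditions = [ConditionInfo("a1", "door_opened"), ConditionInfo("a2", "time:07:00")]
print(matching_audio(conditions, "door_opened"))  # ['a1']
```

The triggers shown are placeholders; the embodiments below describe the actual kinds of output conditions contemplated.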
[Embodiment 1]
Hereinafter, an embodiment of the present invention will be described with reference to FIGS. 1 to 6 and FIG. 10.
(Configuration of the audio system)
FIG. 1 is a diagram showing a schematic configuration of an audio system 100 according to the present embodiment. As shown in FIG. 1, the audio system 100 is configured so that home appliances (electronic devices) 10-1, 10-2, 10-3, 10-4 installed in a user's home, a cloud server (audio server) 20, and communication terminal devices 30-1, 30-2, 30-3 are connected via a wide area communication network 62. FIG. 1 illustrates four home appliances 10-1 to 10-4 and three communication terminal devices 30-1 to 30-3; however, their number and types are not limited, and when individual description is unnecessary they are referred to generically as the home appliance 10 and the communication terminal device 30. The number of user homes included in the audio system 100 is likewise not limited.
In the present embodiment, a home appliance adapter 5 (5-1, 5-2, 5-3, 5-4) is connected to each home appliance 10. The home appliance adapter 5 is a device that connects the home appliance 10 to the wide area communication network 62, turning it into a so-called network home appliance that can be controlled via the wide area communication network 62.
In the present embodiment, the cloud server 20 registers each communication terminal device 30 in association with a home appliance adapter 5, and the communication terminal device 30 can remotely operate, via the cloud server 20, the home appliance 10 connected to the home appliance adapter 5 registered in association with it. The communication terminal device 30 also receives from the cloud server 20 information about that home appliance 10. Examples of the communication terminal device 30 include smartphones and tablet terminals. A plurality of home appliances 10 can be remotely operated from one communication terminal device 30, and the home appliance 10 connected to one home appliance adapter 5 can be remotely operated from a plurality of communication terminal devices 30.
The user's home 50 is provided with a wireless LAN (Wireless Local Area Network) as a narrow area communication network, and a relay station 40 of the wireless LAN is connected to the wide area communication network 62, which includes the Internet. The relay station 40 is a communication device such as a WiFi (registered trademark) router or a WiFi access point. Although a configuration in which the wide area communication network 62 includes the Internet is illustrated here, a telephone line network, a mobile communication network, a CATV (CAble TeleVision) communication network, a satellite communication network, or the like can also be used.
Via the wide area communication network 62 and the wireless LAN relay station 40, the cloud server 20 can communicate with the home appliance adapter 5 installed in the user's home 50. The cloud server 20 and the communication terminal device 30 can likewise communicate via the wide area communication network 62. The communication terminal device 30 connects to the Internet in the wide area communication network 62 using 3G (3rd Generation), LTE (Long Term Evolution), a home or public WiFi access point, or the like. Note that the home appliance adapter 5 and the communication terminal device 30 are both wireless communication devices and can also communicate with each other via the relay station 40 without going through the wide area communication network 62.
(Configuration of the cloud server)
FIG. 3 is a block diagram showing a schematic configuration of the cloud server 20 in the audio system 100. The cloud server 20 is a server that manages each home appliance 10 and, as shown in FIG. 3, includes a control unit 21, a storage unit 22, and a communication unit 23. The control unit 21 is a computer device composed of an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and is a block that controls the operation of each unit of the cloud server 20. The control unit 21 also functions as an audio creation unit 21a that creates the audio data to be output by the audio output unit 14 of the home appliance 10, and as a condition setting unit 21b that sets, for each piece of audio data, an output condition for the time of audio output.
The audio creation unit 21a is a block that creates audio data: it creates audio data with a speech synthesis function, adds identification information, and puts the data into a state in which it can be transmitted to the home appliance 10. Adding identification information to pre-recorded audio data, rather than synthesized speech, and making it transmittable to the home appliance 10 is also included in audio creation by the audio creation unit 21a. Hereinafter, the audio data created by the audio creation unit 21a is referred to as first audio data. The first audio data is transmitted to the home appliance 10 by the control unit 21 controlling the communication unit 23.
The condition setting unit 21b sets an output condition (attribute) for each piece of audio data, adds identification information to the condition information indicating the set output condition, and puts the condition information into a state in which it can be transmitted to the home appliance 10. The condition setting unit 21b may set an output condition not only for the first audio data created by the audio creation unit 21a, but also for audio data that the home appliance 10 holds in advance (for example, from the time of shipment). Hereinafter, the latter audio data is referred to as second audio data. Specific examples of audio data and output conditions are described later. The condition information is transmitted to the home appliance 10 by the control unit 21 controlling the communication unit 23. Since an output condition is set for each piece of audio data, the condition information indicating the set condition is associated with each piece of audio data. The condition information indicating the output condition set for the first audio data is referred to as first condition information, and that set for the second audio data is referred to as second condition information.
The cloud server 20 is also configured to create audio data and set output conditions based on data from the communication terminal device 30. Furthermore, the cloud server 20 may be configured to receive audio data and information related to output conditions (for example, weather information) from another server or the like via the wide area communication network 62, and to create audio data or set output conditions based on such information.
The storage unit 22 is a block that stores various kinds of information (data) used by the cloud server 20. The storage unit 22 has a DB (DataBase) 22a that stores, for each home appliance 10, the first audio data created by the cloud server 20, the condition information associated with the audio data, and state information indicating the state of the home appliance 10. This DB 22a enables the cloud server 20 to control each home appliance 10 individually.
The communication unit 23 is a block that communicates with the home appliance adapter 5 via the wide area communication network 62. The communication unit 23 also communicates with the communication terminal device 30 via the wide area communication network 62. The control unit (transmission unit) 21 controls the communication unit 23 to transmit the condition information indicating the set output condition to the home appliance adapter 5.
(Configuration of the home appliance adapter and home appliance)
The home appliance adapter 5 is a device that enables the home appliance 10 to perform network communication. The home appliance adapter 5 receives various data, including operation signals (commands), from the cloud server 20. The data received from the cloud server 20 includes data originating from the cloud server 20 itself as well as data from the communication terminal device 30 relayed through the cloud server 20. The received data may control the home appliance adapter 5 itself, or may be passed on to the home appliance 10 to control it. The home appliance adapter 5 also transmits information about the home appliance 10, conveyed from the home appliance 10, to the cloud server 20.
FIG. 2 is a block diagram showing a schematic configuration of the home appliance 10 and the home appliance adapter 5 in the audio system 100. As shown in FIG. 2, the home appliance adapter 5 includes a control unit (reception unit, output control unit) 6, a storage unit 7, a communication unit 8, and a connection unit 9.
The control unit 6 is a block that comprehensively controls the operation of each unit of the home appliance adapter 5. The control unit 6 is a computer device composed of an arithmetic processing unit such as a CPU or a dedicated processor. By reading and executing programs stored in the storage unit 7 for carrying out the various controls of the home appliance adapter 5, the control unit 6 comprehensively controls the operation of each of its units. The control unit 6 also functions as a reception unit that controls the communication unit 8 to receive the first audio data and the condition information from the cloud server 20, and as an output control unit that, in accordance with the received condition information, performs audio output corresponding to the audio data through the sound setting unit 15. In the present embodiment, it is assumed that the control unit 6 can control the sound setting unit 15 to perform audio output.
The storage unit 7 is a block that stores various kinds of information used by the home appliance adapter 5. The storage unit 7 also stores the first audio data and the condition information received from the cloud server 20.
The communication unit 8 is a block that communicates with the cloud server 20 via the wide area communication network 62. The connection unit 9 is a block that communicates with the connection unit 19 of the home appliance 10. The connection between the connection unit 9 and the connection unit 19 of the home appliance 10 may be, for example, a connection using a USB (Universal Serial Bus) connector.
The home appliance 10 is, for example, an air conditioner, refrigerator, washing machine, cooking appliance, lighting device, water heater, imaging device, any of various AV (Audio-Visual) devices, or any of various household robots (for example, a cleaning robot, housework support robot, animal-type robot, or the like). As shown in FIG. 2, the home appliance 10 includes a control unit (output control unit) 11, a storage unit 13, an audio output unit 14, a sound setting unit 15, an audio switching unit 16, a state detection unit 17, an LED (Light Emitting Diode) lamp (notification unit) 18, and a connection unit 19. Under the control of the control unit 11, when the home appliance 10 receives an operation signal, it performs an operation according to the received operation signal.
The control unit 11 is a block that controls the operation of each unit of the home appliance 10. The control unit 11 is a computer device composed of an arithmetic processing unit such as a CPU or a dedicated processor. By reading and executing programs stored in the storage unit 13 for carrying out the various controls of the home appliance 10, the control unit 11 comprehensively controls the operation of each of its units. The control unit 11 also functions as an output control unit that outputs the second audio data stored in the storage unit 13 through the sound setting unit 15 in accordance with its set conditions.
The storage unit 13 includes a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), and the like, and is a block that stores various data used by the home appliance 10. The storage unit 13 stores in advance the second audio data to be output as audio from the home appliance 10. An output condition is also set for the second audio data and is stored in the storage unit 13 as second condition information. The second condition information may have been stored since shipment of the home appliance 10, or may be received as a condition that the cloud server 20 has set or changed for the second audio data. When there is no need to distinguish between the first audio data and the second audio data, or when the meaning is clear without the distinction, they are referred to simply as audio data. Also, "outputting audio data" is used with the same meaning as "outputting audio corresponding to the audio data."
The state detection unit 17 is a block that detects state information indicating the state of the home appliance 10. Examples of the state information include information indicating setting status and operating status. The state information may also be environmental information about the situation in which the home appliance 10 is placed, that is, its surrounding environment. Examples of environmental conditions include the temperature and humidity inside or outside the user's home 50. These are merely examples, and any information related to the home appliance 10 may be used.
The audio output unit 14 is an audio output device such as a speaker. The control unit 6 outputs audio corresponding to the first audio data stored in the storage unit 7, and the control unit 11 outputs audio corresponding to the second audio data stored in the storage unit 13, from the audio output unit 14. Details of audio data output are described later.
The sound setting unit 15 is a block that sets (adjusts) the volume or sound quality of the audio output from the audio output unit 14, individually for the first audio data stored in the storage unit 7 of the home appliance adapter 5 and the second audio data stored in the storage unit 13. Upon accepting a user operation, the control unit 11 controls the sound setting unit 15 to set the volume or sound quality. The volume or sound quality may also be set according to an instruction from the cloud server 20.
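The independent per-source adjustment performed by the sound setting unit 15 can be sketched as follows (an illustration only, with hypothetical names; not the disclosed implementation):

```python
# Illustrative sketch only: volume (or quality) is held separately for
# server-delivered (first) and preloaded (second) audio data, so each can
# be adjusted without affecting the other.
settings = {
    "first":  {"volume": 5, "quality": "normal"},
    "second": {"volume": 5, "quality": "normal"},
}

def set_volume(source, volume):
    settings[source]["volume"] = volume  # source is "first" or "second"

set_volume("first", 3)  # e.g. make server-delivered messages quieter
print(settings["first"]["volume"], settings["second"]["volume"])  # 3 5
```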
Because the sound setting unit 15 is configured in this way, the home appliance 10 can set the volume or sound quality individually for the first audio data received from the cloud server 20 and the second audio data held in advance by the home appliance 10. The settings can therefore be made so that the audio is easy for the user to hear. If the first audio data and the second audio data are set to the same voice and volume, the user hears both kinds of audio data without distinction; if they are set differently, the user can easily tell the two apart.
The audio switching unit 16 is a block with a switch function that switches the audio data output from the audio output unit 14 to either one, or both, of the first audio data and the second audio data. Switching by the audio switching unit 16 may be performed upon accepting a user operation, or according to an instruction from the cloud server 20. Because the audio switching unit 16 is configured in this way, the home appliance 10 can output either or both of the first audio data and the second audio data, that is, the audio data the user wants to hear.
Under the control of the control unit 11, the LED lamp 18 lights (or blinks) when the home appliance 10 has audio data to be output. The LED lamp 18 is provided on a button (or provided as a button), and a sensor that detects the button being pressed is provided. When the sensor detects that the button has been pressed, under the control of the control unit 11 the audio data to be output is output as audio and the LED lamp 18 is turned off. The LED lamp 18 may be configured so that its lighting color or lighting pattern changes according to the type of the audio data or its expiration date, described later. Some home appliances 10 are not provided with the LED lamp 18. The LED lamp 18 and the button (button function) may also be provided separately.
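The lamp-and-button behavior described above can be sketched roughly as follows (hypothetical class and names for illustration only; the disclosed device realizes this in hardware under the control unit 11):

```python
# Illustrative sketch only: the lamp lights while audio is pending, and a
# button press plays the pending audio and turns the lamp off.
class AppliancePanel:
    def __init__(self):
        self.pending = []      # audio data waiting to be output
        self.led_on = False

    def enqueue(self, audio):
        self.pending.append(audio)
        self.led_on = True     # notify the user that audio is waiting

    def press_button(self):
        played = list(self.pending)
        self.pending.clear()
        self.led_on = False    # lamp goes out once the audio is output
        return played

panel = AppliancePanel()
panel.enqueue("laundry finished")
print(panel.led_on)            # True
print(panel.press_button())    # ['laundry finished']
print(panel.led_on)            # False
```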
The connection unit 19 is a block that communicates with the connection unit 9 of the home appliance adapter 5.
As described above, the present embodiment has a configuration in which the home appliance adapter 5, which enables remote operation of the home appliance 10, is externally attached. By making the communication function that enables remote operation an optional external unit, the common home appliance adapter 5 can be applied to different types of home appliances 10, keeping costs down. Of course, a configuration in which the communication function is built into the home appliance 10 in advance (a configuration in which the home appliance 10 and the home appliance adapter 5 are integrated) may also be used.
Furthermore, the home appliance 10 is configured to accept not only remote operation from the communication terminal device 30 but also operation by short-range wireless communication, for example infrared, from a remote controller (not shown), and operation from a main-body operation unit (not shown). Operation by voice or gesture may also be possible.
With the above configuration of the audio system 100, the cloud server 20 can set an output condition for each piece of audio data, and the home appliance 10 performs audio output according to the audio data in accordance with that output condition, making it possible to perform audio output with improved user convenience and to make more effective use of audio output in a variety of situations. In addition, there are the first audio data, received from the cloud server 20 and stored in the storage unit 7 of the home appliance adapter 5, and the second audio data, stored in the storage unit 13 of the home appliance 10 from the time of shipment, and output conditions are set for both. The range of audio data that can be selected for output from the audio output unit 14 is therefore widened, and the audio data can be output appropriately in accordance with the set output conditions.
(Identifying the audio data to be output)
The audio data output from the audio output unit 14 includes the first audio data and the second audio data. The control unit 6 identifies the audio data to be output this time based on the first condition information, which indicates the output condition set for the first audio data received from the cloud server 20, and the control unit 11 does so based on the second condition information, which indicates the output condition set for the second audio data; audio output corresponding to the identified audio data is then performed from the audio output unit 14.
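A minimal sketch of this two-sided identification step, under the assumption (for illustration only) that each side keeps its condition information as a simple identifier-to-trigger mapping:

```python
# Illustrative sketch only: the adapter-side controller consults the first
# condition information it holds, and the appliance-side controller the
# second; each identifies the audio whose condition is now satisfied.
first_conditions = {"s1": "door_opened"}   # first audio data (from server)
second_conditions = {"f1": "power_on"}     # second audio data (preloaded)

def identify_first(event):
    # corresponds to control unit 6 checking the first condition information
    return [a for a, cond in first_conditions.items() if cond == event]

def identify_second(event):
    # corresponds to control unit 11 checking the second condition information
    return [a for a, cond in second_conditions.items() if cond == event]

print(identify_first("door_opened"))   # ['s1']
print(identify_second("door_opened"))  # []
```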
The process of identifying the audio data to be output is described next using specific examples.
Suppose the output condition corresponding to the first condition information or the second condition information is that audio is to be output when the state detection unit 17 detects specific state information of the home appliance 10. In this case, when the state detection unit 17 detects the specific state information, that is, when the output condition is satisfied, the control unit 6 or the control unit 11 identifies the first audio data or the second audio data for which that output condition is set, and outputs it. The specific state information of the home appliance 10 that triggers audio output includes both information obtained from items common to home appliances 10 of different types, model numbers, and so on (for example, the LED lamp 18 provided as a button) and information obtained from items specific to a particular type of home appliance 10 (such as a door).
The output condition corresponding to the first condition information or the second condition information may also be that audio is output at a specific time (so-called chatter output). In this case, when the specific time arrives, that is, when the output condition is satisfied, the control unit 6 or the control unit 11 identifies the first audio data or the second audio data for which that output condition is set, and outputs it.
(Output conditions)
The output conditions set by the condition setting unit 21b are further described using specific examples.
The output condition may be, for example, that audio is output when a predetermined event serving as a trigger for audio output occurs. The predetermined event is, for example, an event in which the home appliance 10 is powered on, an event in which the button provided with the LED lamp 18 of the home appliance 10 is pressed, or an event in which a predetermined time has elapsed after detecting that no user was near the home appliance 10 when the audio data was output. The condition may also be, for example, that audio is output a plurality of times when the predetermined event occurs (for example, when the button is pressed). Thus, when the condition setting unit 21b sets, as the output condition, a predetermined event that triggers output of audio corresponding to the audio data, the control unit 6 and the control unit 11 output the audio data corresponding to the predetermined event when that event occurs.
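An event-triggered condition with a repeat count, as described above, can be sketched like this (hypothetical names; a simplification, not the disclosed implementation):

```python
# Illustrative sketch only: an output condition names a trigger event and
# how many times to output (e.g. play twice when the button is pressed).
conditions = {"greeting": {"event": "power_on", "repeat": 2}}

def on_event(event):
    played = []
    for audio_id, cond in conditions.items():
        if cond["event"] == event:
            played.extend([audio_id] * cond["repeat"])  # repeat as configured
    return played

print(on_event("power_on"))     # ['greeting', 'greeting']
print(on_event("door_opened"))  # []
```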
When the user is notified of the presence of audio data to be output by blinking the LED lamp 18, the control unit 6 or the control unit 11 performs audio output corresponding to that audio data from the audio output unit 14 upon receiving an output instruction from the user, for example when the button provided with the LED lamp 18 is pressed. In this case, the output condition is that the audio is output when an output instruction from the user is received. The blinking of the LED lamp 18 can notify the user of the presence of the audio data, and the user can issue the output instruction only when, having confirmed the notification, the user wants the audio to be output. This further improves user convenience.
The output condition set by the condition setting unit 21b may also be a condition for re-outputting (replaying) the audio data. A condition for re-output is, for example, as follows. Suppose audio data is assigned output condition A, that the LED lamp 18 blinks before the audio is output, and output condition B, that the audio is output preferentially when the button provided with the LED lamp 18 is pressed. After the user presses the button and the audio data is output, if the condition for re-output is satisfied, condition A is changed, until the re-output occurs, to the state in which there is no audio to be prioritized (for example, the lamp is lit steadily or turned off) while condition B is maintained; if the condition for re-output is not satisfied, condition B lapses.
The cloud server 20 holds (manages) the output conditions set for each piece of audio data as list information (a voice list) of audio data output conditions for each home appliance 10. The home appliance adapter 5 of each home appliance 10 likewise manages the output conditions received from the cloud server 20 as a voice list. A voice list includes at least identification information of the audio data, an index, and various operation flags. The operation flags are used to confirm the execution of operations; specific examples are described later. A part of the voice list held by the cloud server 20 is transmitted to the home appliance adapter 5 and becomes the voice list held by the home appliance adapter 5. Note that the identification information of the audio data is common to the cloud server 20 and the home appliance adapter 5.
For entries in the voice list managed by the cloud server 20 that are to be transmitted to the home appliance adapter 5, the "DL (download) flag" among the operation flags is set (turned ON). The home appliance adapter 5 downloads the audio data whose DL flag is set. If the home appliance adapter 5 has already downloaded that audio data, it does nothing and simply keeps the audio data. For entries that have already been transmitted to the home appliance adapter 5 and are to be deleted from it, the DL flag is cleared (turned OFF).
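As an illustration only (the patent does not prescribe an implementation), the DL-flag synchronization described above can be sketched as follows; the class and method names are hypothetical:

```python
class ApplianceAdapter:
    """Hypothetical sketch of the DL-flag synchronization described above."""

    def __init__(self):
        self.downloaded = {}  # audio id -> audio data held locally

    def sync(self, server_list, fetch):
        # server_list: {audio_id: {"dl_flag": bool}} as held by the cloud server.
        # fetch(audio_id) downloads the audio data for one identifier.
        for audio_id, entry in server_list.items():
            if entry["dl_flag"]:
                # Flag set: download, unless already held (then do nothing).
                if audio_id not in self.downloaded:
                    self.downloaded[audio_id] = fetch(audio_id)
            else:
                # Flag cleared: the entry was sent before and is to be deleted.
                self.downloaded.pop(audio_id, None)
```

Run against a server-side list, a set flag causes a download and a cleared flag causes deletion, while already-held data is left untouched.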
Deletion may also be performed by another method. For example, the home appliance adapter 5 may transmit to the cloud server 20 the identification information of the audio data to be deleted, and the cloud server 20 may then set the index value corresponding to that identification information to empty (NULL) in the voice list it manages.
Here, the cloud server 20 and the home appliance adapter 5 are assumed to communicate periodically; in this communication, the voice list managed by the home appliance adapter 5 is corrected, and the audio data itself is transmitted and received as necessary. The voice list managed by the home appliance adapter 5 contains the audio data it has received or is scheduled to receive, and entries deleted from the voice list managed by the cloud server 20 by the method described above are removed from the adapter's own voice list.
The voice-list management method described above, including the deletion of audio data, is a description of one embodiment of the present invention; other embodiments may use other methods. Also, in this embodiment, the list managed by the home appliance adapter 5 is a part of the voice list managed by the cloud server 20, but the voice list managed by the cloud server 20 may instead be identical to the list managed by the home appliance adapter 5.
An example of the voice list is shown in FIG. 10. In the voice list of FIG. 10, the conditions are set for each piece of audio data, and "message" is the content of the audio output for that audio data. In FIG. 10 the message itself serves as the identification information of the audio data, but in the voice list as well, the audio data need only be identifiable by an ID code, as described later. Using the voice list, the audio data corresponding to the identification information (the audio data actually output) can be retrieved.
In the voice list of FIG. 10, trigger, notification, and re-output are output conditions (attributes) of the audio data, and a flag that is set (equal to 1) indicates that the corresponding output condition is set for that audio data. In the voice list of FIG. 10, the audio data for the message "Good morning" has the door flag set under trigger; "Good morning" is therefore set with the output condition that it is output when the event of the door being opened occurs as a trigger. The audio data for the message "It is packed too full" has the button and door flags set under trigger, and the re-output flag set; it is therefore set with the output conditions that it is output when the event of the door being opened or of the button being pressed occurs as a trigger, and that re-output is performed. The audio data for the message "XX-chan, there is cake" has the button and door flags set under trigger, as well as the notification and re-output flags; it is therefore set with the output conditions that it is output when the event of the door being opened or of the button being pressed occurs as a trigger, that notification by the LED lamp 18 (that is, blinking) is performed, and that re-output is performed.
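The flag layout of FIG. 10 and the selection of candidate audio data for a trigger event can be sketched as follows (an illustrative model only; the field names and messages mirror the examples above but are hypothetical):

```python
# Hypothetical model of the voice-list entries of FIG. 10.
# Each flag corresponds to an output condition (attribute) of the audio data.
VOICE_LIST = [
    {"message": "Good morning",           "trigger": {"door"},           "notify": False, "reoutput": False},
    {"message": "It is packed too full",  "trigger": {"door", "button"}, "notify": False, "reoutput": True},
    {"message": "XX-chan, there is cake", "trigger": {"door", "button"}, "notify": True,  "reoutput": True},
]

def candidates_for(event, voice_list=VOICE_LIST):
    """Return the messages whose trigger flags include the given event."""
    return [e["message"] for e in voice_list if event in e["trigger"]]
```

For example, `candidates_for("button")` returns only the two entries whose button flag is set, while `candidates_for("door")` returns all three.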
(Priority)
There may be a predetermined event that triggers audio output (for example, a user operation on the home appliance 10) for which multiple pieces of audio data exist. In the present embodiment, the condition setting unit 21b therefore sets, as the output condition, a priority for outputting the audio data. In this case, when the predetermined event occurs (for example, when state information resulting from a user operation is detected by the state detection unit 17), the control unit 6 and the control unit 11 identify the audio data to be output according to this priority and output it from the audio output unit 14. The higher the priority, the higher the output probability, that is, the more frequently the audio data is output.
Priority setting is described below. The condition setting unit 21b sets a priority for each piece of audio data. Here, the priority value varies with the time of day. The set priorities are transmitted from the cloud server 20 and received by the home appliance adapter 5. The priorities may be transmitted and received, for example, as a graph covering a fixed period, as shown in FIG. 4. By transmitting and receiving the priorities for a fixed period, audio output according to the priorities remains possible for that period even if communication between the home appliance adapter 5 and the cloud server 20 is interrupted.
In FIG. 4, voices A to D are all first audio data, and home-appliance voices E and F are second audio data; all of them correspond to the same predetermined event (for example, a specific user operation). Note that FIG. 4 is merely an example, and the invention is not limited to it. In FIG. 4, the priority of the second audio data is fixed at 0, and whether another priority is higher or lower is expressed as plus or minus; for example, if the priority of the second audio data were fixed at 3, the priority of the first audio data could instead be set between 0 and 6. In FIG. 4 the priority takes seven levels from -3 to +3, but there is no limitation to these levels either. In the priority graphs shown in FIG. 4, the time and priority values are discrete, and the priority at intermediate times is obtained by linear interpolation, as the graphs show.
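The linear interpolation of a priority between the discrete points of a graph like FIG. 4 can be sketched as follows (an illustration only; the patent does not prescribe an implementation):

```python
def priority_at(points, t):
    """Linearly interpolate a priority at time t.

    points: list of (time, priority) pairs sorted by time, e.g. the
    discrete values read off one of the graphs in FIG. 4.
    """
    if t <= points[0][0]:
        return points[0][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            # Straight line between the two surrounding points.
            return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
    return points[-1][1]
```

For a voice described as having priority 1 from time 3 to 7 and priority 3 from time 11, the interpolated priority at time 9 would be 2.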
From the graph showing the priority of voice A in FIG. 4, the priority is 1 from time 3 to 7 (for example, from 3 o'clock to 7 o'clock), 3 from time 11 to 14, and -3 from time 14 onward. Thus, if voice A is audio data with the output condition that it should be output by time 14 (for example, the audio data "Did you take out the burnable garbage?"), this is preferable, because the priority gradually increases before time 14, reaches the maximum value (3) from time 11 to 14 just before the deadline, and falls to the minimum value (-3) after time 14. Information related to the output condition (for the audio data "Did you take out the burnable garbage?", for example, information on the burnable-garbage collection day) may be set in the cloud server 20 by the user from the communication terminal device 30 via the wide area communication network 62.
From the graph showing the priority of voice B in FIG. 4, the priority is -3 from time 0 to 16, 3 from time 16 to 17, and -3 from time 17 onward. Thus, if voice B is audio data concerning a matter to be carried out between times 16 and 17 (for example, the audio data "The news is about to start"), this is preferable, because the priority takes the maximum value (3) only from time 16 to 17 and the minimum value (-3) otherwise.
From the graphs showing the priorities of voices C and D in FIG. 4, the priorities of voices C and D are always set to -1 and 1, respectively. It is therefore preferable to assign voice C to audio data that needs to be output only occasionally, and voice D to audio data that needs to be output frequently.
From the graphs showing the priorities of the home-appliance voices E and F in FIG. 4, the priorities of E and F are always zero; either no priority is set for them, or it is set to zero.
Here, when the priorities are positive numbers, the probability that voice A is output is calculated as the priority of A divided by the sum of the priorities of the candidate voices (here, A to D, E, and F). Since the priorities above range from -3 to 3, the same number, for example 4, is added to all of them to make them positive. In this case, from the graphs of FIG. 4, at times 3 to 7, for example, the priorities of voice A and voice C are 5 (1 + 4), the priorities of voice B, home-appliance voice E, and home-appliance voice F are 4 (0 + 4), and the priority of voice D is 3 (-1 + 4). The probability for A is therefore 5 / (5 + 4 + 5 + 3 + 4 + 4) = 1/5, so from time 3 to 7 voice A is output with probability 1/5.
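A minimal sketch of this selection-probability calculation (an illustration only; the offset of 4 matches the worked example above):

```python
def output_probability(priorities, name, offset=4):
    """Probability that `name` is chosen among the candidate voices.

    priorities: {voice: priority in the -3..3 range}. Adding `offset`
    to every value makes them all positive, as in the example above.
    """
    shifted = {k: v + offset for k, v in priorities.items()}
    return shifted[name] / sum(shifted.values())

# Illustrative priorities at times 3 to 7, per the example in the text.
at_3_to_7 = {"A": 1, "B": 0, "C": 1, "D": -1, "E": 0, "F": 0}
```

With these values, `output_probability(at_3_to_7, "A")` evaluates to 5/25, i.e. 0.2.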
In the present embodiment, if there is audio data that has never been output in the interval between its priority reaching the maximum and falling to the minimum, it may be forcibly output even if the predetermined event serving as its trigger has not occurred.
The priority after output may be set low (so that the audio data is not output again) or left unchanged, depending on the content and type of the audio data. In combination with the priority-setting method above, audio data may also be created that is output with the highest priority when the predetermined event serving as its trigger occurs. In this case, an output condition (a priority-output attribute) that takes precedence over the priority-setting method above is set, and when this condition is satisfied (the priority-output attribute is true), that audio data is output with the highest priority. Alternatively, instead of setting a priority-output attribute, a special priority value may be used (for example, a value such as 4 that lies outside the normal range, when the normal priority range is set to -3 to 3); for audio data with such a value, the output probability above is not calculated, and that audio data is output without fail, or in preference to audio data whose priority lies in the normal range.
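Combining the probability-based selection with the special-value override described above might look like this (a sketch under the stated assumptions; the function name and the tie-break of taking the first override are hypothetical):

```python
import random

def select_voice(priorities, normal_max=3, offset=4, rng=random):
    """Pick one voice for a trigger event.

    Any voice whose priority exceeds the normal range wins outright
    (the special-value override); otherwise one voice is drawn at
    random, weighted by the shifted priorities.
    """
    overrides = [v for v, p in priorities.items() if p > normal_max]
    if overrides:
        return overrides[0]  # override: output with the highest precedence
    shifted = {v: p + offset for v, p in priorities.items()}
    voices = list(shifted)
    weights = [shifted[v] for v in voices]
    return rng.choices(voices, weights=weights, k=1)[0]
```

A voice with priority 4 is always selected ahead of voices in the -3..3 range; with no override present, the draw follows the output probabilities described above.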
(Processing flow for audio output)
The flow of processing for audio output in the audio system 100 is described using specific examples with reference to FIGS. 5 and 6. First, an example of the LED lamp lighting sequence, which notifies the user that there is audio data to be output by lighting the LED lamp 18, is described.
As shown in FIG. 5, when the home appliance adapter 5 confirms that the storage unit 7 contains audio data for which the output condition is set that the LED lamp 18 is lit and the audio is output when a press of the button of the LED lamp 18 is detected (that is, audio for which the LED lamp 18 should be lit) (step B0), it transmits lamp lighting instruction information for lighting the LED lamp 18 to the home appliance 10 (step B1). When the home appliance 10 receives the lamp lighting instruction information (step A0), it lights the LED lamp 18 (step A1). After this, the utterance sequence in which the home appliance 10 performs audio output is executed.
Next, an example of the utterance sequence is described. Here, the predetermined event that triggers audio output is assumed to be a user operation on the home appliance, and multiple pieces of audio data are assumed to correspond to the event. In the utterance sequence, when the home appliance 10 detects a user operation on itself, such as a button being pressed or a door being opened (step B2), it transmits state information describing the detected operation to the home appliance adapter 5. When the home appliance adapter 5 receives the state information (step B2), it selects, from the audio data it holds, the multiple pieces of audio data corresponding to the operation indicated by the state information (step B3). It then determines, based on the priorities, which audio data to output from those selected in step B3 (step B4), and sends the determined audio data directly to the audio output unit 14 (step B5). The home appliance 10 receives this audio data (step A4) and outputs it from the audio output unit 14 (step A5). Along with the audio output, the home appliance adapter 5 transmits the identification information (ID code) of the output audio data to the cloud server 20 as output information indicating which audio data was output (step B6). When the cloud server 20 receives the output information (step C0), it retains it (step C1). The retained output information is used for creating the next audio data, which is described in detail in Embodiment 2.
Here, by transmitting and receiving the identification information of the output audio data as the output information in steps B6 and C1, whether audio output was performed is judged as follows. When the home appliance adapter 5 outputs audio data from the audio output unit 14, it transmits the identification information of that audio data to the cloud server 20. The cloud server 20 holds a voice list for each home appliance 10 and, referring to this voice list, judges whether the audio data identified by the received identification information is in the "output" state. The "not output" state is the case where no identification information corresponding to any audio data in the voice list has been received. The cloud server 20 also generates a "reason for not outputting" based on the content of the audio data output before and after and on the state information of the home appliance.
Next, an example of the voice management/reception sequence, in which audio data is managed by the cloud server 20 and received by the home appliance adapter 5, is described. As shown in FIG. 6, when the cloud server 20 updates the voice list it holds for a home appliance 10 (step C10), it transmits data concerning the updated part of that voice list to the home appliance adapter 5 in the periodic communication. When the home appliance adapter 5 receives the data concerning the updated part of the voice list (step B10), it corrects the voice list it holds based on the received data (step B11); this correction may be an overwrite. It then deletes unnecessary audio data, starts reception of the audio data corresponding to the update (step B12), and requests that audio data from the cloud server 20 (step B13). When the cloud server 20 receives the request (step C12), it transmits the audio data corresponding to the update to the home appliance adapter 5 (step C13), and the home appliance adapter 5 receives it (step B14).
[Embodiment 2]
The present embodiment has a configuration in which, in the audio system 100 of Embodiment 1, the audio data to be output next is further determined based on output information concerning the output state of the audio data output this time by the audio output unit 14. Specifically, in the configuration of the audio system 100, the control unit 6 (output information transmission unit) further has the function of transmitting, from the communication unit 8 to the cloud server 20, output information concerning the output state of the audio data output this time by the audio output unit 14; the voice creation unit 21a creates, based on the received output information, the audio data to be output next by the audio output unit 14; the condition setting unit 21b sets the output condition of the created next audio data; and the control unit 21 transmits, from the communication unit 23 to the home appliance 10, condition information indicating the set output condition of the next audio data together with that audio data. The configuration is otherwise the same as in Embodiment 1, so the same members are given the same reference numerals and their description is omitted.
The output information transmitted from the home appliance adapter 5 to the cloud server 20 is information indicating whether audio output was performed. The output information may also include the state information of the home appliance 10 at that time. Output information that includes state information is, for example, information indicating whether audio output was performed while a user was present (detected) or absent (not detected). Of course, it is not limited to this.
Based on the received output information, the cloud server 20 creates the audio data to be output next and sets the output condition of that audio data; this includes the case where no next audio data is created (or zero data is created), that is, the case where there is no next audio output corresponding to the current one. The condition setting unit 21b of the cloud server 20 sets, as the output condition, for example, whether to output, the output content, and the output time. The next audio data created by the cloud server 20 and the condition information indicating its output condition are transmitted to the home appliance 10. Since the audio data output next is based on the output information of the audio data output this time, audio output that further improves user convenience becomes possible. For example, when output information indicating that the audio was output is received, the next audio data may be created and output with content continuing from the output audio data; when output information indicating that the audio was not output is received, the same audio data that was not output may be output again once state information is detected as the occurrence of the predetermined triggering event after a predetermined time has elapsed. Outputting next audio data whose content continues from the previously output audio data gives continuity to the output audio, and re-output (retry) increases the chance that the user hears the audio.
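The continue-or-retry behaviour described above can be sketched as follows (an illustrative model; the message strings and the function name are hypothetical):

```python
def next_audio(output_info, current_message, continuation=None):
    """Decide the next audio data from this round's output information.

    output_info: "output" or "not_output", as reported by the adapter.
    continuation: hypothetical follow-up content for the current message.
    Returns the next message, or None when no next output corresponds.
    """
    if output_info == "output":
        # Continue with content that follows what was just output.
        return continuation
    # Not output: retry the same audio data on the next trigger event.
    return current_message
```

A reported "output" leads to a continuation (or no next output if none exists), while "not_output" leads to a retry of the same message.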
Specific examples of audio output in the present embodiment are described with reference to FIGS. 7 to 9. FIGS. 7 and 8 show examples in which state information is detected by the home appliance 10 as the occurrence of a predetermined event triggering audio output (an event occurs); FIG. 9 shows an example in which an audio output request is issued by the cloud server 20 without an event. In the example of FIG. 9, the cloud server 20 is assumed to acquire state information indicating the operating state of the home appliance 10 at predetermined intervals. In FIGS. 7, 8, and 9, the home appliance 10 is described as including the home appliance adapter 5.
In the specific example shown in FIG. 7(a), at 7 a.m., when the door of the home appliance 10 (here, a refrigerator) is opened, the home appliance 10 transmits the state information (door open) to the cloud server 20, and in response the cloud server 20 transmits an output request for audio output of today's weather. On receiving this output request, the home appliance 10 outputs the audio "Today's weather is cloudy, then rain" based on it, and transmits to the cloud server 20 output information indicating that audio output was performed (output done). Thereafter, when the home appliance 10 transmits "door open" to the cloud server 20 again at 7:10 a.m., the cloud server 20 responds with an output request; at this point the cloud server 20 has already received "output done", and the current time is 7:10 a.m., that is, only a little time has elapsed since the previous output, so it transmits an output request for audio output of the continuation of today's weather. The home appliance 10 receives this, outputs the audio continuing from the previous output, "Rain in the afternoon. Don't forget your umbrella.", and transmits to the cloud server 20 output information indicating that audio output was performed (output done).
In contrast, in the specific example shown in FIG. 7(b), at 7:00 a.m. the home appliance 10 transmits "door open" to the cloud server 20, and in response the cloud server 20 transmits an output request for voice output of today's weather. Although the home appliance 10 receives this output request, it does not comply with it, and therefore transmits output information indicating that no voice output was performed (not output) to the cloud server 20, together with the reason (here, that output of audio data built into the home appliance 10, i.e., the second audio data, was prioritized). Thereafter, when "door open" is transmitted again from the home appliance 10 to the cloud server 20 at 7:10 a.m., the cloud server 20 transmits an output request in response; since it has already received "not output", it again transmits an output request for voice output of today's weather. Upon receiving this, the home appliance 10 outputs the voice message "Today's weather is cloudy, then rain" and transmits output information (output completed) to the cloud server 20.
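The server-side decision in FIGS. 7(a) and 7(b) can be summarized in a small sketch: if the previous request was declined, resend the same content; if it was output recently, send the continuation. The function name, status strings, and the length of the continuation window are illustrative assumptions, not values taken from the patent.

```python
from datetime import datetime, timedelta

# Assumed window within which a second trigger gets the "continuation"
# message rather than a fresh forecast (illustrative value only).
CONTINUATION_WINDOW = timedelta(minutes=30)

def choose_weather_request(last_output, now):
    """Decide what to request on a new door-open event.

    last_output: None, or a dict with 'status' ('done' / 'declined')
    and 'time' (when the previous request was handled).
    """
    if last_output is None:
        return "today_weather"            # first trigger of the day: full forecast
    if last_output["status"] == "declined":
        return "today_weather"            # user never heard it: resend the same content
    if now - last_output["time"] <= CONTINUATION_WINDOW:
        return "today_weather_continued"  # heard recently: send the follow-up detail
    return "today_weather"
```

This mirrors the two figures: 7(a) yields the continuation at 7:10 a.m. after "output completed", while 7(b) repeats the original forecast after "not output".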
In the specific example shown in FIG. 8(a), at 7:00 a.m. the home appliance 10 transmits "door open" to the cloud server 20, and in response the cloud server 20 transmits an output request for voice output of the weather forecast. Upon receiving this output request, the home appliance 10 outputs the voice message "Today's weather is cloudy, then rain" and transmits output information (output completed) to the cloud server 20. Thereafter, when "door open" is transmitted again from the home appliance 10 at 11:00 a.m., the cloud server 20 transmits an output request in response; because it has already received "output completed" and the current time is 11:00 a.m., it transmits an output request for voice output of the afternoon weather. Upon receiving this, the home appliance 10 outputs the voice message "It is likely to rain in the afternoon" and transmits output information (output completed) to the cloud server 20.
In contrast, in the specific example shown in FIG. 8(b), at 7:00 a.m. the home appliance 10 transmits "door open" to the cloud server 20, and in response the cloud server 20 transmits an output request for voice output of the weather forecast. Although the home appliance 10 receives this output request, it does not comply with it, and therefore transmits output information (not output) to the cloud server 20, together with the reason. Thereafter, when "door open" is transmitted again at 11:00 a.m., the cloud server 20 transmits an output request in response; because it has already received "not output" and the current time is 11:00 a.m., it transmits an output request for voice output of the afternoon weather. Upon receiving this, the home appliance 10 outputs the voice message "It is likely to rain in the afternoon" and transmits output information (output completed) to the cloud server 20.
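What distinguishes FIG. 8 from FIG. 7 is that the choice of content depends on the time of day rather than on the previous result: at 11:00 a.m. the afternoon weather is requested whether or not the morning forecast was heard. A minimal sketch, with an assumed cutoff hour chosen for illustration:

```python
# Illustrative sketch of the time-dependent selection in FIG. 8.
# The cutoff hour (11) matches the example times in the text but is
# otherwise an assumption about how the server might be configured.

def select_forecast(hour, already_output):
    """Pick the forecast variant for a door-open event at the given hour."""
    if hour >= 11:
        return "afternoon_weather"        # late in the morning only the afternoon part is still useful
    if already_output:
        return "today_weather_continued"  # FIG. 7(a)-style follow-up
    return "today_weather"
```

Note that at 11:00 a.m. the result is the same for both the "output completed" and "not output" histories, exactly as in FIGS. 8(a) and 8(b).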
In the specific example shown in FIG. 9(a), on a winter day, the cloud server 20 transmits to the home appliance 10 (here, an air conditioner) an output request for voice output of audio data prompting the user to confirm the operation mode. Although the home appliance 10 receives this output request, it does not comply with it, and therefore transmits output information (not output) to the cloud server 20, together with the reason (here, that output of audio data built into the home appliance 10, i.e., the second audio data, was prioritized). The cloud server 20 receives this, and if the home appliance is still in cooling operation five minutes after the output request (the cloud server 20 acquires state information indicating the operation state of the home appliance 10 at predetermined time intervals), it again transmits an output request for voice output of the audio data prompting the user to confirm the operation mode. Upon receiving this, the home appliance 10 outputs the voice message "Have you set it to cooling mode by mistake?" and transmits output information (output completed) to the cloud server 20.
In contrast, in the specific example shown in FIG. 9(b), on a winter day, the cloud server 20 transmits to the home appliance 10 an output request for voice output of audio data prompting the user to confirm the operation mode. Although the home appliance 10 receives this output request, it does not comply with it, and therefore transmits output information (not output) to the cloud server 20, together with the reason (here, that no person was detected in the room). When the cloud server 20 receives this information, it does not retransmit the output request even if the home appliance is still in cooling operation five minutes later, because it has already received, as the reason that no voice output was performed, the information that no person is in the room.
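The contrast between FIGS. 9(a) and 9(b) is a reason-aware retry policy: retry only when the earlier failure could plausibly be cured by retrying. The following sketch uses assumed reason strings; the patent itself does not define a wire format for the "not output" reasons.

```python
# Reasons for which a retry is pointless (illustrative assumption):
# if nobody is in the room, repeating the announcement helps no one.
NO_RETRY_REASONS = {"no_person_detected"}

def should_retry(declined_reason, still_wrong_mode):
    """Decide whether to resend the operation-mode confirmation.

    declined_reason: reason string sent with the 'not output' report.
    still_wrong_mode: True if the periodic state information still shows
    cooling operation five minutes after the first request.
    """
    if not still_wrong_mode:
        return False  # the user already fixed the mode; nothing to announce
    return declined_reason not in NO_RETRY_REASONS
```

With these assumptions, "local audio was prioritized" leads to a retry (FIG. 9(a)) while "no person detected" suppresses it (FIG. 9(b)).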
As described above, according to the present embodiment, the output voice can be given continuity, the possibility that the user hears the voice can be increased by re-outputting (retrying), and unnecessary output can be suppressed.
[Embodiment 3]
In the present embodiment, in the configuration of the audio system 100 of Embodiment 1, the condition setting unit 21b sets an expiration date for the audio data as an output condition, and the audio output unit 14 does not output audio corresponding to audio data whose expiration date has passed. Since the configuration is otherwise the same as that of Embodiment 1, the same components are given the same reference signs and their description is omitted.
The capacity of the storage unit 7 of the home appliance adapter 5 is limited, and it may not be able to hold very much audio data. For example, the storage unit 7 can hold 64 pieces of audio data of about 7 seconds each. In addition, some audio data becomes meaningless or incorrect if output at the wrong time; keeping such audio data stored in the storage unit 7 is wasteful and narrows the variation of audio data that can be output.
Therefore, in the present embodiment, the condition setting unit 21b sets an expiration date for the audio data as an output condition. As shown in FIG. 11, an expiration date is set for each piece of audio data and transmitted to the home appliance adapter 5. In the home appliance adapter 5, audio data with a set expiration date is output from the audio output unit 14 under the control of the control unit 6. At this time, the control unit 6 checks the expiration dates, outputs only audio data that is within its expiration date, and outputs it in order from the earliest expiration date.
With the above configuration, an expiration date is set for the audio data and the audio data is output according to it, so audio that would be meaningless or incorrect if output at the wrong time is no longer output. In addition, only audio data within its expiration date can be output, in a timely manner.
Audio data whose expiration date has passed is deleted from the storage unit 7. The deletion is performed as follows. The cloud server 20 deletes expired audio data, based on time information, from the voice list corresponding to the target home appliance 10. It then communicates with the home appliance adapter 5 and deletes audio data in the storage unit 7 of the home appliance adapter 5 so that it is synchronized with the voice list. This deletion suppresses wasteful storage and wasteful audio output, and makes it possible to increase the amount of valid audio data. Although the deletion here is based on time information, it may instead be performed, for example, during periodic communication between the home appliance adapter 5 and the cloud server 20, or when the cloud server 20 issues a voice output request and new audio data is registered in the home appliance adapter 5.
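The expiration handling of Embodiment 3 amounts to two list operations: select unexpired clips (played earliest-expiring first) and prune expired entries when synchronizing with the server's voice list. A minimal sketch, in which the function names and the `expires` field are assumptions for illustration:

```python
from datetime import datetime

def playable(voice_list, now):
    """Return unexpired clips, earliest expiration first (Embodiment 3's play order)."""
    valid = [v for v in voice_list if v["expires"] > now]
    return sorted(valid, key=lambda v: v["expires"])

def prune(voice_list, now):
    """Drop expired clips, mirroring the server-side deletion from the voice list."""
    return [v for v in voice_list if v["expires"] > now]
```

In the patent's flow, `prune` would run on the cloud server's voice list first, and the adapter's storage unit 7 would then be brought in sync with the pruned list.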
[Embodiment 4]
The audio system 100, home appliance 10, home appliance adapter 5, and cloud server 20 (in particular, the control units 6, 11, and 21) described in Embodiments 1 to 3 may each be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
In the latter case, the audio system 100, home appliance 10, home appliance adapter 5, and cloud server 20 each include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU); a RAM (Random Access Memory) into which the program is loaded; and the like. The object of the present invention is achieved when a computer (or CPU) reads the program from the recording medium and executes it. As the recording medium, a "non-transitory tangible medium" such as a tape, disk, card, semiconductor memory, or programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium (such as a communication network or broadcast wave) capable of transmitting the program. The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
The present invention is not limited to the embodiments described above, and various modifications are possible; embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
[Summary]
An audio system (100) according to Aspect 1 of the present invention includes: an audio server including an audio creation unit (21a) that creates audio data, a condition setting unit (21b) that sets, for each piece of audio data, an output condition for voice output, and a transmission unit (control unit 21) that transmits condition information indicating the set output condition to an electronic device (home appliance 10); and the electronic device, including a reception unit (control unit 6) that receives the condition information and an output control unit (control unit 6, control unit 11) that performs voice output corresponding to the audio data in accordance with the received condition information.
According to the above configuration, the audio server can set an output condition for voice output for each piece of audio data, and the electronic device performs voice output corresponding to the audio data in accordance with this output condition. Therefore, voice output with improved user convenience can be performed from the electronic device, and the voice output from the electronic device can be used more effectively in various situations.
In an audio system according to Aspect 2 of the present invention, in the audio system according to Aspect 1, the audio creation unit creates first audio data; the condition setting unit sets an output condition for the first audio data; the transmission unit transmits, to the electronic device, first condition information indicating the output condition set for the first audio data by the condition setting unit, together with the first audio data; and the output control unit specifies audio data based on the first condition information received from the audio server and second condition information indicating a prescribed output condition of second audio data other than the first audio data, and performs voice output corresponding to the specified audio data.
According to the above configuration, when there are first audio data received from the audio server and other second audio data (for example, data held in the electronic device since shipment), the audio data to be output is specified based on the first condition information and the second condition information in which their output conditions are set. Therefore, the range of audio data that can be selected for output is widened, and audio data can be output appropriately according to the set conditions.
In an audio system according to Aspect 3 of the present invention, in the audio system according to Aspect 1, the condition setting unit sets a priority of the audio data as the output condition, and when there are multiple pieces of audio data for which priorities have been set, the output control unit specifies the audio data to be output according to the priorities.
According to the above configuration, when there are multiple pieces of audio data to be output, voice output can be performed according to the priorities.
In an audio system according to Aspect 4 of the present invention, in the audio system according to Aspect 3, the priority set by the condition setting unit as the output condition changes in value according to the time of day, and when there are multiple pieces of audio data for which priorities have been set, the output control unit specifies the audio data to be output according to the priorities.
According to the above configuration, when there are multiple pieces of audio data to be output, voice output can be performed according to priorities whose values change with the time of day.
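The priority-based selection of Aspects 3 and 4 can be sketched by treating each clip's priority as a function of the current hour; a constant function gives the fixed priority of Aspect 3, and an hour-dependent one gives the time-varying priority of Aspect 4. The function name and the idea of encoding priorities as callables are illustrative assumptions:

```python
def pick_by_priority(candidates, hour):
    """Pick the clip to play from (audio_id, priority_fn) pairs; highest priority wins."""
    return max(candidates, key=lambda c: c[1](hour))[0]
```

For example, a morning-news clip could outrank a fixed-priority alert only between 6 and 9 a.m.:

```python
fixed = ("door_alarm", lambda h: 5)                        # Aspect 3: constant priority
morning_news = ("news", lambda h: 10 if 6 <= h < 9 else 1) # Aspect 4: time-dependent priority
```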
In an audio system according to Aspect 5 of the present invention, in the audio system according to Aspect 1 or 2, the electronic device includes a notification unit (LED lamp 18) that notifies the user of the presence of audio data to be output, and the output control unit performs voice output corresponding to the audio data to be output when it receives an output instruction from the user.
According to the above configuration, the user can be notified of the presence of audio data, and the user can issue an output instruction only when the user has checked the notification and wants the audio to be output. This further improves user convenience.
In an audio system according to Aspect 6 of the present invention, in the audio system according to Aspect 1, the condition setting unit sets, as the output condition, a predetermined event that triggers voice output corresponding to the audio data, and the output control unit performs voice output corresponding to the audio data when the predetermined event occurs.
According to the above configuration, audio can be re-output when a predetermined event occurs. Examples of the occurrence of the predetermined event include a button of the electronic device being pressed, and a predetermined time elapsing after it is detected that no user is near the electronic device at the time of outputting the audio data. The system may also be configured so that the audio is output multiple times when the button is pressed.
In an audio system according to Aspect 7 of the present invention, in the audio system according to Aspect 1, the electronic device includes an output information transmission unit (control unit 6) that transmits output information on the output state of the audio data output this time; the audio creation unit creates, based on the output information received from the electronic device, the audio data that the output control unit will output next time; the condition setting unit sets the output condition of the created audio data to be output next time; and the transmission unit transmits, to the electronic device, condition information indicating the set output condition of the audio data to be output next time, together with that audio data.
According to the above configuration, the audio data to be output next time is created based on the output information of the audio data output this time (for example, whether voice output was performed, or whether voice output was performed while a user was present (detected)). The created audio data to be output next time, and the condition information indicating its output condition, are received by the electronic device. Since the audio data output next time is based on the output information of the audio data output this time, voice output with further improved user convenience can be performed. For example, it is possible to output audio data whose content continues from the previously output audio data, or to re-output (retry) audio data that was not output, as necessary.
In an audio system according to Aspect 8 of the present invention, in the audio system according to Aspect 1, the condition setting unit sets an expiration date of the audio data as the output condition, and the output control unit does not perform voice output corresponding to audio data whose expiration date has passed.
According to the above configuration, an expiration date is set for the audio data and the audio data is output according to it, so only valid audio data can be output, in a timely manner. Unnecessary output can also be suppressed.
In an audio system according to Aspect 9 of the present invention, in the audio system according to Aspect 2, the electronic device includes an audio switching unit (16) that switches the audio data to be output to either one of, or both of, the first audio data and the second audio data.
According to the above configuration, the electronic device can output whichever of the first audio data received from the audio server and the second audio data held in advance the user wants to hear, or both.
In an audio system according to Aspect 10 of the present invention, in the audio system according to Aspect 2, the electronic device includes a sound setting unit (15) that sets the volume or sound quality of the audio data to be output individually for the first audio data and the second audio data.
According to the above configuration, in the electronic device, the volume or sound quality of the first audio data received from the audio server and of the second audio data held in advance can be set individually from the sound setting unit, so they can be set to be easy for the user to hear. If the first audio data and the second audio data are set to the same voice and volume, the user can listen to both without distinction; if they are set differently, the user can easily distinguish between the two.
An electronic device according to Aspect 11 of the present invention is an electronic device provided in the audio system according to any one of Aspects 1 to 10.
An audio server according to Aspect 12 of the present invention is an audio server provided in the audio system according to any one of Aspects 1 to 10.
By using the above electronic device and audio server, the audio system according to any one of Aspects 1 to 10 can be configured.
The audio system, electronic device, or audio server according to each aspect of the present invention may be realized by a computer. In this case, a program that realizes the audio system, electronic device, or audio server by a computer, by causing the computer to operate as each unit provided in the audio system, electronic device, or audio server, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
The present invention can be used in an audio system or the like including an electronic device that connects to a communication network and outputs audio received from a server.
5, 5-1, 5-2, 5-3, 5-4 Home appliance adapter
6 Control unit (reception unit, output control unit, output information transmission unit)
7 Storage unit
8 Communication unit
9 Connection unit
10, 10-1, 10-2, 10-3, 10-4 Home appliance (electronic device)
11 Control unit (output control unit)
13 Storage unit
14 Audio output unit
15 Sound setting unit
16 Audio switching unit
17 State detection unit
18 LED lamp (notification unit)
20 Cloud server (audio server)
21 Control unit (transmission unit)
21a Audio creation unit
21b Condition setting unit
22 Storage unit
22a Database (DB)
23 Communication unit
23a Error detection unit
30, 30-1, 30-2, 30-3 Communication terminal device
40 Relay station
50 User home
62 Wide-area communication network
100 Audio system
Claims (5)
- An audio system comprising: an audio server including an audio creation unit that creates audio data, a condition setting unit that sets, for each piece of audio data, an output condition for audio output, and a transmission unit that transmits condition information indicating the set output condition to an electronic device; and an electronic device including a receiving unit that receives the condition information, and an output control unit that performs audio output of audio data in accordance with the received condition information.
- The audio system according to claim 1, wherein the audio creation unit creates first audio data, the condition setting unit sets an output condition for the first audio data, the transmission unit transmits to the electronic device the first audio data together with first condition information indicating the output condition set for the first audio data by the condition setting unit, and the output control unit specifies audio data based on the first condition information received from the audio server and on second condition information indicating a prescribed output condition of second audio data other than the first audio data, and performs audio output in accordance with the specified audio data.
- The audio system according to claim 1, wherein the condition setting unit sets a priority of audio data as the output condition, and, when there are a plurality of pieces of audio data for which the priority is set, the output control unit specifies the audio data to be output according to the priority.
- The audio system according to claim 3, wherein the value of the priority set by the condition setting unit as the output condition varies according to the time, and, when there are a plurality of pieces of audio data for which the priority is set, the output control unit specifies the audio data to be output according to the priority.
- The audio system according to claim 1, wherein the electronic device includes a notification unit that notifies a user of the presence of audio data to be output, and the output control unit performs audio output in accordance with the audio data to be output when an output instruction is received from the user.
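The selection behavior described in claims 3 and 4 — the device-side output control unit picks which piece of audio data to output according to a priority whose value can vary with the time of day — can be sketched as follows. This is a minimal illustrative sketch; all names (`AudioItem`, `select_audio`, the example messages) are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class AudioItem:
    """One piece of audio data plus its output condition (a priority)."""
    text: str
    # Priority as a mapping hour -> value, so the priority can change
    # with time of day (claim 4). Lower value = higher priority here.
    priority_by_hour: dict

    def priority(self, hour: int) -> int:
        return self.priority_by_hour.get(hour, 10)

def select_audio(items, hour):
    """Pick the single item to output according to priority (claims 3-4)."""
    if not items:
        return None
    return min(items, key=lambda it: it.priority(hour))

items = [
    AudioItem("Filter cleaning reminder", {h: 5 for h in range(24)}),
    # Weather notice outranks everything in the morning, little otherwise
    AudioItem("Morning weather notice", {**{h: 9 for h in range(24)}, 7: 1, 8: 1}),
]

print(select_audio(items, hour=7).text)   # -> Morning weather notice
print(select_audio(items, hour=20).text)  # -> Filter cleaning reminder
```

The key design point of the claims is that the server only ships the condition information; the device resolves conflicts locally, which is what `select_audio` models.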
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580001735.9A CN105493451A (en) | 2014-02-28 | 2015-01-28 | Audio system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-039525 | 2014-02-28 | ||
JP2014039525A JP2015163920A (en) | 2014-02-28 | 2014-02-28 | Voice system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015129372A1 true WO2015129372A1 (en) | 2015-09-03 |
Family
ID=54008698
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/052279 WO2015129372A1 (en) | 2014-02-28 | 2015-01-28 | Audio system |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP2015163920A (en) |
CN (1) | CN105493451A (en) |
WO (1) | WO2015129372A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6701038B2 (en) * | 2016-09-08 | 2020-05-27 | アズビル株式会社 | Monitoring device |
JP7026449B2 (en) | 2017-04-21 | 2022-02-28 | ソニーグループ株式会社 | Information processing device, receiving device, and information processing method |
JP7011189B2 (en) * | 2017-07-14 | 2022-02-10 | ダイキン工業株式会社 | Equipment control system |
JP6571144B2 (en) * | 2017-09-08 | 2019-09-04 | シャープ株式会社 | Monitoring system, monitoring device, server, and monitoring method |
JP2019066378A (en) * | 2017-10-03 | 2019-04-25 | 東芝ライフスタイル株式会社 | Operating sound comparison device |
JP7057204B2 (en) * | 2018-04-27 | 2022-04-19 | シャープ株式会社 | Network system, server and information processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11161298A (en) * | 1997-11-28 | 1999-06-18 | Toshiba Corp | Method and device for voice synthesizer |
JP2004287193A (en) * | 2003-03-24 | 2004-10-14 | Equos Research Co Ltd | Device and program for data creation and onboard device |
JP2008026621A (en) * | 2006-07-21 | 2008-02-07 | Fujitsu Ltd | Information processor with speech interaction function |
JP2010181704A (en) * | 2009-02-06 | 2010-08-19 | Dainippon Printing Co Ltd | Voice information creation device, voice information playback device, server device and voice information playback system or the like |
2014
- 2014-02-28 JP JP2014039525A patent/JP2015163920A/en not_active Withdrawn
2015
- 2015-01-28 CN CN201580001735.9A patent/CN105493451A/en active Pending
- 2015-01-28 WO PCT/JP2015/052279 patent/WO2015129372A1/en active Application Filing
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017073227A1 (en) * | 2015-10-29 | 2017-05-04 | シャープ株式会社 | Electronic device and method for controlling same |
JP2017084177A (en) * | 2015-10-29 | 2017-05-18 | シャープ株式会社 | Electronic apparatus and control method thereof |
JP2018021709A (en) * | 2016-08-04 | 2018-02-08 | シャープ株式会社 | Air Conditioning System |
CN108092925A (en) * | 2017-12-05 | 2018-05-29 | 佛山市顺德区美的洗涤电器制造有限公司 | Voice update method and device |
CN111554266A (en) * | 2019-02-08 | 2020-08-18 | 夏普株式会社 | Voice output device and electrical equipment |
Also Published As
Publication number | Publication date |
---|---|
JP2015163920A (en) | 2015-09-10 |
CN105493451A (en) | 2016-04-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015129372A1 (en) | Audio system | |
US11782590B2 (en) | Scene-operation method, electronic device, and non-transitory computer readable medium | |
US10334304B2 (en) | Set top box automation | |
EP3330939B1 (en) | Media rendering system | |
WO2020223854A1 (en) | Device network configuration method and apparatus, electronic device and storage medium | |
US20080027566A1 (en) | Home Network System | |
US20140128994A1 (en) | Logical sensor server for logical sensor platforms | |
US20070133569A1 (en) | Home network system and its configuration system | |
CN109240100A (en) | Intelligent home furnishing control method, equipment, system and storage medium | |
JP2016526221A (en) | Signaling device for teaching learning devices | |
JP2016518669A (en) | Intelligent remote control method, router, terminal, device, program, and recording medium | |
US20070169074A1 (en) | Upgrade apparatus and its method for home network system | |
JP6349386B2 (en) | Network system, server, communication device, and information processing method | |
JP2016063415A (en) | Network system, audio output method, server, device and audio output program | |
US10797982B2 (en) | Main electronic device for communicating within a network and method for operating a main electronic device for communicating within the network | |
JP2005157603A (en) | State information providing device and method, computer program for the same, recording medium with the program stored, and computer programmed by the program | |
CN113452589B (en) | Wide area network intelligent home remote control system and working method thereof | |
CN113825004B (en) | Multi-screen sharing method and device for display content, storage medium and electronic device | |
JP5714067B2 (en) | COMMUNICATION SYSTEM, SERVER DEVICE, CONTROL METHOD, AND PROGRAM | |
JP6607668B2 (en) | Network system, audio output method, server, device, and audio output program | |
JP6382026B2 (en) | Message transmission server, external device, message transmission system, message transmission server control method, control program, and recording medium | |
CN113703334B (en) | Intelligent scene updating method and device | |
CN113965460B (en) | Method and device for preventing error reset, intelligent device and storage medium | |
JP6113697B2 (en) | Write data transmitting device, audio output device, write data sharing system, control method, and control program | |
JP6430187B2 (en) | Electronic equipment and audio output system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 201580001735.9; Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15755201; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 15755201; Country of ref document: EP; Kind code of ref document: A1 |