CN109559742B - Voice control method, system, storage medium and computer equipment - Google Patents
Voice control method, system, storage medium and computer equipment
- Publication number
- CN109559742B (application CN201811321335.8A)
- Authority
- CN
- China
- Prior art keywords
- control
- user
- voice
- voice command
- control strategy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Telephonic Communication Services (AREA)
Abstract
The invention discloses a voice control method, system, storage medium, and computer device. The method comprises the following steps: when the same voice command issued by at least two users at the same time is received, recognizing the voice command of each user by voiceprint recognition, so as to identify the user ID corresponding to each voice command; searching a pre-stored database for the control strategy corresponding to each user ID and the voice command; processing the control strategies corresponding to all the user IDs in a preset manner to obtain a target control strategy; and executing the corresponding control behaviors according to the target control strategy. The invention solves the problem that, when multiple users issue the same control voice at the same time, the control voice of each user cannot be handled well.
Description
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a voice control method, a voice control system, a storage medium, and a computer device.
Background
With the rapid development of Internet of Things technology, more and more IoT smart home devices, such as smart cameras, smart air-conditioner controllers, and robot vacuum cleaners, have entered people's daily lives and brought convenience to them.
At present, controlling the functions of smart home devices through speech recognition technology is increasingly common: a user can issue a control voice such as "turn on the air conditioner" and the air conditioner is switched on automatically. However, when multiple users issue the same control voice at the same time, prior-art voice control methods cannot handle the control voice of each user well, which limits the applicable range of voice control and degrades the user experience.
Disclosure of Invention
Therefore, an embodiment of the present invention provides a voice control method to solve the problem that when multiple users send out the same control voice at the same time, the control voice of each user cannot be processed well.
According to an embodiment of the invention, the voice control method comprises the following steps:
when the same voice command sent by at least two users at the same time is received, the voice command of each user is respectively identified by adopting a voiceprint identification mode so as to identify the user ID corresponding to each voice command;
respectively searching a control strategy corresponding to each user ID and the voice instruction in a pre-stored database;
processing the control strategies corresponding to all the user IDs according to a preset mode to obtain a target control strategy;
and executing corresponding control behaviors according to the target control strategy.
According to the voice control method provided by the invention, when the same voice command issued by at least two users at the same time is received, the voice command of each user is first recognized by voiceprint recognition, so that the user ID corresponding to each voice command can be identified. The control strategy corresponding to each user ID and the voice command is then searched for in a pre-stored database, where the control strategies corresponding to the voice commands issued by different users may differ. Finally, the control strategies corresponding to all the user IDs are processed in a preset manner; that is, the control strategies corresponding to the voice commands issued by different users are processed together to obtain a final target control strategy that integrates the control strategy of each user. Therefore, when multiple users issue the same control voice at the same time, the control voice of each user can be handled well, a combined control effect of multiple voice commands is achieved, the applicable range of voice control is widened, and the user experience is improved.
In addition, the voice control method according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the step of processing the control policies corresponding to all the user IDs according to a preset manner to obtain the target control policy includes:
analyzing the control strategy corresponding to each user ID respectively to obtain all control instructions of the control strategy corresponding to each user ID;
judging whether control instructions with opposite control behaviors exist in all control strategies;
if not, combining the control instructions of all the control strategies to obtain the target control strategy.
Further, in an embodiment of the present invention, the step of processing the control policies corresponding to all the user IDs according to a preset manner to obtain the target control policy includes:
analyzing the control strategy corresponding to each user ID respectively to obtain all control instructions of the control strategy corresponding to each user ID;
judging whether control instructions with opposite control behaviors exist in all control strategies;
if yes, searching a pre-stored database for the level information corresponding to each user ID;
and merging the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
Further, in one embodiment of the present invention, the method further comprises:
when voice information sent by at least two users at the same time is received, converting each piece of voice information into text information;
and performing semantic analysis on each piece of text information, and determining whether any two pieces of voice information are the same voice command according to the semantic analysis result corresponding to each piece of voice information.
Further, in an embodiment of the present invention, the step of respectively recognizing the voice command of each user by using a voiceprint recognition method to recognize the user ID corresponding to each voice command includes:
respectively identifying the voice command of each user by adopting a voiceprint identification mode to obtain a unique voiceprint ID of each user;
and searching a user ID corresponding to each unique voiceprint in a pre-stored database according to the unique voiceprint ID of each user.
Further, in one embodiment of the present invention, the method further comprises:
and when there is a voice command whose user ID cannot be identified, recognizing the voice command by voiceprint recognition, creating a user ID for it, and registering the user ID.
Another embodiment of the present invention provides a voice control system to solve the problem that when multiple users send out the same control voice at the same time, the control voice of each user cannot be processed well.
According to the voice control system of the embodiment of the invention, the system comprises:
the recognition module is used for recognizing the voice command of each user in a voiceprint recognition mode respectively when receiving the same voice command sent by at least two users at the same time so as to recognize the user ID corresponding to each voice command;
the searching module is used for respectively searching the control strategies corresponding to each user ID and the voice instruction in a pre-stored database;
the processing module is used for processing the control strategies corresponding to all the user IDs according to a preset mode so as to obtain a target control strategy;
and the execution module is used for executing corresponding control behaviors according to the target control strategy.
In addition, the voice control system according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the control strategy is composed of a plurality of control instructions, and the processing module includes:
the analysis unit is used for respectively analyzing the control strategy corresponding to each user ID so as to obtain all control instructions of the control strategy corresponding to each user ID;
the judging unit is used for judging whether control instructions with opposite control behaviors exist in all the control strategies;
and the first merging unit is used for merging the control instructions of all the control strategies to obtain the target control strategy when the judging unit judges that the control instructions with opposite control behaviors do not exist in all the control strategies.
Further, in an embodiment of the present invention, the processing module further includes:
the first searching unit is used for searching a pre-stored database for the level information corresponding to each user ID when the judging unit judges that control instructions with opposite control behaviors exist in the control strategies;
and the second merging unit is used for merging the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
Further, in one embodiment of the present invention, the system further comprises:
the conversion module is used for converting each piece of voice information into text information when voice information sent by at least two users at the same time is received;
and the analysis determining module is used for performing semantic analysis on each piece of text information and determining whether any two pieces of voice information are the same voice command according to the semantic analysis result corresponding to each piece of voice information.
Further, in one embodiment of the present invention, the identification module includes:
the identification unit is used for respectively identifying the voice command of each user by adopting a voiceprint identification mode so as to obtain the unique voiceprint ID of each user;
and the second searching unit is used for searching the user ID corresponding to each unique voiceprint in a pre-stored database according to the unique voiceprint ID of each user.
Further, in one embodiment of the present invention, the system further comprises:
and the registration module is used for recognizing the voice instruction by voiceprint recognition when there is a voice instruction whose user ID cannot be identified, creating a user ID for it, and registering the user ID.
Another embodiment of the invention also proposes a storage medium on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
Another embodiment of the present invention also proposes a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a voice control method according to an embodiment of the present invention;
FIG. 2 is a flow chart of determining whether two voice messages are the same voice command;
FIG. 3 is a flow chart of identifying a user ID;
FIG. 4 is a flow chart of the processing of control policies corresponding to all user IDs;
fig. 5 is a schematic structural diagram of a voice control system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a voice control method according to an embodiment of the present invention at least includes steps S101 to S104:
S101, when the same voice command sent by at least two users at the same time is received, the voice command of each user is respectively identified by adopting a voiceprint identification mode so as to identify the user ID corresponding to each voice command;
the method can be applied to the environment of the Internet of things, the environment of the Internet of things comprises intelligent household equipment such as an intelligent air conditioner, an intelligent lamp and an intelligent television, and the working state of each intelligent household equipment can be controlled through voice. Specifically, a general intelligent control device can be further arranged, and the general intelligent control device is responsible for receiving voice instructions sent by users and controlling the working state of each intelligent household device according to the voice instructions.
For convenience of description, the case in which two users issue the same voice command at the same time is taken as an example: in a certain family, a father and a son issue the same voice command at the same time after arriving home. It should be noted that "at the same time" in step S101 may mean that the voice command issued by the father and the voice command issued by the son are received at exactly the same moment, or that the two commands are received within a preset time range (for example, within 2 s).
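As an illustration of this time-window reading of "at the same time", the following minimal sketch (not part of the patent text; the `ReceivedCommand` record and the 2 s window are assumptions taken from the example above) groups incoming commands whose arrival times fall within the preset range:

```python
from dataclasses import dataclass

SIMULTANEITY_WINDOW_S = 2.0  # the preset time range from the example above ("within 2 s")

@dataclass
class ReceivedCommand:
    """Hypothetical record for one captured utterance (not defined in the patent)."""
    audio: bytes       # raw audio, later used for voiceprint recognition (S101)
    text: str          # ASR transcription, later used for semantic analysis (S1011-S1012)
    timestamp: float   # arrival time in seconds

def group_simultaneous(commands: list[ReceivedCommand]) -> list[list[ReceivedCommand]]:
    """Group commands whose arrival times fall within the preset window."""
    groups: list[list[ReceivedCommand]] = []
    for cmd in sorted(commands, key=lambda c: c.timestamp):
        if groups and cmd.timestamp - groups[-1][0].timestamp <= SIMULTANEITY_WINDOW_S:
            groups[-1].append(cmd)   # treated as issued "at the same time"
        else:
            groups.append([cmd])     # start a new simultaneity group
    return groups
```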
The same voice command may be literally identical voice commands; for example, the voice commands issued by the father and the son are both "I am home". It may also be voice commands that are substantially the same after semantic analysis. Specifically, referring to FIG. 2, sub-steps S1011 to S1012 can be used to determine whether two pieces of voice information are the same voice command:
S1011, when voice information sent by at least two users at the same time is received, converting each piece of voice information into text information;
S1012, performing semantic analysis on each piece of text information, and determining whether any two pieces of voice information are the same voice command according to the semantic analysis result corresponding to each piece of voice information.
For example, the father's voice command is "I have arrived home" and the son's voice command is "I'm home". Although the two voice commands are not literally identical, after the voice information is converted into text information and the text is semantically analyzed, it can be determined that the two commands are substantially the same and belong to the same voice command. In a specific implementation, voice commands with the same semantics can be learned and stored in advance, and the user can also define which voice commands have essentially the same content.
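By way of illustration only, the equivalence check of S1011 to S1012 could be reduced to normalizing each transcription and looking it up in pre-stored groups of equivalent phrasings. The table contents and the `normalize` rules below are assumptions; a production system would rely on a trained semantic analysis model rather than this lookup:

```python
# Pre-stored groups of phrasings learned or user-defined as semantically equivalent (assumed contents).
EQUIVALENT_COMMANDS: dict[str, set[str]] = {
    "arrived_home": {"i am home", "i have arrived home", "i'm home"},
}

def normalize(text: str) -> str:
    """Rough text normalization standing in for real semantic analysis."""
    return " ".join(text.lower().replace(",", " ").replace(".", " ").split())

def semantic_label(text: str) -> str | None:
    """Map a transcription to the label of its equivalence group, if any."""
    canon = normalize(text)
    for label, phrases in EQUIVALENT_COMMANDS.items():
        if canon in phrases:
            return label
    return None

def same_voice_command(text_a: str, text_b: str) -> bool:
    """Two pieces of voice information count as the same voice command if their labels match."""
    label_a, label_b = semantic_label(text_a), semantic_label(text_b)
    return label_a is not None and label_a == label_b
```

With these assumed tables, `same_voice_command("I have arrived home", "I'm home")` returns True, matching the father-and-son example above.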
When the same voice command issued by the father and the son at the same time is received, the voice command of each user is recognized by voiceprint recognition so as to identify the user ID corresponding to each voice command. Referring to FIG. 3, the step of identifying the user ID may specifically include S1013 to S1014:
S1013, respectively recognizing the voice command of each user by voiceprint recognition to obtain the unique voiceprint ID of each user;
S1014, searching a pre-stored database for the user ID corresponding to each unique voiceprint ID.
Each user's voice has a unique voiceprint ID. Before voiceprint recognition, the user's voiceprint can be modeled in advance, i.e., trained or learned, so as to determine the unique voiceprint ID of that user's voice; a unique user ID is then created, a mapping relationship between the voiceprint ID and the user ID is established, and the mapping relationship is stored in the database. Therefore, after the father's voice command is recognized by voiceprint recognition, the father's unique voiceprint ID is obtained and the father's user ID can be found in the database. Specifically, the user ID may include information such as name, gender, and age.
As a specific example, when a voice command whose user ID cannot be identified is received, the voice command is recognized by voiceprint recognition, the voiceprint ID of the unrecognized user is stored, and a user ID is then created and registered.
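A possible shape for S1013 to S1014 together with this registration fallback is sketched below; the `voiceprint_id_of` function stands in for the trained voiceprint model (not specified in the patent), and the dictionaries stand in for the pre-stored database:

```python
import uuid

# In-memory stand-ins for the pre-stored database tables.
VOICEPRINT_TO_USER: dict[str, str] = {}   # voiceprint ID -> user ID
USER_PROFILES: dict[str, dict] = {}       # user ID -> profile (name, gender, age, ...)

def voiceprint_id_of(audio: bytes) -> str:
    """Placeholder for the trained voiceprint model mapping a voice sample to its unique voiceprint ID."""
    raise NotImplementedError("assumed to be provided by a pre-trained voiceprint model")

def identify_or_register_user(audio: bytes) -> str:
    """S1013-S1014 plus the registration fallback: return the user ID of the speaker."""
    vp_id = voiceprint_id_of(audio)
    user_id = VOICEPRINT_TO_USER.get(vp_id)
    if user_id is None:
        # The user ID of this voice command cannot be identified: store the voiceprint ID,
        # then create and register a new user ID.
        user_id = str(uuid.uuid4())
        VOICEPRINT_TO_USER[vp_id] = user_id
        USER_PROFILES[user_id] = {"name": None, "gender": None, "age": None}
    return user_id
```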
S102, respectively searching a control strategy corresponding to each user ID and each voice instruction in a pre-stored database;
the control strategy of different users for the voice command can be stored in the database in advance, and the control strategy can be composed of a plurality of control commands. For example, for the father, when the father gives a voice command of "i am home", the corresponding control strategy may be composed of two control commands of "turning on a hall light" and "turning on a television". The mapping relation between the control strategy and the user ID of the father is stored in a database in advance.
For the son, when the son sends a voice command of 'i am home', the corresponding control strategy can be composed of two control commands of 'turning on a bedroom desk lamp' and 'turning on a bedroom air conditioner'. Similarly, the mapping relationship between the control strategy and the user ID of the son is also stored in the database in advance.
Therefore, when the same voice command that the father and the son send at the same time that the user arrives at home is received, the control strategies consisting of the two control commands of turning on the hall lamp and turning on the television corresponding to the user ID of the father and the control strategies consisting of the two control commands of turning on the bedroom desk lamp and turning on the bedroom air conditioner corresponding to the user ID of the son can be respectively searched in the database.
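The per-user lookup of S102 can be pictured as a table keyed by (user ID, voice command); the keys, device names, and the (device, action) encoding of a control instruction below are illustrative assumptions, not part of the patent:

```python
# A control instruction is modelled here as a (device, action) pair, e.g. ("hall lamp", "on").
Instruction = tuple[str, str]

# Pre-stored mapping: (user ID, voice command label) -> control strategy,
# where a control strategy is composed of a plurality of control instructions.
CONTROL_STRATEGIES: dict[tuple[str, str], list[Instruction]] = {
    ("user-father", "arrived_home"): [("hall lamp", "on"), ("television", "on")],
    ("user-son", "arrived_home"): [("bedroom desk lamp", "on"), ("bedroom air conditioner", "on")],
}

def lookup_strategy(user_id: str, command_label: str) -> list[Instruction]:
    """S102: return the control strategy stored for this user ID and voice command (empty if none)."""
    return list(CONTROL_STRATEGIES.get((user_id, command_label), []))
```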
S103, processing the control strategies corresponding to all the user IDs according to a preset mode to obtain a target control strategy;
the control strategy corresponding to the user ID of the father and the control strategy corresponding to the user ID of the son can be processed in various ways to obtain the target control strategy.
As an optional implementation manner, referring to fig. 4, the step of processing the control policies corresponding to all the user IDs may include S1031 to S1035:
S1031, analyzing the control strategy corresponding to each user ID respectively to obtain all control instructions of the control strategy corresponding to each user ID;
S1032, judging whether control instructions with opposite control behaviors exist in all the control strategies;
if not, executing step S1033; if yes, executing steps S1034 to S1035;
S1033, merging the control instructions of all the control strategies to obtain the target control strategy;
S1034, searching a pre-stored database for the level information corresponding to each user ID;
S1035, merging the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
For example, for the father, when he issues the voice command "I am home", the corresponding control strategy is composed of the two control instructions "turn on the hall lamp" and "turn on the television".
For the son, when he issues the voice command "I am home", the corresponding control strategy is composed of the two control instructions "turn on the bedroom desk lamp" and "turn on the bedroom air conditioner".
In this situation, no control instructions with opposite control behaviors exist in the control strategies, so the control instructions of all the control strategies are merged to obtain the target control strategy; that is, the target control strategy is composed of the four control instructions "turn on the hall lamp", "turn on the television", "turn on the bedroom desk lamp", and "turn on the bedroom air conditioner". In the subsequent step S104, the corresponding smart devices are controlled to execute the corresponding functions. It should be noted that identical control instructions in the control strategies of different users may also be merged; for example, if the control strategy corresponding to the father's user ID contains the control instruction "turn on the hall lamp" and the control strategy corresponding to the son's user ID also contains "turn on the hall lamp", then the target control strategy contains a single "turn on the hall lamp" instruction.
Consider another situation: when the father issues the voice command "I am home", the corresponding control strategy is composed of the two control instructions "turn on the hall lamp" and "turn on the living room air conditioner", while when the son issues the voice command "I am home", the corresponding control strategy is composed of the two control instructions "turn on the bedroom desk lamp" and "turn off the living room air conditioner". For this situation, level information may be assigned to each user ID in advance, for example level-1 information, level-2 information, level-3 information, and so on, where different level information represents different levels: the level (or importance) of level-1 information is higher than that of level-2 information, the level of level-2 information is higher than that of level-3 information, and so on. The mapping relationship between user IDs and level information is also stored in the database in advance. If the level information corresponding to the father's user ID is level-1 and the level information corresponding to the son's user ID is level-2, the control instruction "turn on the living room air conditioner" is merged into the target control strategy, so the living room air conditioner is subsequently turned on rather than turned off, and the target control strategy is composed of the three control instructions "turn on the hall lamp", "turn on the living room air conditioner", and "turn on the bedroom desk lamp". In a specific implementation, each user can be assigned distinct level information so that a better control effect is achieved and users with the same level information never issue control instructions with opposite control behaviors. In addition, users can set and change their level information themselves to improve the interaction effect.
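One way of reading S1031 to S1035 is sketched below: control instructions are modelled as (device, action) pairs, two instructions are treated as having "opposite control behaviors" when they target the same device with different actions, and conflicts are resolved in favour of the user whose level information indicates the higher level. The encoding and the `USER_LEVEL` table are assumptions for illustration only:

```python
# A control instruction is modelled as a (device, action) pair, as in the lookup sketch above.
Instruction = tuple[str, str]

# Smaller number = higher level; the table contents are assumed for this example.
USER_LEVEL: dict[str, int] = {"user-father": 1, "user-son": 2}

def conflicting(a: Instruction, b: Instruction) -> bool:
    """Two instructions have opposite control behaviors when they target the same device
    with different actions (e.g. "on" vs "off")."""
    return a[0] == b[0] and a[1] != b[1]

def merge_strategies(strategies: dict[str, list[Instruction]]) -> list[Instruction]:
    """S1031-S1035: combine per-user control strategies into one target control strategy."""
    target: list[Instruction] = []
    # Visit users from highest level to lowest so that conflicts resolve in favour of higher level.
    for user_id in sorted(strategies, key=lambda u: USER_LEVEL.get(u, 999)):
        for instr in strategies[user_id]:
            if any(conflicting(instr, kept) for kept in target):
                continue              # dropped: a higher-level user's instruction already won
            if instr not in target:
                target.append(instr)  # identical instructions are merged only once
    return target

# Example from the description: the father's level-1 "turn on" wins over the son's "turn off".
target = merge_strategies({
    "user-father": [("hall lamp", "on"), ("living room air conditioner", "on")],
    "user-son": [("bedroom desk lamp", "on"), ("living room air conditioner", "off")],
})
# target == [("hall lamp", "on"), ("living room air conditioner", "on"), ("bedroom desk lamp", "on")]
```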
It should be understood that, in a specific implementation, the control strategies corresponding to all the user IDs may also be processed in other manners. For example, when user A (with level-1 information) and user B (with level-2 information) issue the same voice command at the same time, only the control strategy corresponding to the voice command of user A, whose level information indicates the higher level, is executed, and the control strategy corresponding to the voice command of user B is not executed.
Optionally, the user's gender, age, and so on may also be taken into account. For example, when a male user and a female user issue the same voice command at the same time, only the control strategy corresponding to the female user's voice command is executed; when an elderly user and a young user issue the same voice command at the same time, only the control strategy corresponding to the elderly user's voice command is executed.
It should be noted that, those skilled in the art may combine and/or expand the above processing modes as needed, and no limitation is made herein.
And S104, executing corresponding control behaviors according to the target control strategy.
After the target control strategy is obtained, the corresponding smart home devices can be controlled to execute the corresponding control behaviors; in particular, the control behaviors of the smart home devices can be driven by the central smart control device.
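As a final illustrative sketch, step S104 can be thought of as the central control device dispatching each instruction of the target control strategy to a driver for the corresponding smart home device; the `DEVICE_DRIVERS` registry and the print stand-ins below are hypothetical:

```python
from typing import Callable

Instruction = tuple[str, str]   # (device, action), as in the merging sketch above

# Hypothetical registry mapping a device name to a callable that drives the real smart home device.
DEVICE_DRIVERS: dict[str, Callable[[str], None]] = {
    "hall lamp": lambda action: print(f"hall lamp -> {action}"),
    "living room air conditioner": lambda action: print(f"living room air conditioner -> {action}"),
    "bedroom desk lamp": lambda action: print(f"bedroom desk lamp -> {action}"),
}

def execute_target_strategy(target: list[Instruction]) -> None:
    """S104: the central control device issues each instruction to the matching device driver."""
    for device, action in target:
        driver = DEVICE_DRIVERS.get(device)
        if driver is not None:
            driver(action)   # in a real system this would call the device's IoT control API
```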
To sum up, according to the voice control method provided in this embodiment, when the same voice command issued by at least two users at the same time is received, the voice command of each user is first recognized by voiceprint recognition so as to identify the user ID corresponding to each voice command. The control strategy corresponding to each user ID and the voice command is then searched for in a pre-stored database, where the control strategies corresponding to the voice commands issued by different users may differ. Finally, the control strategies corresponding to all the user IDs are processed in a preset manner; that is, the control strategies of the different users are processed together to obtain a final target control strategy that integrates the control strategy of each user. Therefore, when multiple users issue the same control voice at the same time, the control voice of each user can be handled well, a combined control effect of multiple voice commands is achieved, the applicable range of voice control is widened, and the user experience is improved.
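Putting the previous sketches together (they remain assumptions, not the patent's implementation), the end-to-end handling of one group of simultaneously received commands could look like the following, reusing `ReceivedCommand`, `identify_or_register_user`, `semantic_label`, `lookup_strategy`, `merge_strategies`, and `execute_target_strategy` from the blocks above:

```python
def handle_group(group: list[ReceivedCommand]) -> None:
    """End-to-end handling of one group of simultaneously received commands (S101-S104)."""
    # S101: voiceprint recognition gives a user ID, semantic analysis gives a command label.
    labelled = [(identify_or_register_user(c.audio), semantic_label(c.text)) for c in group]

    # Only proceed when every command in the group is the same voice command.
    labels = {label for _, label in labelled}
    if len(labels) != 1 or None in labels:
        return
    command_label = labels.pop()

    # S102: look up each user's control strategy for this voice command.
    strategies = {user_id: lookup_strategy(user_id, command_label) for user_id, _ in labelled}

    # S103-S104: merge into the target control strategy and execute it.
    execute_target_strategy(merge_strategies(strategies))
```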
Referring to fig. 5, based on the same inventive concept, a voice control system according to another embodiment of the present invention includes:
the recognition module 10 is configured to, when receiving a same voice instruction sent by at least two users at the same time, respectively recognize the voice instruction of each user in a voiceprint recognition manner, so as to recognize a user ID corresponding to each voice instruction;
the searching module 20 is configured to search a control strategy corresponding to each user ID and each voice instruction in a pre-stored database;
the processing module 30 is configured to process the control policies corresponding to all the user IDs in a preset manner to obtain a target control policy;
and the execution module 40 is used for executing corresponding control behaviors according to the target control strategy.
Wherein the control strategy is composed of a plurality of control instructions, and the processing module 30 includes:
an analyzing unit 31, configured to analyze the control policy corresponding to each user ID respectively to obtain all control instructions of the control policy corresponding to each user ID;
a judging unit 32, configured to judge whether there is a control instruction with an opposite control behavior in all control strategies;
a first merging unit 33, configured to merge the control instructions of all the control strategies to obtain a target control strategy when the determining unit 32 determines that there is no control instruction with an opposite control behavior in all the control strategies.
The processing module 30 further comprises:
the first searching unit 34 is configured to search a pre-stored database for the level information corresponding to each user ID when the determining unit 32 determines that control instructions with opposite control behaviors exist in the control strategies;
a second merging unit 35, configured to merge the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
Wherein the system further comprises:
a conversion module 50, configured to convert each piece of voice information into text information when voice information simultaneously sent by at least two users is received;
and an analysis determining module 60, configured to perform semantic analysis on each piece of text information, and determine whether any two pieces of voice information are the same voice instruction according to a semantic analysis result corresponding to each piece of voice information.
Wherein the identification module 10 comprises:
the recognition unit 11 is configured to respectively recognize the voice instruction of each user by using a voiceprint recognition method to obtain a unique voiceprint ID of each user;
and a second searching unit 12, configured to search, in a pre-stored database, user IDs corresponding to the unique voiceprints according to the unique voiceprint ID of each user.
Wherein the system further comprises:
and a registering module 70, configured to, when there is a voice instruction whose user ID cannot be identified, recognize the voice instruction by voiceprint recognition, create a user ID for it, and register the user ID.
According to the voice control system provided by the invention, when the same voice command issued by at least two users at the same time is received, the voice command of each user is first recognized by voiceprint recognition, so that the user ID corresponding to each voice command can be identified. The control strategy corresponding to each user ID and the voice command is then searched for in a pre-stored database, where the control strategies corresponding to the voice commands issued by different users may differ. Finally, the control strategies corresponding to all the user IDs are processed in a preset manner; that is, the control strategies corresponding to the voice commands issued by different users are processed together to obtain a final target control strategy that integrates the control strategy of each user. Therefore, when multiple users issue the same control voice at the same time, the control voice of each user can be handled well, a combined control effect of multiple voice commands is achieved, the applicable range of voice control is widened, and the user experience is improved.
Furthermore, an embodiment of the present invention also proposes a storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
Furthermore, an embodiment of the present invention also provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the above method when executing the program.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (7)
1. A method for voice control, the method comprising:
when the same voice command sent by at least two users at the same time is received, the voice command of each user is respectively identified by adopting a voiceprint identification mode so as to identify a user ID corresponding to each voice command, wherein the user ID comprises information of name, gender and age;
respectively searching a control strategy corresponding to each user ID and the voice instruction in a pre-stored database;
processing the control strategies corresponding to all the user IDs according to a preset mode to obtain a target control strategy;
executing corresponding control behaviors according to the target control strategy;
the control strategy is composed of a plurality of control instructions, and the step of processing the control strategies corresponding to all the user IDs according to a preset mode to obtain the target control strategy comprises the following steps:
analyzing the control strategy corresponding to each user ID respectively to obtain all control instructions of the control strategy corresponding to each user ID;
judging whether control instructions with opposite control behaviors exist in all control strategies;
if not, combining the control instructions of all the control strategies to obtain a target control strategy;
if yes, searching a pre-stored database for the level information corresponding to each user ID;
and merging the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
2. The voice control method of claim 1, further comprising:
when voice information sent by at least two users at the same time is received, converting each piece of voice information into text information;
and performing semantic analysis on each piece of text information, and determining whether any two pieces of voice information are the same voice command according to the semantic analysis result corresponding to each piece of voice information.
3. The voice control method according to claim 1, wherein the step of recognizing the voice command of each user by a voiceprint recognition method to recognize the user ID corresponding to each voice command comprises:
respectively identifying the voice command of each user by adopting a voiceprint identification mode to obtain a unique voiceprint ID of each user;
and searching a user ID corresponding to each unique voiceprint in a pre-stored database according to the unique voiceprint ID of each user.
4. The voice control method of claim 1, further comprising:
and when there is a voice command whose user ID cannot be identified, recognizing the voice command by voiceprint recognition, creating a user ID for it, and registering the user ID.
5. A voice control system, the system comprising:
the identification module is used for identifying the voice command of each user by adopting a voiceprint identification mode when the same voice command sent by at least two users at the same time is received so as to identify the user ID corresponding to each voice command, wherein the user ID comprises information of name, gender and age;
the searching module is used for respectively searching the control strategies corresponding to each user ID and the voice instruction in a pre-stored database;
the processing module is used for processing the control strategies corresponding to all the user IDs according to a preset mode so as to obtain a target control strategy;
the execution module is used for executing corresponding control behaviors according to the target control strategy;
the control strategy is composed of a plurality of control instructions, and the processing module comprises:
the analysis unit is used for respectively analyzing the control strategy corresponding to each user ID so as to obtain all control instructions of the control strategy corresponding to each user ID;
the judging unit is used for judging whether control instructions with opposite control behaviors exist in all the control strategies;
the first merging unit is used for merging the control instructions of all the control strategies to obtain a target control strategy when the judging unit judges that the control instructions with opposite control behaviors do not exist in all the control strategies;
the processing module further comprises:
the first searching unit is used for searching a pre-stored database for the level information corresponding to each user ID when the judging unit judges that control instructions with opposite control behaviors exist in the control strategies;
and the second merging unit is used for merging the control instructions corresponding to the user ID whose level information indicates a higher level into the target control strategy.
6. A storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, carry out the steps of the method of any one of claims 1 to 4.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 4 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811321335.8A CN109559742B (en) | 2018-11-07 | 2018-11-07 | Voice control method, system, storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109559742A CN109559742A (en) | 2019-04-02 |
CN109559742B true CN109559742B (en) | 2021-06-04 |
Family
ID=65865839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811321335.8A Active CN109559742B (en) | 2018-11-07 | 2018-11-07 | Voice control method, system, storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109559742B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028835B (en) * | 2019-11-18 | 2022-08-09 | 北京小米移动软件有限公司 | Resource replacement method, device, system and computer readable storage medium |
CN111083065B (en) * | 2019-12-23 | 2022-05-20 | 珠海格力电器股份有限公司 | Method for preventing input command from being blocked, storage medium and computer equipment |
CN112071306A (en) * | 2020-08-26 | 2020-12-11 | 吴义魁 | Voice control method, system, readable storage medium and gateway equipment |
CN112349275A (en) * | 2020-11-10 | 2021-02-09 | 平安普惠企业管理有限公司 | Voice recognition method, device, equipment and medium suitable for multiple users |
CN112599136A (en) * | 2020-12-15 | 2021-04-02 | 江苏惠通集团有限责任公司 | Voice recognition method and device based on voiceprint recognition, storage medium and terminal |
CN115223552A (en) * | 2021-04-21 | 2022-10-21 | 博泰车联网科技(上海)股份有限公司 | Voice control method, terminal and computer storage medium |
CN113421567A (en) * | 2021-08-25 | 2021-09-21 | 江西影创信息产业有限公司 | Terminal equipment control method and system based on intelligent glasses and intelligent glasses |
CN113703331A (en) * | 2021-08-27 | 2021-11-26 | 武汉市惊叹号科技有限公司 | Distributed control system based on integrated platform of Internet of things |
CN118200473A (en) * | 2022-12-05 | 2024-06-14 | 中兴通讯股份有限公司 | Conference system control method, server and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101332928B1 (en) * | 2012-06-07 | 2013-11-26 | 주식회사 엘지유플러스 | System and method for providing multi tag information |
CN105444332A (en) * | 2014-08-19 | 2016-03-30 | 青岛海尔智能家电科技有限公司 | Equipment voice control method and device |
CN107707436A (en) * | 2017-09-18 | 2018-02-16 | 广东美的制冷设备有限公司 | Terminal control method, device and computer-readable recording medium |
CN108320753A (en) * | 2018-01-22 | 2018-07-24 | 珠海格力电器股份有限公司 | Control method, device and system of electrical equipment |
CN108389578A (en) * | 2018-02-09 | 2018-08-10 | 深圳市鹰硕技术有限公司 | Smart classroom speech control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||