CN103488401A - Voice assistant activating method and device - Google Patents
- Publication number: CN103488401A
- Application number: CN201310467383.9A
- Authority: CN (China)
- Prior art keywords: speech data, voice, library, stored, local voice library
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: User Interface Of Digital Computer (AREA)
Abstract
The invention provides a voice assistant activating method and device. The voice assistant can be activated from any interface with a simple operation, which improves the user experience. The voice assistant activating method comprises the following steps: receiving a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller; and starting the voice assistant according to the first indication signal. The voice assistant activating method and device are applicable to the technical field of electronic information.
Description
Technical field
The present invention relates to the field of electronic information technology, and in particular to a voice assistant activation method and device.
Background technology
With the improvement of people's living standards, intelligent terminals have become increasingly popular. The input modes of an intelligent terminal include manual input by the user and voice input, and voice input is increasingly applied on current intelligent terminals.
In the prior art, to activate the voice assistant, the user first needs to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon; only then does the speech recognition device respond to the user's voice input and obtain the speech data input by the user. Although this approach can activate the voice assistant, it cannot do so from any interface, and the operation is relatively complicated, which degrades the user experience.
Summary of the invention
The present invention provides a voice assistant activation method and device with which the voice assistant can be activated from any interface through a simple operation, thereby improving the user experience.
To achieve the above object, the embodiments of the present invention adopt the following technical solutions:
In a first aspect, a voice assistant activation method is provided, the method comprising:
receiving a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller;
starting the voice assistant according to the first indication signal.
In a first possible implementation of the first aspect, in combination with the first aspect, after the voice assistant is started according to the first indication signal, the method further comprises:
obtaining speech data input by the user;
determining whether the speech data can be recognized by a pre-stored local voice library;
if it is determined that the speech data can be recognized by the pre-stored local voice library, recognizing the speech data according to the pre-stored local voice library;
if it is determined that the speech data cannot be recognized by the pre-stored local voice library, recognizing the speech data according to a pre-stored network voice library.
In a second possible implementation of the first aspect, in combination with the first possible implementation of the first aspect, determining whether the speech data can be recognized by the pre-stored local voice library comprises:
matching the speech data against the speech data in the local voice library to obtain a first matching value;
if the first matching value is within a first threshold range, determining that the speech data can be recognized by the pre-stored local voice library;
if the first matching value is not within the first threshold range, determining that the speech data cannot be recognized by the pre-stored local voice library.
In a third possible implementation of the first aspect, in combination with the second possible implementation of the first aspect, speech data corresponding to a first instruction is pre-stored in the local voice library, wherein the first instruction comprises a behavior part and an object part;
if the speech data input by the user is the first instruction, matching the speech data against the speech data in the local voice library specifically comprises:
matching the speech data of the behavior part of the first instruction against the speech data in the local voice library, and matching the speech data of the object part of the first instruction against first speech data in the local voice library, wherein the first speech data is the speech data in the local voice library corresponding to the behavior part of the first instruction.
In a fourth possible implementation of the first aspect, in combination with the first aspect or any of the first to third possible implementations of the first aspect, after the first indication signal is received, the method further comprises:
displaying a voice assistant application interface according to a pre-configured first display mode, wherein the first display mode is a floating-window display mode with a translucent background.
In a second aspect, a voice assistant activation device is provided, the device comprising a receiving unit and a start unit;
the receiving unit is configured to receive a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller;
the start unit is configured to start the voice assistant according to the first indication signal received by the receiving unit.
In a first possible implementation of the second aspect, in combination with the second aspect, the device further comprises an acquiring unit, a determining unit and a recognition unit;
the acquiring unit is configured to obtain speech data input by the user after the start unit starts the voice assistant according to the first indication signal;
the determining unit is configured to determine whether the speech data obtained by the acquiring unit can be recognized by a pre-stored local voice library;
the recognition unit is configured to recognize the speech data according to the pre-stored local voice library if it is determined that the speech data can be recognized by the pre-stored local voice library;
the recognition unit is further configured to recognize the speech data according to a pre-stored network voice library if it is determined that the speech data cannot be recognized by the pre-stored local voice library.
In a second possible implementation of the second aspect, in combination with the first possible implementation of the second aspect, the determining unit comprises an acquisition module and a determination module;
the acquisition module is configured to match the speech data against the speech data in the local voice library to obtain a first matching value;
the determination module is configured to determine that the speech data can be recognized by the pre-stored local voice library if the first matching value is within a first threshold range;
the determination module is further configured to determine that the speech data cannot be recognized by the pre-stored local voice library if the first matching value is not within the first threshold range.
In a third possible implementation of the second aspect, in combination with the second possible implementation of the second aspect, speech data corresponding to a first instruction is pre-stored in the local voice library, wherein the first instruction comprises a behavior part and an object part;
the acquisition module is specifically configured to:
match the speech data of the behavior part of the first instruction against the speech data in the local voice library, and match the speech data of the object part of the first instruction against first speech data in the local voice library, wherein the first speech data is the speech data in the local voice library corresponding to the behavior part of the first instruction.
In a fourth possible implementation of the second aspect, in combination with the second aspect or any of the first to third possible implementations of the second aspect, the device further comprises a display unit;
the display unit is configured to display a voice assistant application interface according to a pre-configured first display mode after the receiving unit receives the first indication signal, wherein the first display mode is a floating-window display mode with a translucent background.
The present invention provides a voice assistant activation method and device. The method comprises: receiving a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started and is generated after a user triggers a first button on a controller; and starting the voice assistant according to the first indication signal. With the voice assistant activation method and device provided by the embodiments of the present invention, the voice assistant activation device starts the voice assistant after receiving the first indication signal, and the first indication signal is generated after the user triggers the first button on the controller, so the voice assistant can be started by way of "one-key activation"; the user does not need to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon to start the voice assistant. Therefore, the voice assistant activation method and device provided by the embodiments of the present invention enable the voice assistant to be activated from any interface with a simple operation, which improves the user experience.
Description of the drawings
Fig. 1 shows a voice assistant activation method provided by Embodiment 1 of the present invention;
Fig. 2 shows another voice assistant activation method provided by Embodiment 2 of the present invention;
Fig. 3 shows a further voice assistant activation method provided by Embodiment 2 of the present invention;
Fig. 4 is a first schematic structural diagram of a voice assistant activation device provided by Embodiment 3 of the present invention;
Fig. 5 is a second schematic structural diagram of the voice assistant activation device provided by Embodiment 3 of the present invention;
Fig. 6 is a third schematic structural diagram of the voice assistant activation device provided by Embodiment 3 of the present invention;
Fig. 7 is a fourth schematic structural diagram of the voice assistant activation device provided by Embodiment 3 of the present invention.
Detailed description of embodiments
A voice assistant activation method and device provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
An embodiment of the present invention provides a voice assistant activation method. As shown in Fig. 1, the method comprises:
101. The voice assistant activation device receives a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller.
Specifically, in the prior art, before the speech data input by the user can be obtained, the user first needs to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon; only then does the speech recognition device respond to the user's voice input and obtain the speech data input by the user.
In the embodiment of the present invention, before obtaining the speech data input by the user, the speech recognition device first receives the first indication signal, which indicates that the voice assistant is to be started and is generated after the user triggers the first button on the controller. That is, in the embodiment of the present invention, the user starts the voice assistant by way of "one-key activation"; compared with the way the voice assistant is started in the prior art, this way can start the voice assistant under any circumstances, making the operation more convenient and the startup faster.
102. The voice assistant activation device starts the voice assistant according to the first indication signal.
Specifically, in the embodiment of the present invention, after receiving the first indication signal, the voice assistant activation device starts the voice assistant according to the first indication signal.
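The "one-key activation" flow of steps 101 and 102 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the key code `KEY_VOICE` and the class and method names are assumptions introduced for the example.

```python
KEY_VOICE = 0x56  # hypothetical key code of the first button on the controller

class VoiceAssistantActivator:
    def __init__(self):
        self.assistant_running = False

    def on_indicator_signal(self, key_code):
        """Step 101: receive the first indication signal generated by a button press."""
        if key_code == KEY_VOICE:
            self.start_assistant()

    def start_assistant(self):
        """Step 102: start the voice assistant according to the first indication signal."""
        self.assistant_running = True

activator = VoiceAssistantActivator()
activator.on_indicator_signal(KEY_VOICE)  # user triggers the first button
print(activator.assistant_running)  # → True
```

Because the signal comes from a dedicated controller button rather than an on-screen icon, this handler can run regardless of which interface is currently displayed.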
An embodiment of the present invention provides a voice assistant activation method, the method comprising: receiving a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started and is generated after a user triggers a first button on a controller; and starting the voice assistant according to the first indication signal. With the voice assistant activation method provided by the embodiment of the present invention, the voice assistant activation device starts the voice assistant after receiving the first indication signal, and the first indication signal is generated after the user triggers the first button on the controller, so the voice assistant can be started by way of "one-key activation"; the user does not need to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon to start the voice assistant. Therefore, the voice assistant activation method provided by the embodiment of the present invention enables the voice assistant to be activated from any interface with a simple operation, which improves the user experience.
Embodiment 2
An embodiment of the present invention provides a voice assistant activation method. As shown in Fig. 2, the method comprises:
201. The voice assistant activation device receives a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller.
Specifically, in the prior art, before the speech data input by the user can be obtained, the user first needs to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon; only then does the speech recognition device respond to the user's voice input and obtain the speech data input by the user.
In the embodiment of the present invention, before obtaining the speech data input by the user, the speech recognition device first receives the first indication signal, which indicates that the voice assistant is to be started and is generated after the user triggers the first button on the controller. That is, the user starts the voice assistant by way of "one-key activation"; compared with the way the voice assistant is started in the prior art, this way can start the voice assistant under any circumstances, making the operation more convenient and the startup faster.
202. The voice assistant activation device starts the voice assistant according to the first indication signal.
Specifically, in the embodiment of the present invention, after receiving the first indication signal, the voice assistant activation device starts the voice assistant according to the first indication signal.
203. The voice assistant activation device obtains speech data input by the user.
Specifically, when performing speech recognition, the voice assistant activation device can obtain the speech data input by the user. The speech data may be an instruction, for example "shut down" or "open the browser"; the embodiment of the present invention places no specific restriction on this.
It should be noted that the voice assistant activation device is included in an intelligent terminal, and the intelligent terminal may be a television or a pad; the embodiment of the present invention places no specific restriction on this.
204. The voice assistant activation device determines whether the speech data can be recognized by a pre-stored local voice library.
Specifically, after obtaining the speech data input by the user, the voice assistant activation device first determines whether the speech data can be recognized by the pre-stored local voice library. If it determines that the speech data can be recognized by the pre-stored local voice library, step 205 is performed;
if it determines that the speech data cannot be recognized by the pre-stored local voice library, step 206 is performed.
Specifically, determining whether the speech data can be recognized by the pre-stored local voice library comprises:
matching the speech data against the speech data in the local voice library to obtain a first matching value;
if the first matching value is within a first threshold range, determining that the speech data can be recognized by the pre-stored local voice library;
if the first matching value is not within the first threshold range, determining that the speech data cannot be recognized by the pre-stored local voice library.
It should be noted that in the above method of determining whether the speech data can be recognized by the pre-stored local voice library, the first threshold is a numerical range set for the result of matching the speech data against the local voice library: only when the matching result lies in this higher range is it determined that the speech data can be recognized by the pre-stored local voice library; otherwise it is determined that the speech data cannot be recognized by the pre-stored local voice library. This avoids inaccurate matches against the local voice library caused by a noisy environment or by the user's unclear articulation, and makes the recognition result more accurate.
205. If it is determined that the speech data can be recognized by the pre-stored local voice library, the voice assistant activation device recognizes the speech data according to the pre-stored local voice library.
Specifically, if the voice assistant activation device determines that the speech data can be recognized by the pre-stored local voice library, it recognizes the speech data according to the pre-stored local voice library. In this way, voice input and voice control are possible even without a network connection; moreover, since the local voice library is smaller than the prior-art network voice library, recognizing the speech data according to the pre-stored local voice library can make the recognition result more accurate.
It should be noted that some common instructions are pre-stored in the local voice library. The method of recognizing speech data by the pre-stored local voice library may be called "offline recognition": a local voice library is pre-stored in the voice assistant application, and after the user completes the voice input, the words that most closely match the input content are found in the local voice library. The embodiment of the present invention performs speech recognition on the speech data input by the user by means of "offline recognition".
206. If it is determined that the speech data cannot be recognized by the pre-stored local voice library, the voice assistant activation device recognizes the speech data according to the pre-stored network voice library.
Specifically, when the voice assistant activation device determines that the speech data cannot be recognized by the pre-stored local voice library, it can recognize the speech data according to a prior-art network voice library. The method of recognizing speech data by the network voice library is called "online recognition"; the dual-insurance mode combining "offline recognition" and "online recognition" makes the speech recognition more accurate.
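The dual-insurance flow of steps 203-206 can be sketched as follows. The patent does not specify how the first matching value is computed, so `difflib.SequenceMatcher` on text, the threshold value 0.8, and the library contents are illustrative assumptions standing in for real acoustic matching, and `network_recognize` is a placeholder for the online recognizer.

```python
from difflib import SequenceMatcher

LOCAL_LIBRARY = ["shutdown", "open browser", "volume up"]  # pre-stored common instructions
FIRST_THRESHOLD = 0.8  # lower bound of the "first threshold range" (assumed value)

def match_score(speech: str, candidate: str) -> float:
    # Stand-in for acoustic matching; returns a first matching value in [0, 1].
    return SequenceMatcher(None, speech, candidate).ratio()

def network_recognize(speech: str) -> str:
    # Placeholder for "online recognition" against the network voice library.
    return speech

def recognize(speech: str):
    # Step 204: match against the local library and check the threshold range.
    best = max(LOCAL_LIBRARY, key=lambda c: match_score(speech, c))
    if match_score(speech, best) >= FIRST_THRESHOLD:
        return ("offline", best)                  # step 205: local library recognizes it
    return ("online", network_recognize(speech))  # step 206: fall back to the network library

print(recognize("open browser"))  # → ('offline', 'open browser')
print(recognize("what is the weather in Beijing"))  # falls back to online recognition
```

The threshold check is what keeps a noisy or unclear utterance from being forced onto the small local library: anything scoring below the range is handed to the larger network library instead.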
Further, an embodiment of the present invention also provides a speech recognition method. As shown in Fig. 3, the method comprises:
301. The voice assistant activation device receives a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started, and the first indication signal is generated after a user triggers a first button on a controller.
Specifically, in the prior art, before the speech data input by the user can be obtained, the user first needs to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon; only then does the voice assistant activation device respond to the user's voice input and obtain the speech data input by the user.
In the embodiment of the present invention, before obtaining the speech data input by the user, the voice assistant activation device first receives the first indication signal, which indicates that the voice assistant is to be started and is generated after the user triggers the first button on the controller. That is, the user starts the voice assistant by way of "one-key activation"; compared with the way the voice assistant is started in the prior art, this way can start the voice assistant under any circumstances, making the operation more convenient and the startup faster.
302. The voice assistant activation device starts the voice assistant according to the first indication signal.
Specifically, in the embodiment of the present invention, after receiving the first indication signal, the voice assistant activation device starts the voice assistant according to the first indication signal.
303. The voice assistant activation device obtains speech data input by the user.
Specifically, when performing speech recognition, the voice assistant activation device can obtain the speech data input by the user. The speech data may be an instruction, for example "shut down" or "open the browser"; the embodiment of the present invention places no specific restriction on this.
It should be noted that the voice assistant activation device is included in an intelligent terminal, and the intelligent terminal may be a television or a pad; the embodiment of the present invention places no specific restriction on this.
304. The voice assistant activation device matches the speech data against the speech data in the local voice library to obtain a first matching value.
Specifically, after obtaining the speech data input by the user, the voice assistant activation device can match it against the speech data in the voice library to obtain the first matching value. The first matching value is a figure that characterizes the degree of match between the speech data and the speech data in the local voice library: if the two match closely, the first matching value is high; otherwise it is low.
Specifically, speech data corresponding to a first instruction is pre-stored in the local voice library, wherein the first instruction comprises a behavior part and an object part;
if the speech data input by the user is the first instruction, matching the speech data against the speech data in the local voice library may specifically comprise:
matching the speech data of the behavior part of the first instruction against the speech data in the local voice library, and matching the speech data of the object part of the first instruction against first speech data in the local voice library, wherein the first speech data is the speech data in the local voice library corresponding to the behavior part of the first instruction.
For example, suppose the user wants to open the browser. After the speech data "open the browser" input by the user is obtained, the action "open" is recognized first, indicating that some application is to be opened; the object "browser" is then recognized from the application list corresponding to "open". This recognition mode is faster.
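The two-part matching in the example above can be sketched as follows, assuming the local voice library maps each behavior to the objects valid for it; the library contents and function names are illustrative, and text splitting stands in for matching the behavior and object speech data.

```python
# Local voice library keyed by behavior part; each value is the "first speech
# data" — the entries corresponding to that behavior (contents are illustrative).
LOCAL_LIBRARY = {
    "open": ["browser", "music player", "settings"],
    "close": ["browser", "music player"],
}

def match_instruction(speech: str):
    # Split the first instruction into its behavior part and object part.
    behavior, _, obj = speech.partition(" ")
    if behavior not in LOCAL_LIBRARY:
        return None  # behavior part does not match the local voice library
    # Match the object part only against the entries tied to this behavior,
    # which narrows the search and speeds recognition up.
    if obj in LOCAL_LIBRARY[behavior]:
        return (behavior, obj)
    return None

print(match_instruction("open browser"))  # → ('open', 'browser')
print(match_instruction("open unknown"))  # → None
```

Matching the behavior first means the object is never compared against the whole library, only against the short list of objects that the recognized action can take.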
305. If the first matching value is within the first threshold range, the voice assistant activation device determines that the speech data can be recognized by the pre-stored local voice library.
Specifically, the first threshold is a numerical range set for the result of matching the speech data against the local voice library: only when the matching result lies in this higher range is it determined that the speech data can be recognized by the pre-stored local voice library; otherwise it is determined that the speech data cannot be recognized by the pre-stored local voice library. This avoids inaccurate matches against the local voice library caused by a noisy environment or by the user's unclear articulation, and makes the recognition result more accurate.
Specifically, if the voice assistant activation device determines that the speech data can be recognized by the pre-stored local voice library, step 307 is performed.
306. If the first matching value is not within the first threshold range, the voice assistant activation device determines that the speech data cannot be recognized by the pre-stored local voice library.
Specifically, according to the description of step 305, if the first matching value is not within the first threshold range, the voice assistant activation device determines that the speech data cannot be recognized by the pre-stored local voice library.
Specifically, if the voice assistant activation device determines that the speech data cannot be recognized by the pre-stored local voice library, step 308 is performed.
307. If it is determined that the speech data can be recognized by the pre-stored local voice library, the voice assistant activation device recognizes the speech data according to the pre-stored local voice library.
Specifically, if the voice assistant activation device determines that the speech data can be recognized by the pre-stored local voice library, it recognizes the speech data according to the pre-stored local voice library. In this way, voice input and voice control are possible even without a network connection; moreover, since the local voice library is smaller than the prior-art network voice library, recognizing the speech data according to the pre-stored local voice library can make the recognition result more accurate.
It should be noted that some common instructions are pre-stored in the local voice library. The method of recognizing speech data by the pre-stored local voice library may be called "offline recognition": a local voice library is pre-stored in the voice assistant application, and after the user completes the voice input, the words that most closely match the input content are found in the local voice library. The embodiment of the present invention performs speech recognition on the speech data input by the user by means of "offline recognition".
308. If it is determined that the speech data cannot be recognized by the pre-stored local voice library, the voice assistant activation device recognizes the speech data according to the pre-stored network voice library.
Specifically, when the voice assistant activation device determines that the speech data cannot be recognized by the pre-stored local voice library, it can recognize the speech data according to a prior-art network voice library. The method of recognizing speech data by the network voice library is called "online recognition"; the dual-insurance mode combining "offline recognition" and "online recognition" makes the speech recognition more accurate.
Further, after the first indication signal is received, the method also comprises:
displaying a voice assistant application interface according to a pre-configured first display mode, wherein the first display mode is a floating-window display mode with a translucent background.
Specifically, in the embodiment of the present invention, so that the normal operation of the current background application is not affected while the voice assistant activation device is in use, the display mode of the voice assistant application interface, i.e. the first display mode, is also pre-configured; the first display mode is a floating-window display mode with a translucent background.
For example, suppose the user wants to make a voice input while watching a video or playing an online game. After receiving the first indication signal, the voice assistant activation device can present a floating-window display interface with a translucent background, so that the user can use the voice assistant activation device without missing any of the live video or any information about the game level being played.
It should be noted that displaying the voice assistant application interface according to the pre-configured first display mode is an action performed by the display module in the voice assistant activation device: the display begins after the voice assistant activation device receives the first indication signal and ends only when the voice assistant activation device stops working, and the steps in the above embodiment have no necessary order.
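The pre-configured first display mode can be modeled as follows. The field names, the alpha value of 0.5, and the window-manager parameters are assumptions for illustration; the patent specifies only a floating window with a translucent background.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisplayMode:
    floating_window: bool
    background_alpha: float  # 0.0 = fully transparent, 1.0 = opaque

# The "first display mode": a floating window with a translucent background
# (the 0.5 alpha is an assumed value, not specified by the patent).
FIRST_DISPLAY_MODE = DisplayMode(floating_window=True, background_alpha=0.5)

def window_params(mode: DisplayMode) -> dict:
    """Translate a display mode into hypothetical window-manager parameters."""
    return {
        "overlay": mode.floating_window,  # drawn above the current application
        "alpha": mode.background_alpha,   # translucent background
        "steal_focus": False,             # do not interrupt the background application
    }

print(window_params(FIRST_DISPLAY_MODE))
```

Keeping the overlay focus-free and translucent is what lets the background video or game continue visibly and uninterrupted while the assistant is on screen.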
An embodiment of the present invention provides a voice assistant activation method, the method comprising: receiving a first indication signal, wherein the first indication signal indicates that the voice assistant is to be started and is generated after a user triggers a first button on a controller; and starting the voice assistant according to the first indication signal. With the voice assistant activation method provided by the embodiment of the present invention, the voice assistant activation device starts the voice assistant after receiving the first indication signal, and the first indication signal is generated after the user triggers the first button on the controller, so the voice assistant can be started by way of "one-key activation"; the user does not need to enter the application-list interface on the intelligent terminal that contains the voice assistant application icon and click that icon to start the voice assistant. Therefore, the voice assistant activation method provided by the embodiment of the present invention enables the voice assistant to be activated from any interface with a simple operation, which improves the user experience.
Embodiment tri-,
An embodiment of the present invention provides a voice assistant activation device 400. As shown in Figure 4, the device 400 comprises a receiving unit 401 and a start unit 402.
The receiving unit 401 is configured to receive the first indication signal, where the first indication signal instructs the voice assistant to start and is generated after the user triggers the first button on the controller.
The start unit 402 is configured to start the voice assistant according to the first indication signal received by the receiving unit 401.
Further, as shown in Figure 5, the device 400 also comprises an acquiring unit 403, a determining unit 404, and a recognition unit 405.
The acquiring unit 403 is configured to obtain the speech data input by the user after the start unit 402 starts the voice assistant according to the first indication signal.
The determining unit 404 is configured to determine whether the speech data obtained by the acquiring unit 403 can be recognized by the pre-stored local voice library.
Further, as shown in Figure 6, the determining unit 404 comprises an acquisition module 4041 and a determination module 4042.
The acquisition module 4041 is configured to match the speech data with the speech data in the local voice library to obtain a first matching value.
The determination module 4042 is configured to determine that the speech data can be recognized by the pre-stored local voice library if the first matching value is within a first threshold range.
The determination module 4042 is further configured to determine that the speech data cannot be recognized by the pre-stored local voice library if the first matching value is not within the first threshold range.
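The local-versus-network decision made by these modules can be sketched as follows. The similarity measure and the threshold value are invented for illustration; the patent only specifies "a first matching value" and "a first threshold range", not how either is computed.

```python
# Toy sketch of the threshold decision: compute a "first matching value"
# between the input speech and the pre-stored local voice library; if it
# falls within the threshold range, recognize locally, otherwise fall back
# to the pre-stored network voice library. The character-overlap score
# below is a made-up stand-in for a real acoustic match.

def first_matching_value(speech: str, library: list[str]) -> float:
    def overlap(a: str, b: str) -> float:
        common = len(set(a) & set(b))
        return common / max(len(set(a) | set(b)), 1)
    return max((overlap(speech, entry) for entry in library), default=0.0)


def choose_library(speech: str, local_library: list[str],
                   threshold: float = 0.8) -> str:
    value = first_matching_value(speech, local_library)
    return "local" if value >= threshold else "network"
```

A plausible design rationale: the local library is small and fast but limited, so only high-confidence matches are handled on the device, while everything else is deferred to the larger network library.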
Further, speech data corresponding to a first instruction is pre-stored in the local voice library, where the first instruction comprises a behavior part and an object part.
Matching the speech data with the speech data in the local voice library then specifically comprises: matching the speech data of the behavior part of the first instruction with the speech data in the local voice library, and matching the speech data of the object part of the first instruction with first speech data in the local voice library, where the first speech data is the speech data in the local voice library that corresponds to the behavior part of the first instruction.
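The two-stage matching just described might be sketched like this. The library contents and the dictionary layout are assumptions for illustration only.

```python
# Hypothetical local voice library keyed by the behavior part of an
# instruction. The values are the "first speech data": the object-part
# entries stored under that behavior. E.g. the instruction "open camera"
# has behavior part "open" and object part "camera".

LOCAL_VOICE_LIBRARY = {
    "open": ["browser", "camera"],
    "play": ["music", "video"],
}


def match_instruction(behavior: str, obj: str) -> bool:
    # First match the behavior part against the library; then match the
    # object part only against the speech data stored for that behavior.
    first_speech_data = LOCAL_VOICE_LIBRARY.get(behavior, [])
    return obj in first_speech_data
```

Restricting the object-part search to the entries filed under the matched behavior narrows the candidate set, which is presumably why the patent splits the instruction into two parts.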
Further, as shown in Figure 7, the device 400 also comprises a display unit 406.
The display unit 406 is configured to display the voice assistant application interface according to the pre-configured first display mode after the receiving unit 401 receives the first indication signal, where the first display mode is a floating-window display mode with a translucent background.
Specifically, for the way in which the voice assistant activation device activates the voice assistant, reference may be made to the descriptions of Embodiment 1 and Embodiment 2, which are not repeated here.
An embodiment of the present invention provides a voice assistant activation device comprising a receiving unit and a start unit. The receiving unit receives a first indication signal that instructs the voice assistant to start, where the first indication signal is generated after the user triggers a first button on a controller; the start unit starts the voice assistant according to the first indication signal. With this device, the voice assistant is started as soon as the receiving unit receives the first indication signal, which is produced when the user triggers the first button on the controller. The voice assistant can thus be started by "one-key activation": the user no longer needs to enter the application-list interface on the intelligent terminal and tap the voice assistant application icon. The device therefore allows the voice assistant to be activated from any interface, the operation is simple, and the user experience is improved.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is illustrated. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. Moreover, the mutual couplings or direct couplings shown or discussed may be implemented through some interfaces, and the indirect couplings between devices may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components displayed as units may be one physical unit or multiple physical units; they may be located in one place or distributed over multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A voice assistant activation method, characterized in that the method comprises:
receiving a first indication signal, wherein the first indication signal instructs a voice assistant to start, and the first indication signal is generated after a user triggers a first button on a controller; and
starting the voice assistant according to the first indication signal.
2. The method according to claim 1, characterized in that, after starting the voice assistant according to the first indication signal, the method further comprises:
obtaining speech data input by the user;
determining whether the speech data can be recognized by a pre-stored local voice library;
if it is determined that the speech data can be recognized by the pre-stored local voice library, recognizing the speech data according to the pre-stored local voice library; and
if it is determined that the speech data cannot be recognized by the pre-stored local voice library, recognizing the speech data according to a pre-stored network voice library.
3. The method according to claim 2, characterized in that determining whether the speech data can be recognized by the pre-stored local voice library comprises:
matching the speech data with speech data in the local voice library to obtain a first matching value;
if the first matching value is within a first threshold range, determining that the speech data can be recognized by the pre-stored local voice library; and
if the first matching value is not within the first threshold range, determining that the speech data cannot be recognized by the pre-stored local voice library.
4. The method according to claim 3, characterized in that speech data corresponding to a first instruction is pre-stored in the local voice library, wherein the first instruction comprises a behavior part and an object part; and
if the speech data input by the user is the first instruction, matching the speech data with the speech data in the local voice library specifically comprises:
matching speech data of the behavior part of the first instruction with the speech data in the local voice library, and matching speech data of the object part of the first instruction with first speech data in the local voice library, wherein the first speech data is the speech data in the local voice library corresponding to the behavior part of the first instruction.
5. The method according to any one of claims 1 to 4, characterized in that, after receiving the first indication signal, the method further comprises:
displaying a voice assistant application interface according to a pre-configured first display mode, wherein the first display mode is a floating-window display mode with a translucent background.
6. A voice assistant activation device, characterized in that the device comprises a receiving unit and a start unit;
the receiving unit is configured to receive a first indication signal, wherein the first indication signal instructs a voice assistant to start, and the first indication signal is generated after a user triggers a first button on a controller; and
the start unit is configured to start the voice assistant according to the first indication signal received by the receiving unit.
7. The device according to claim 6, characterized in that the device further comprises an acquiring unit, a determining unit, and a recognition unit;
the acquiring unit is configured to obtain speech data input by the user after the start unit starts the voice assistant according to the first indication signal;
the determining unit is configured to determine whether the speech data obtained by the acquiring unit can be recognized by a pre-stored local voice library;
the recognition unit is configured to recognize the speech data according to the pre-stored local voice library if it is determined that the speech data can be recognized by the pre-stored local voice library; and
the recognition unit is further configured to recognize the speech data according to a pre-stored network voice library if it is determined that the speech data cannot be recognized by the pre-stored local voice library.
8. The device according to claim 7, characterized in that the determining unit comprises an acquisition module and a determination module;
the acquisition module is configured to match the speech data with speech data in the local voice library to obtain a first matching value;
the determination module is configured to determine that the speech data can be recognized by the pre-stored local voice library if the first matching value is within a first threshold range; and
the determination module is further configured to determine that the speech data cannot be recognized by the pre-stored local voice library if the first matching value is not within the first threshold range.
9. The device according to claim 8, characterized in that speech data corresponding to a first instruction is pre-stored in the local voice library, wherein the first instruction comprises a behavior part and an object part; and
the acquisition module is specifically configured to:
match speech data of the behavior part of the first instruction with the speech data in the local voice library, and match speech data of the object part of the first instruction with first speech data in the local voice library, wherein the first speech data is the speech data in the local voice library corresponding to the behavior part of the first instruction.
10. The device according to any one of claims 6 to 9, characterized in that the device further comprises a display unit; and
the display unit is configured to display a voice assistant application interface according to a pre-configured first display mode after the receiving unit receives the first indication signal, wherein the first display mode is a floating-window display mode with a translucent background.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310467383.9A CN103488401A (en) | 2013-09-30 | 2013-09-30 | Voice assistant activating method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310467383.9A CN103488401A (en) | 2013-09-30 | 2013-09-30 | Voice assistant activating method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103488401A true CN103488401A (en) | 2014-01-01 |
Family
ID=49828678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310467383.9A Pending CN103488401A (en) | 2013-09-30 | 2013-09-30 | Voice assistant activating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103488401A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105467978A (en) * | 2016-01-13 | 2016-04-06 | 北京光年无限科技有限公司 | Multimoding activation data processing method and system, and intelligent robot |
CN105721696A (en) * | 2016-02-16 | 2016-06-29 | 广东欧珀移动通信有限公司 | Mobile terminal information inputting method and device, and mobile terminal |
CN106572418A (en) * | 2015-10-09 | 2017-04-19 | 芋头科技(杭州)有限公司 | Voice assistant expansion device and working method therefor |
CN106898349A (en) * | 2017-01-11 | 2017-06-27 | 梅其珍 | A kind of Voice command computer method and intelligent sound assistant system |
CN106959746A (en) * | 2016-01-12 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | The processing method and processing device of speech data |
WO2017206661A1 (en) * | 2016-05-30 | 2017-12-07 | 深圳市鼎盛智能科技有限公司 | Voice recognition method and system |
CN107491286A (en) * | 2017-07-05 | 2017-12-19 | 广东艾檬电子科技有限公司 | Pronunciation inputting method, device, mobile terminal and the storage medium of mobile terminal |
CN107643921A (en) * | 2016-07-22 | 2018-01-30 | 联想(新加坡)私人有限公司 | For activating the equipment, method and computer-readable recording medium of voice assistant |
CN108459880A (en) * | 2018-01-29 | 2018-08-28 | 出门问问信息科技有限公司 | voice assistant awakening method, device, equipment and storage medium |
CN108874460A (en) * | 2017-05-11 | 2018-11-23 | 塞舌尔商元鼎音讯股份有限公司 | Speech transmission device and its method for executing voice assistant program |
CN109741737A (en) * | 2018-05-14 | 2019-05-10 | 北京字节跳动网络技术有限公司 | A kind of method and device of voice control |
US10664533B2 (en) | 2017-05-24 | 2020-05-26 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to determine response cue for digital assistant based on context |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102377622A (en) * | 2010-08-17 | 2012-03-14 | 鸿富锦精密工业(深圳)有限公司 | Remote control interface and remote control method thereof |
CN102708865A (en) * | 2012-04-25 | 2012-10-03 | 北京车音网科技有限公司 | Method, device and system for voice recognition |
CN102883041A (en) * | 2012-08-02 | 2013-01-16 | 聚熵信息技术(上海)有限公司 | Voice control device and method for mobile terminal |
CN103247291A (en) * | 2013-05-07 | 2013-08-14 | 华为终端有限公司 | Updating method, device, and system of voice recognition device |
CN103257787A (en) * | 2013-05-16 | 2013-08-21 | 北京小米科技有限责任公司 | Method and device for starting voice assistant application |
-
2013
- 2013-09-30 CN CN201310467383.9A patent/CN103488401A/en active Pending
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106572418A (en) * | 2015-10-09 | 2017-04-19 | 芋头科技(杭州)有限公司 | Voice assistant expansion device and working method therefor |
CN106959746A (en) * | 2016-01-12 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | The processing method and processing device of speech data |
CN105467978A (en) * | 2016-01-13 | 2016-04-06 | 北京光年无限科技有限公司 | Multimoding activation data processing method and system, and intelligent robot |
CN105467978B (en) * | 2016-01-13 | 2018-11-30 | 北京光年无限科技有限公司 | Multi-modal activation data processing method, system and intelligent robot |
CN105721696B (en) * | 2016-02-16 | 2020-01-10 | Oppo广东移动通信有限公司 | Mobile terminal information input method and device and mobile terminal |
CN105721696A (en) * | 2016-02-16 | 2016-06-29 | 广东欧珀移动通信有限公司 | Mobile terminal information inputting method and device, and mobile terminal |
WO2017206661A1 (en) * | 2016-05-30 | 2017-12-07 | 深圳市鼎盛智能科技有限公司 | Voice recognition method and system |
CN107643921A (en) * | 2016-07-22 | 2018-01-30 | 联想(新加坡)私人有限公司 | For activating the equipment, method and computer-readable recording medium of voice assistant |
US10621992B2 (en) | 2016-07-22 | 2020-04-14 | Lenovo (Singapore) Pte. Ltd. | Activating voice assistant based on at least one of user proximity and context |
CN106898349A (en) * | 2017-01-11 | 2017-06-27 | 梅其珍 | A kind of Voice command computer method and intelligent sound assistant system |
CN108874460A (en) * | 2017-05-11 | 2018-11-23 | 塞舌尔商元鼎音讯股份有限公司 | Speech transmission device and its method for executing voice assistant program |
CN108874460B (en) * | 2017-05-11 | 2022-12-02 | 达发科技股份有限公司 | Voice transmission device and method for executing voice assistant program |
US10664533B2 (en) | 2017-05-24 | 2020-05-26 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to determine response cue for digital assistant based on context |
CN107491286A (en) * | 2017-07-05 | 2017-12-19 | 广东艾檬电子科技有限公司 | Pronunciation inputting method, device, mobile terminal and the storage medium of mobile terminal |
CN108459880A (en) * | 2018-01-29 | 2018-08-28 | 出门问问信息科技有限公司 | voice assistant awakening method, device, equipment and storage medium |
CN109741737A (en) * | 2018-05-14 | 2019-05-10 | 北京字节跳动网络技术有限公司 | A kind of method and device of voice control |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103488384A (en) | Voice assistant application interface display method and device | |
CN103488401A (en) | Voice assistant activating method and device | |
CN103489444A (en) | Speech recognition method and device | |
CN107919130B (en) | Cloud-based voice processing method and device | |
CN105593868B (en) | Fingerprint identification method and device and mobile terminal | |
CN107644642B (en) | Semantic recognition method and device, storage medium and electronic equipment | |
CN107591155B (en) | Voice recognition method and device, terminal and computer readable storage medium | |
CN102568478B (en) | Video play control method and system based on voice recognition | |
US9449163B2 (en) | Electronic device and method for logging in application program of the electronic device | |
JP2019185062A (en) | Voice interaction method, terminal apparatus, and computer readable recording medium | |
CN105551498A (en) | Voice recognition method and device | |
US10559304B2 (en) | Vehicle-mounted voice recognition device, vehicle including the same, vehicle-mounted voice recognition system, and method for controlling the same | |
CN103594088A (en) | Information processing method and electronic equipment | |
US9870772B2 (en) | Guiding device, guiding method, program, and information storage medium | |
CN102945671A (en) | Voice recognition method | |
CN103092981B (en) | A kind of method and electronic equipment setting up phonetic symbol | |
CN103106061A (en) | Voice input method and device | |
CN204496731U (en) | A kind of Voice command dictation device | |
CN104267922A (en) | Information processing method and electronic equipment | |
CN105679357A (en) | Mobile terminal and voiceprint identification-based recording method thereof | |
CN106228047B (en) | A kind of application icon processing method and terminal device | |
CN111179915A (en) | Age identification method and device based on voice | |
CN104091601A (en) | Method and device for detecting music quality | |
EP3593346A1 (en) | Graphical data selection and presentation of digital content | |
CN112017650A (en) | Voice control method and device of electronic equipment, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20140101 |