CN114327198A - Control function pushing method and device
- Publication number: CN114327198A
- Application number: CN202011057612.6A
- Authority: CN (China)
- Legal status: Pending
Abstract
The embodiments of this application disclose a control function pushing method and device, relating to the field of electronic devices. The control instructions pushed by the terminal are more personalized, which improves the user experience. The specific scheme is as follows: when a user needs to operate an application on the terminal, such as a first application, the user may open the first application, and the terminal displays an interface of the first application. The terminal then receives a first input for waking up the voice control function; in response to the first input, the terminal displays at least one control instruction. The at least one control instruction corresponds to the first application, the control instructions corresponding to different applications are different, and each of the at least one control instruction corresponds to a function of the first application.
Description
Technical Field
The embodiments of this application relate to the field of electronic devices, and in particular to a control function pushing method and device.
Background
With the widespread adoption of voice assistants on terminals, the voice assistant has become a primary way for users to perform convenient function operations on applications. When the voice assistant is turned on, it generally pushes and displays some operation instructions that it can execute, so that the user can conveniently learn which operation instructions the voice assistant supports for performing the corresponding convenient function operations on an application.
For example, when the voice assistant is woken up, the terminal displays a voice control interface. As shown in FIG. 1, the voice control interface may include a list of options containing a plurality of different operation instructions, such as "call XX", "navigate to XX", "send a message to XX", "open XX application", and "today's weather". The user can select an operation instruction from the list to control the terminal. For example, the user may speak the voice instruction "what is the weather today" near the terminal. The received voice instruction triggers the terminal to execute the corresponding convenient function operation, that is, the terminal opens the weather application, queries the day's weather, and displays it to the user.
Disclosure of Invention
The embodiments of this application provide a control function pushing method and device. The terminal can push to the user control instructions corresponding to whichever application is running in the foreground. The control instructions pushed by the terminal are thus more personalized, which improves the user experience.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, an embodiment of this application provides a control function pushing method, which may be applied to a terminal with a voice control function. The method may include the following steps: when a user needs to operate an application on the terminal, such as a first application, the user may open the first application, and the terminal displays an interface of the first application. When the user performs an input to wake up the voice control function, for example a first input, the terminal receives the first input; in response to the first input, the terminal displays at least one control instruction. The at least one control instruction corresponds to the first application, the control instructions corresponding to different applications are different, and each of the at least one control instruction corresponds to a function of the first application.
With this technical solution, when a user operates an application such as the first application, that is, when the terminal has the first application running in the foreground, and the user performs an input to wake up the voice control function, the terminal displays the control instructions corresponding to the displayed first application (the foreground application). By pushing control instructions (operation instructions) that correspond to whichever application is running in the foreground, the control instructions pushed by the terminal are more personalized, which improves the user experience.
In a possible implementation, after the user selects one of the at least one control instruction, for example a first control instruction, the terminal may receive an input of the first control instruction from the user, for example a second input. In response to the second input, the terminal may execute the function of the first application corresponding to the first control instruction. In this way, the user can control the terminal to execute the corresponding function of the first application according to the control instructions pushed and displayed by the terminal, which makes operating the functions of the first application convenient.
In another possible implementation manner, the second input may be a touch operation on the first control instruction. For example, the user may perform a touch operation on the first control instruction on a display screen of the terminal. Therefore, the user can conveniently and quickly select and input the control instruction corresponding to the first application displayed by the terminal.
In another possible implementation, the second input may be voice data corresponding to the first control instruction. In this case, after receiving the first input, the terminal wakes up the voice control function in response to the first input, so that it can receive and respond to voice data spoken by the user. In this way, the user can select a displayed control instruction in a contactless manner by voice, which improves the coherence of controlling the terminal to execute the functions of the first application by voice and improves the user experience.
In another possible implementation, the displaying, by the terminal, of at least one control instruction may specifically include: the terminal displays the at least one control instruction in a preset area of the interface of the first application. The preset area is an area outside the area where the controls in the interface of the first application are located. This prevents the displayed control instructions from blocking the interface of the first application, making it easier for the user to view the interface content.
In another possible implementation, the terminal may display the interfaces of multiple applications in a split-screen manner, for example, displaying the interface of the first application and the interface of a second application at the same time. The display area of the terminal's display screen may include a first area and a second area. The displaying of the interface of the first application may specifically include: the terminal displays the interface of the first application in the first area. The method may further include: the terminal displays the interface of the second application in the second area. The first application is in an operating state and the second application is not. Thus, when the terminal displays multiple applications in a split-screen manner and receives an instruction to wake up the voice control function, the terminal displays only the at least one control instruction corresponding to the first application, which is in the operating state, and need not display control instructions corresponding to the second application, which is not. This avoids displaying control instructions that do not correspond to the application the user is operating in the foreground.
In a second aspect, an embodiment of this application provides an apparatus, which may be applied to a terminal having a voice control function, for implementing the method in the first aspect. The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, for example, a display module, an input module, and a processing module.
The display module is configured to display an interface of the first application; the input module is configured to receive a first input for waking up the voice control function; the display module is further configured to display at least one control instruction in response to the first input. The at least one control instruction corresponds to the first application, the control instructions corresponding to different applications are different, and each of the at least one control instruction corresponds to a function of the first application.
In a possible embodiment, the at least one control instruction comprises a first control instruction. The input module is further configured to receive a second input of the first control instruction. The processing module is configured to execute, in response to the second input, the function of the first application corresponding to the first control instruction.
In one possible embodiment, the second input may be a touch operation to the first control instruction.
In one possible embodiment, the second input may be voice data corresponding to the first control instruction. The processing module is further configured to wake up the voice control function in response to the first input.
In a possible implementation, the display module is specifically configured to display the at least one control instruction in a preset area of the interface of the first application. The preset area is an area outside the area where the controls in the interface of the first application are located.
In a possible embodiment, the at least one control command is predefined.
In one possible implementation, a display area of a display screen of the terminal includes a first area and a second area. The display module is specifically configured to display the interface of the first application in the first area and display the interface of a second application in the second area. The first application is in an operating state and the second application is not.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory for storing instructions executable by the processor. The processor is configured to execute the above instructions to enable the electronic device to implement the control function pushing method according to the first aspect or any one of the possible implementation manners of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having computer program instructions stored thereon. The computer program instructions, when executed by the electronic device, cause the electronic device to implement the control function push method as described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes computer readable code, when the computer readable code is executed in an electronic device, causing the electronic device to implement the control function pushing method according to the first aspect or any one of the possible implementation manners of the first aspect.
It should be appreciated that the description of technical features, solutions, benefits, or similar language in this application does not imply that all of the features and advantages may be realized in any single embodiment. Rather, it is to be understood that the description of a feature or advantage is intended to include the specific features, aspects or advantages in at least one embodiment. Therefore, the descriptions of technical features, technical solutions or advantages in the present specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantages described in the present embodiments may also be combined in any suitable manner. One skilled in the relevant art will recognize that an embodiment may be practiced without one or more of the specific features, aspects, or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic interface diagram of a control function pushing method in the prior art;
fig. 2 is a schematic interface diagram of another control function pushing method in the prior art;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a control function pushing method according to an embodiment of the present application;
fig. 5 is a schematic interface diagram of an application of a control function pushing method according to an embodiment of the present disclosure;
fig. 6 is a schematic interface diagram of another control function pushing method applied in the embodiment of the present application;
fig. 7 is a schematic interface diagram of another control function pushing method applied in the embodiment of the present application;
fig. 8 is a schematic interface diagram of another control function pushing method applied in the embodiment of the present application;
fig. 9 is a schematic interface diagram of another control function pushing method applied in the embodiment of the present application;
fig. 10 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure.
Detailed Description
As terminals continue to evolve, the interaction between the terminal and the user has gradually been enriched beyond contact-based interaction modes such as key pressing and touch. For example, many terminals now offer contactless interaction modes such as a gesture control function and a voice control function. With the gesture control function, a user can control the terminal through gestures performed in front of the terminal's camera. With the voice control function, a user can control the terminal by speaking voice instructions within the pickup range of the terminal's microphone.
The voice control function of the terminal can be implemented by a voice assistant installed in the terminal. The voice assistant is typically in a dormant state, and the user needs to wake it up before using the voice control function. The voice assistant may be woken up by voice, by touch, or by a key. For example, the user may speak a wake-up word preset in the terminal near the terminal, and the terminal activates the voice assistant in response. Alternatively, the user may press a key that has the function of waking up the voice assistant, touch a control with that function on the terminal's screen, or input a predefined wake-up gesture on the screen, and the terminal wakes up the voice assistant in response to the wake-up operation. In this embodiment, the voice control function may be understood as follows: after the voice assistant of the terminal is started, the user inputs an instruction that triggers the terminal to automatically execute the corresponding function.
To help the user learn more conveniently which convenient function operations can be performed by means of the terminal's voice control function (such as a voice assistant), the terminal generally pushes and displays, when the voice control function is opened, operation instructions corresponding to the convenient function operations it can perform. See, for example, the voice control interface of the background art shown in fig. 1.
At present, when the voice control function is turned on, a terminal mainly pushes and displays operation instructions in the following ways:
one way is to register in advance an operation instruction corresponding to a convenient function operation that can be realized by the terminal. When the voice control function (such as a voice assistant) of the terminal is awakened, the terminal displays the pre-registered operation instructions one by one through list options contained in the voice control interface.
It can be seen that, in this way, the operation instructions pushed and displayed by the terminal when the voice control function is turned on are fixed in advance. No matter what application scenario the terminal is in, the operation instructions finally pushed and displayed are always the same.
For example, list options in a voice control interface displayed by the terminal always show "call to XX", "message to XX", "navigate to XX", and the like.
In another way, operation instructions corresponding to the convenient function operations that the terminal can perform may likewise be registered in the terminal in advance. In addition, mapping relationships between related operation instructions are established in advance according to the relevance between the convenient function operations. When the voice control function (e.g., the voice assistant) of the terminal is woken up and responds to an operation instruction input by the user, the terminal may display other operation instructions mapped to the input operation instruction according to that instruction and the mapping relationships.
For example, a mapping relationship between the operation instruction "navigate to XX" and the operation instruction "play XX music" is set in advance. The user speaks the voice instruction "navigate to XX" near the terminal. The terminal receives and responds to the voice instruction, which triggers the terminal to execute the corresponding convenient function operation, and presents the operation instruction associated with it: "play XX music". For example, as shown in fig. 2, the terminal announces "Navigating. During navigation you can say: play XX music".
It can be seen that, in this way, although the terminal can push operation instructions associated with the previous voice instruction, the associations between operation instructions are set by default, and the associated operation instructions do not change for different application scenarios.
The embodiments of this application provide a control function pushing method that can be applied to a terminal with a voice control function. When a user needs to operate an application (referred to as a first application) on the terminal, the user can start the first application, and the terminal displays the interface of the first application on the screen for the user to operate. While displaying the interface of the first application, if the terminal receives an input for waking up the voice control function, for example a first input, the terminal may display the control instructions (i.e., operation instructions) corresponding to the functions of the first application, for example in a displayed voice control interface. In this way, the terminal displays the control instructions that match whichever application is currently opened (or running in the foreground), so the user can more conveniently learn which control instructions the application in use supports and can conveniently operate the application according to the control instructions pushed by the terminal.
In some embodiments, the terminal is a device that has a voice control function, can install applications, and has a display (or display screen). In the embodiments of this application, the terminal may be a touch screen device or a non-touch screen device. For example, the terminal may be an electronic device such as a mobile phone, a tablet computer, a handheld computer, a wearable device (e.g., a smart watch or a smart bracelet), a smart home device (e.g., a smart screen or a smart television), a car machine (e.g., a car computer), a game machine, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or a media player. The specific form of the electronic device is not limited in the embodiments of this application.
For example, please refer to fig. 3, which is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, a button 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identification Module (SIM) card interface 395, and the like. The sensor module 380 may include a pressure sensor 380A, a gyroscope sensor 380B, an air pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device. In other embodiments, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The controller may be a neural center and a command center of the electronic device. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to complete the control of acquiring the instruction and executing the instruction.
In the embodiment of the present application, as an example, a wake-up word (e.g., "small E") for waking up the voice control function may be provided in the electronic device. The DSP may monitor the voice data in real time through the microphone 370C of the electronic device. When the DSP monitors the voice data, the DSP can check the monitored voice data to determine whether the monitored voice data is a wake-up word set in the electronic equipment, and if the monitored voice data passes the check, the DSP wakes up the voice control function.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 310. If the processor 310 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 310, thereby increasing the efficiency of the system.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the electronic device. The charging management module 340 may also supply power to the electronic device through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 and provides power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, and the wireless communication module 360. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in an electronic device may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 350 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the processor 310. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the same device as at least some of the modules of the processor 310.
The wireless communication module 360 may provide solutions for wireless communication applied to the electronic device, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device is coupled to the mobile communication module 350 and antenna 2 is coupled to the wireless communication module 360, so that the electronic device can communicate with the network and other devices through wireless communication technologies. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device implements display functions via the GPU, the display 394, and the application processor, among other things. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 394, N being a positive integer greater than 1.
The electronic device may implement the shooting function through the ISP, camera 393, video codec, GPU, display 394, application processor, etc.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used for performing fourier transform and the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent cognition of electronic equipment, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications of the electronic device and performs data processing by executing the instructions stored in the internal memory 321. The internal memory 321 may include a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function). The data storage area may store data (such as audio data and a phone book) created during use of the electronic device. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310.
The speaker 370A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic device can listen to music through the speaker 370A or listen to a hands-free conversation.
The receiver 370B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device answers a call or voice message, the receiver 370B can be used to answer the voice by being placed close to the ear.
The touch sensor 380K is also referred to as a "touch panel". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 380K is used to detect a touch operation applied thereto or thereabout. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display 394. In other embodiments, the touch sensor 380K can be disposed on a surface of the electronic device at a different location than the display 394.
Keys 390 include a power-on key, a volume key, etc. The keys 390 may be mechanical keys. Or may be touch keys. The electronic device may receive a key input, and generate a key signal input related to user settings and function control of the electronic device.
The motor 391 may generate a vibration cue. The motor 391 may be used for incoming-call vibration prompts as well as touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing and audio playing) may correspond to different vibration feedback effects. Touch operations performed on different areas of the display 394 may also correspond to different vibration feedback effects. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
The control function pushing method provided by the embodiments of this application can be implemented in an electronic device (also referred to as a terminal) having the above hardware structure. In the following, taking as an example that the terminal is a mobile phone and the voice control function is implemented by a voice assistant deployed on the terminal, the control function pushing method is described with reference to fig. 4.
It should be noted that, in this embodiment, the voice assistant may be an embedded application in the terminal (i.e., a system application of the terminal), or may be a downloadable application. An embedded application is an application program provided as part of the implementation of a terminal, such as a cell phone. A downloadable application is an application that may provide its own Internet Protocol Multimedia Subsystem (IMS) connection. The downloadable application may be pre-installed in the terminal or may be a third party application downloaded by the user and installed in the terminal.
Fig. 4 is a flowchart illustrating a control function pushing method according to an embodiment of the present application. As shown in fig. 4, the control function pushing method may include the following S401-S405.
S401, the mobile phone displays an interface of the first application.
The first application may be any application installed in the mobile phone, and may be a system application of the mobile phone, or may be a third-party application, and this embodiment is not specifically limited herein. In the embodiment of the present application, the first application displayed by the mobile phone may also be referred to as a foreground application.
In some embodiments, the first application may have multiple functions, and the handset may implement the functions of the first application according to one or more operations performed by the user.
Take the first application as a video application as an example. As shown in fig. 5, after the mobile phone receives an operation of opening the video application by the user, the mobile phone may display a home page 501 of the video application. The home page 501 may include posters 502 of a plurality of different videos, video names 503 of the respective videos displayed in correspondence with the posters, and the like. The video application may have the function of showing the detail pages of different videos in its home page 501, as well as the function of playing each video. For example, a user can click on a corresponding poster 502 or a video name 503 according to the needs of the user, so that the mobile phone enters a detail page of a corresponding video in the video application. The detailed page of the video can be further provided with a playing control, and a user can click the playing control to realize the function of playing the corresponding video by the video application.
S402, the mobile phone receives a first input for waking up the voice assistant.
In some embodiments, the first input may be first voice data corresponding to a wake-up word pre-registered in the mobile phone and spoken by the user. For example, the wake-up word pre-registered in the mobile phone may be "small E". The user speaks the wake-up word "small E" near the mobile phone (within the pickup range of the mobile phone's microphone). When no other software or hardware of the mobile phone is using the microphone to collect voice data, the DSP of the mobile phone detects the first voice data through the microphone and caches it. If the received voice data is verified to be consistent with the first voice data corresponding to the pre-registered wake-up word, the mobile phone determines that the first input for waking up the voice assistant has been received.
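Purely as an illustration, the verification logic described above could be sketched as follows. Note that the real check runs on the DSP; this Java-level sketch only mirrors the logic, and speechToText is a hypothetical recognizer:

// Hedged sketch mirroring the wake-up word check described above.
// speechToText is a hypothetical recognizer; in practice this check
// is performed by the DSP on the cached voice data.
boolean isWakeWord(byte[] bufferedVoiceData) {
    String recognized = speechToText(bufferedVoiceData);
    return "small E".equalsIgnoreCase(recognized.trim());
}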
Alternatively, in other embodiments, the first input may be a touch gesture operation performed by the user on the touch screen of the mobile phone. For example, a touch gesture for waking up the voice assistant may be predefined, such as swiping up from the bottom-left of the screen. When the touch sensor in the touch screen of the mobile phone detects this operation, the mobile phone determines that the first input for waking up the voice assistant has been received.
In still other embodiments, the mobile phone may be provided with a key having a function of waking up the voice assistant, and the first input may be a key operation performed by the user on the key. For example, the key provided by the mobile phone and having the function of waking up the voice assistant may be a power key, and the first input may be a key operation of pressing the power key of the mobile phone for a long time (e.g., 1 second).
S403, in response to the first input, the mobile phone displays at least one control instruction.
The at least one control instruction corresponds to a first application, and the control instructions corresponding to different applications are different.
Wherein each control instruction in the at least one control instruction corresponds to a function of the first application.
In some embodiments, the control instructions may be stored in the mobile phone. For example, control instructions corresponding to the different functions of the first application are configured in advance in an application file of the first application.
After receiving the first input, the mobile phone may first query which application corresponds to the currently displayed interface, such as the first application (i.e., the foreground application), and then obtain and display the control instructions corresponding to the first application by reading the application file of the first application.
In some embodiments, the mobile phone may display the control instructions in a preset area of the interface of the first application, where the preset area may be an area outside the area where the controls in the interface of the first application are located. For example, in conjunction with fig. 6, the mobile phone floats the control instructions in a bottom area 602 of the interface 601 of the first application.
Or, in other embodiments, the mobile phone may further display the control instruction in another area on the interface of the first application to avoid blocking normal display of the control in the interface of the first application, which is not limited herein.
With reference to fig. 6, continuing with the first application being a video application and the first input being first voice data (e.g., "small E"): as shown in fig. 6 (a), the mobile phone currently displays an interface 601 of the first application. The user speaks "small E" near the mobile phone, and after receiving the first voice data, the mobile phone can obtain the application file of the video application, which includes at least one control instruction corresponding to the video application. For example, the at least one control instruction may include: "play first video", "share first video", "play second video", "share second video", "play third video", "share third video", and so on. These control instructions respectively correspond to the video application playing and sharing the first, second, and third videos. Then, as shown in fig. 6 (b), the mobile phone may display the at least one control instruction in a blank area of the interface 601 of the first application, such as the bottom area 602.
In this example, the at least one control instruction may be pre-configured within an application file of the video application by means of configuration parameters. For example, when developing the video application, a developer may configure the control instructions in the application manifest file (AndroidManifest.xml) of the video application according to the convenient operation functions that the video application can support.
For example, the configuration parameters configured in the application manifest file may be:
<meta-data android:name="chips" android:value="@string/meta_application"></meta-data>
The configuration parameter may be added as an additional data item under the <application> component in the application manifest file. The value of android:value, "@string/meta_application", is the data item value, that is, the specific content of the control instruction. The value of android:name, "chips", is the data item name used to identify the data item value. For example, the configuration parameter may be:
<meta-data android:name="chips" android:value="play first video"></meta-data>
That is, one control instruction of the video application is "play first video".
It should be noted that the above configuration of the control instruction is only an example, and other control instructions in the at least one control instruction are similar to the above configuration and are not described again here.
In this example, after the mobile phone receives the voice data "small E", it may obtain the configured control instructions from the application manifest file of the video application and display them.
In a possible implementation manner, the code for the mobile phone to obtain the configured control instruction in the application manifest file of the video application may be as follows:
getPackageManager().getApplicationInfo(getPackageName(), PackageManager.GET_META_DATA)
the code indicates obtaining application information for the video application, the application information including an application package name and META _ DATA metadata. The META _ DATA metadata is a configuration parameter of the control command configured in the application manifest file of the video application.
Because the control instructions are predefined within the first application, they can be updated whenever the first application is updated. Moreover, when the first application is a third-party application, the mobile phone can still obtain the control instructions defined by the first application, so that the third-party application can easily adapt to the mobile phone's voice assistant, and the voice assistant can support and recommend the convenient function operations of the third-party application.
After the mobile phone displays the at least one control instruction, the user can select one of them. For example, if the control instruction selected by the user is the first control instruction, this embodiment may further include the following S404-S405.
S404, the mobile phone receives a second input of the first control instruction.
In some embodiments, the second input may be second voice data corresponding to the first control instruction, spoken by the user near the mobile phone (or within the pickup range of the mobile phone's microphone).
In other embodiments, the second input may be a click operation of the user on a function control corresponding to the first control instruction displayed by the mobile phone.
When the second input is second voice data, the mobile phone may wake up the voice assistant after receiving the first input, so that the voice assistant can recognize and respond to the second voice data when the user subsequently inputs the first control instruction by voice. Alternatively, if the second input is another type of input such as a touch operation, the mobile phone may likewise wake up the voice assistant directly after receiving the first input; this is not limited here.
For example, the second input is second voice data corresponding to the first control instruction and spoken by the user near the mobile phone. If the control command displayed by the mobile phone includes "play the first video", "share the first video", "play the second video", "share the second video", "play the third video", "share the third video", and the like, the second voice data may be "play the first video", "share the second video", and the like.
For another example, the second input is a click operation by the user on the function control corresponding to the first control instruction displayed on the mobile phone. The function controls corresponding to the respective control instructions may be preconfigured. The user can touch the function control corresponding to the first control instruction on the touch screen to provide the second input of the first control instruction. The mobile phone may also be provided with a selection key having a selection function; the user can select and confirm the function control corresponding to the first control instruction through the selection key to provide the second input. A minimal sketch of such a tappable function control is given below.
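As an illustration only (an assumed implementation, not one mandated by this application), each displayed control instruction could be rendered as a tappable view whose click listener forwards the instruction text to the dispatch step of S405. Here "bottomArea" and "dispatchInstruction" are hypothetical names:

import android.content.Context;
import android.view.ViewGroup;
import android.widget.TextView;

// Hedged sketch: render one control instruction as a tappable control in
// the preset display area of the first application's interface.
void showInstructionChip(Context context, ViewGroup bottomArea, String instruction) {
    TextView chip = new TextView(context);
    chip.setText(instruction); // e.g. "play first video"
    // dispatchInstruction is a hypothetical helper that performs S405 below
    chip.setOnClickListener(v -> dispatchInstruction(instruction));
    bottomArea.addView(chip);
}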
S405, in response to the second input, the mobile phone executes the function of the first application corresponding to the first control instruction.
For example, after the mobile phone receives the second input, the mobile phone may transmit the first control instruction to the first application, so that the first application executes the corresponding function according to the received first control instruction.
Take the first application as a video application, the first control command as "play the first video", and the second input as a touch operation to the first control command as an example. The function of the first application may be that the mobile phone enters the first video detail page of the first application and plays the first video.
Referring to fig. 7 and 8, the user performs a touch operation on a first control instruction 701, such as "play a first video", of at least one control instruction displayed by the mobile phone (as shown in fig. 7). After receiving the touch operation, the mobile phone executes a function of a video application corresponding to "play the first video", that is, the mobile phone opens the detail interface 801 of the first video and plays the first video (as shown in fig. 8).
The mobile phone may transmit a first control instruction to the video application, such as "play the first video", so that the video application executes a corresponding function according to the received first control instruction.
For example, delivering the first control instruction, such as "play the first video", to the video application may be implemented as follows:
Intent intent = new Intent(action);
intent.setPackage(packageName);
intent.putExtra("chips", "play first video");
This sets the target application package name and sends the metadata containing the first control instruction (namely "play first video") to the video application corresponding to that package name.
The metadata containing the first control instruction may be sent to the first application in the form of a deep link; accordingly, the first application is configured with a deep-link response function, so that the first application executes the corresponding function according to the received metadata.
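On the receiving side, a sketch of how the first application might consume the delivered instruction is shown below. This is an assumed implementation, and playFirstVideo is a hypothetical helper; the "chips" extra matches the Intent shown above:

import android.content.Intent;

// Hedged sketch: inside an Activity of the video application, consume the
// control instruction delivered in the "chips" extra.
@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    String instruction = intent.getStringExtra("chips");
    if ("play first video".equals(instruction)) {
        playFirstVideo(); // hypothetical helper: open the detail interface 801 and start playback
    }
}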
Optionally, in this embodiment of the application, when the mobile phone executes the function of the first application corresponding to the first control instruction, the mobile phone may stop displaying the control instructions, so as not to interfere with the user's normal use of that function.
In some embodiments, the mobile phone may also display the interfaces of multiple applications simultaneously, that is, display the interfaces of multiple applications in a split-screen manner. After receiving the first input, the mobile phone may first determine which of the applications is in the operating state (i.e., the application the user is operating), and then perform S403-S405 for that application; for example, in response to the first input, the mobile phone displays the at least one control instruction corresponding to the application in the operating state.
In some optional embodiments, the mobile phone may determine the application in the operating state by querying the main-process state of each application. Among the multiple simultaneously displayed applications, the one whose main process is in the running state is the application in the operating state.
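A rough sketch of one possible realization of this query on Android follows. This is an assumption for illustration only: it uses process importance as a proxy for the operating state, and on recent Android versions such a query may require system privileges:

import android.app.ActivityManager;
import android.content.Context;
import java.util.List;

// Hedged sketch: among the split-screen packages, pick the one whose main
// process currently has foreground importance as the "operating" application.
String findOperatingPackage(Context context, List<String> splitScreenPackages) {
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    for (ActivityManager.RunningAppProcessInfo p : am.getRunningAppProcesses()) {
        if (p.importance == ActivityManager.RunningAppProcessInfo.IMPORTANCE_FOREGROUND) {
            for (String pkg : p.pkgList) {
                if (splitScreenPackages.contains(pkg)) {
                    return pkg;
                }
            }
        }
    }
    return null;
}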
For example, as shown in fig. 9, the display area of the display screen of the mobile phone may include a first area 901 and a second area 902. The mobile phone can display the interface of a first application, such as the home interface of a video application, in the first area 901, and display the interface of a second application, such as the chat interface of a chat application, in the second area 902, so that the two interfaces are displayed simultaneously. Here the user is operating the video application, that is, the video application is in the operating state while the chat application is not. With this method, when the user operates an application in the foreground, the terminal displays the interface of the application being operated; if the terminal then receives the input that wakes up the voice control function, it displays the control instructions (that is, operation instructions) corresponding to the functions of that application. In this way, the terminal can dynamically push the operation instructions of the voice control function according to the application the user is using, improving the user's experience with the voice control function.
Corresponding to the method in the foregoing embodiments, an apparatus is also provided in the embodiments of the present application. The apparatus can be applied to a terminal with a voice control function to implement the method in the foregoing embodiments. The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above functions. For example, fig. 10 shows a schematic structural diagram of the apparatus, which includes: a display module 1001, an input module 1002, a processing module 1003, and the like.
The display module 1001 is configured to display an interface of a first application; the input module 1002 is configured to receive a first input for waking up the voice control function; and the display module 1001 is further configured to display at least one control instruction in response to the first input. The at least one control instruction corresponds to the first application, the control instructions corresponding to different applications are different, and each control instruction in the at least one control instruction corresponds to one function of the first application.
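Purely as an illustrative sketch of this per-application correspondence (assuming java.util collections; the package names and instruction strings are hypothetical, and operatedPackage is the package name of the application in the operating state, e.g. as obtained in the earlier sketch), the predefined control instructions could be kept in a table keyed by application package name:
Map<String, List<String>> instructionTable = new HashMap<>();
instructionTable.put("com.example.video", Arrays.asList("play the first video", "open my favorites"));
instructionTable.put("com.example.chat", Arrays.asList("send a message", "open the latest chat"));
// on the first input, look up the instructions of the application in the operating state
List<String> toDisplay = instructionTable.get(operatedPackage);
Keying the table by package name keeps the instruction sets of different applications distinct, which matches the requirement that control instructions corresponding to different applications are different.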
In a possible embodiment, the at least one control instruction comprises a first control instruction. The input module 1002 is further configured to receive a second input of the first control instruction, and the processing module 1003 is configured to, in response to the second input, execute the function of the first application corresponding to the first control instruction.
In one possible embodiment, the second input may be a touch operation on the first control instruction.
In one possible embodiment, the second input may be voice data corresponding to the first control instruction. In this case, the processing module 1003 is further configured to wake up the voice control function in response to the first input.
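A minimal sketch of matching such voice data to a displayed instruction (the speech-recognizer integration is assumed and not shown; only the matching step is illustrated, with java.util.List imported):
static String matchInstruction(String recognizedText, List<String> displayed) {
    for (String instruction : displayed) {
        if (instruction.equalsIgnoreCase(recognizedText.trim())) {
            return instruction; // the second input corresponds to this control instruction
        }
    }
    return null; // no displayed instruction matched the utterance
}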
In a possible implementation manner, the display module 1001 is specifically configured to display the at least one control instruction in a preset area of the interface of the first application, where the preset area is an area outside the area in which the controls in the interface of the first application are located.
In a possible embodiment, the at least one control instruction is predefined.
In one possible implementation, the display area of the display screen of the terminal includes a first area and a second area. The display module 1001 is specifically configured to display the interface of the first application in the first area and to display an interface of a second application in the second area, where the first application is in the operating state and the second application is not.
It should be understood that the division into units or modules (hereinafter referred to as units) in the above apparatus is only a division of logical functions; in actual implementation, they may be wholly or partially integrated into one physical entity or physically separated. The units in the apparatus may all be implemented in the form of software invoked by a processing element, or all in hardware, or some units in the form of software invoked by a processing element and the others in hardware.
For example, each unit may be a separately arranged processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory in the form of a program whose function is invoked and executed by a processing element of the apparatus. In addition, all or some of the units may be integrated together or implemented independently. The processing element described here, which may also be referred to as a processor, may be an integrated circuit with signal processing capability. In implementation, the steps of the above method or the above units may be implemented by integrated logic circuits of hardware in the processor element, or in the form of software invoked by the processing element.
In one example, the units in the above apparatus may be one or more integrated circuits configured to implement the above method, such as: one or more ASICs, or one or more DSPs, or one or more FPGAs, or a combination of at least two of these integrated circuit forms.
As another example, when a unit in the apparatus is implemented in the form of a program scheduled by a processing element, the processing element may be a general-purpose processor, such as a CPU or another processor capable of invoking a program. As yet another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In one implementation, the units by which the above apparatus implements the corresponding steps of the above method may be implemented in the form of a program scheduled by a processing element. For example, the apparatus may include a processing element and a storage element, the processing element calling a program stored in the storage element to perform the method described in the foregoing method embodiments. The storage element may be on the same chip as the processing element, that is, an on-chip storage element.
In another implementation, the program for performing the above method may be in a storage element on a different chip from the processing element, that is, an off-chip storage element. In this case, the processing element loads the program from the off-chip storage element onto the on-chip storage element, and then calls and executes the method described in the foregoing method embodiments.
For example, the embodiments of the present application may also provide an apparatus, such as an electronic device, which may include a processor and a memory for storing instructions executable by the processor. The processor is configured to execute the above instructions, so that the electronic device implements the control function pushing method of the foregoing embodiments. For example, the electronic device may be the terminal with the voice control function described in the foregoing embodiments. The memory may be located inside or outside the electronic device, and there may be one or more processors.
In another implementation, the unit of the apparatus for implementing the steps of the above method may be configured as one or more processing elements, which may be disposed on the terminal with the voice control function, where the processing element may be an integrated circuit, for example: one or more ASICs, or one or more DSPs, or one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits may be integrated together to form a chip.
For example, the embodiment of the present application further provides a chip, and the chip may be applied to the terminal with the voice control function. The chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the processor receives and executes computer instructions from the memory of the electronic device through the interface circuitry to implement the methods described in the method embodiments above.
Embodiments of the present application further provide a computer program product, which includes computer instructions executed by an electronic device, such as the above terminal (e.g., a mobile phone).
From the above description of the embodiments, it will be clear to those skilled in the art that, for convenience and brevity of description, the foregoing division into functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, such as a program. The software product is stored in a program product, such as a computer-readable storage medium, and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
For example, embodiments of the present application may also provide a computer-readable storage medium having stored thereon computer program instructions. The computer program instructions, when executed by the electronic device, cause the electronic device to implement the control function push method as described in the aforementioned method embodiments.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A control function pushing method, applied to a terminal with a voice control function, the method comprising:
the terminal displays an interface of a first application;
the terminal receives a first input for awakening the voice control function;
in response to the first input, the terminal displays at least one control instruction, wherein the at least one control instruction corresponds to the first application, the control instructions corresponding to different applications are different, and each control instruction in the at least one control instruction corresponds to one function of the first application.
2. The method of claim 1, wherein the at least one control instruction comprises a first control instruction;
after the terminal displays the at least one control instruction, the method further comprises:
the terminal receives a second input of the first control instruction;
and in response to the second input, the terminal executes the function of the first application corresponding to the first control instruction.
3. The method of claim 2, wherein the second input is a touch operation on the first control instruction.
4. The method of claim 2, wherein the second input is voice data corresponding to the first control instruction;
after the terminal receives the first input for awakening the voice control function, the method further comprises:
and in response to the first input, the terminal wakes up the voice control function.
5. The method of any of claims 1 to 4, wherein the terminal displaying at least one control instruction comprises:
the terminal displays the at least one control instruction in a preset area of the interface of the first application, wherein the preset area is an area outside an area where a control in the interface of the first application is located.
6. The method of any of claims 1 to 5, wherein the at least one control instruction is predefined.
7. The method of any of claims 1 to 6, wherein the display area of the display screen of the terminal comprises a first area and a second area;
wherein the terminal displaying an interface of a first application comprises: the terminal displays the interface of the first application in the first area;
and the method further comprises: the terminal displays an interface of a second application in the second area;
wherein the first application is in an operating state and the second application is not in an operating state.
8. An electronic device, comprising: a processor and a memory for storing instructions executable by the processor;
the processor is configured to, when executing the instructions, cause the electronic device to implement the method of any of claims 1-7.
9. A computer-readable storage medium having computer program instructions stored thereon, wherein
the computer program instructions, when executed by an electronic device, cause the electronic device to implement the method of any of claims 1-7.
10. A computer program product comprising computer readable code which, when run in an electronic device, causes the electronic device to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011057612.6A CN114327198A (en) | 2020-09-29 | 2020-09-29 | Control function pushing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114327198A true CN114327198A (en) | 2022-04-12 |
Family
ID=81011652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011057612.6A Pending CN114327198A (en) | 2020-09-29 | 2020-09-29 | Control function pushing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114327198A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023246894A1 (en) * | 2022-06-25 | 2023-12-28 | 华为技术有限公司 | Voice interaction method and related apparatus |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102929502A (en) * | 2012-10-15 | 2013-02-13 | 上海华勤通讯技术有限公司 | Mobile terminal and operation method for application programs of mobile terminal |
US20170185263A1 (en) * | 2014-06-17 | 2017-06-29 | Zte Corporation | Vehicular application control method and apparatus for mobile terminal, and terminal |
CN105975201A (en) * | 2016-04-27 | 2016-09-28 | 努比亚技术有限公司 | Mobile terminal and split screen processing method therefor |
CN109584879A (en) * | 2018-11-23 | 2019-04-05 | 华为技术有限公司 | A kind of sound control method and electronic equipment |
CN110377220A (en) * | 2019-07-18 | 2019-10-25 | Oppo(重庆)智能科技有限公司 | A kind of instruction response method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||